CN111210499B - Model rendering method and device

Model rendering method and device

Info

Publication number
CN111210499B
Authority
CN
China
Prior art keywords
model
ray
detected
rendered
models
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010037499.9A
Other languages
Chinese (zh)
Other versions
CN111210499A (en)
Inventor
周海
罗育林
张峰祥
田松林
周子强
陈立
李锐
刘兆平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Southern Power Grid Digital Platform Technology Guangdong Co ltd
Original Assignee
China Southern Power Grid Digital Platform Technology Guangdong Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Southern Power Grid Digital Platform Technology Guangdong Co ltd
Priority to CN202010037499.9A
Publication of CN111210499A
Application granted
Publication of CN111210499B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a model rendering method and apparatus, wherein the method comprises the following steps: receiving a manual marking operation, and determining models to be detected from the models included in a virtual reality scene based on the manual marking operation; and determining models to be rendered from the models to be detected through ray detection, and rendering the models to be rendered. The method and apparatus reduce unnecessary rendering without affecting the display effect, improve rendering efficiency, and save system resources.

Description

Model rendering method and device
Technical Field
The present invention relates to the field of virtual reality technologies, and in particular, to a model rendering method and apparatus.
Background
Virtual reality refers to a high-tech means, with computer technology at its core, of generating an integrated virtual environment that provides realistic visual, auditory, tactile and other sensations. Through a display terminal, the user can also interact with objects in the virtual reality. To realize virtual reality, the virtual reality scene needs to be described digitally: three-dimensional models of the scene are established and then rendered. However, when there are many three-dimensional models to be rendered, considerable time and system resources are required.
Disclosure of Invention
The main object of the present invention is to provide a method that solves the technical problem in the prior art that rendering a large number of three-dimensional models consumes a large amount of time and system resources.
To achieve the above object, the present invention provides a model rendering method, the method comprising:
receiving a manual marking operation, and determining models to be detected from the models included in a virtual reality scene based on the manual marking operation;
and determining models to be rendered from the models to be detected through ray detection, and rendering the models to be rendered.
Optionally, the determining, through ray detection, the models to be rendered from the models to be detected includes:
controlling a camera to emit one ray toward each pixel on the screen where the virtual reality scene is displayed, and taking the models in the models to be detected that are hit by rays as the models to be rendered.
Optionally, the taking the models in the models to be detected that are hit by rays as the models to be rendered includes:
detecting whether each model in the models to be detected has an intersection point with at least one ray;
and taking a model with an intersection point with at least one ray in the models to be detected as a model to be rendered.
Optionally, the detecting whether each of the models to be detected has an intersection with at least one light ray includes:
detecting whether each face of each model in the models to be detected has an intersection point with at least one ray;
and taking a model in which at least one face has an intersection point with at least one ray as a model having an intersection point with at least one ray.
Optionally, the detecting whether each face of each model in the models to be detected has an intersection with at least one light ray includes:
combining the parametric equation of any ray with the plane equation of any face, wherein the parametric equation of any ray is P = P0 + ut and the plane equation of any face is n·(a − P) = 0, which together give t = n·(a − P0)/(n·u); when t is greater than or equal to zero, the face and the ray have an intersection point; wherein P0 is the origin coordinate of the ray, u is the direction vector of the ray, different values of t give different points along the ray, n is the normal vector of the face, P is the coordinate of the intersection point of the ray and the face, and a is the coordinate of another point in the face.
In addition, to achieve the above object, the present invention also provides a model rendering apparatus, including:
the marking module is used for receiving a manual marking operation, and determining models to be detected from the models included in the virtual reality scene based on the manual marking operation;
and the rendering module is used for determining models to be rendered from the models to be detected through ray detection, and rendering the models to be rendered.
Optionally, the rendering module is configured to:
controlling a camera to emit one ray toward each pixel on the screen where the virtual reality scene is displayed, and taking the models in the models to be detected that are hit by rays as the models to be rendered.
Optionally, the rendering module is configured to:
detecting whether each model in the models to be detected has an intersection point with at least one ray;
and taking a model with an intersection point with at least one ray in the models to be detected as a model to be rendered.
Optionally, the rendering module is configured to:
detecting whether each face of each model in the models to be detected has an intersection point with at least one ray;
and taking a model in which at least one face has an intersection point with at least one ray as a model having an intersection point with at least one ray.
Optionally, the rendering module is configured to:
combining the parametric equation of any ray with the plane equation of any face, wherein the parametric equation of any ray is P = P0 + ut and the plane equation of any face is n·(a − P) = 0, which together give t = n·(a − P0)/(n·u); when t is greater than or equal to zero, the face and the ray have an intersection point; wherein P0 is the origin coordinate of the ray, u is the direction vector of the ray, different values of t give different points along the ray, n is the normal vector of the face, P is the coordinate of the intersection point of the ray and the face, and a is the coordinate of another point in the face.
In the method, a manual marking operation is received, and models to be detected are determined from the models included in a virtual reality scene based on the manual marking operation; models to be rendered are determined from the models to be detected through ray detection and are rendered. By the method and the apparatus, unnecessary rendering is reduced, the display effect is not affected, rendering efficiency is improved, and system resources are saved.
Drawings
FIG. 1 is a flow chart of an embodiment of a model rendering method according to the present invention;
FIG. 2 is a schematic diagram of an effect of rendering a virtual reality scene based on the prior art;
FIG. 3 is a schematic view of an effect of rendering a virtual reality scene based on the model rendering method of the present invention;
FIG. 4 is a schematic view of rendering a virtual reality scene based on the prior art;
FIG. 5 is a schematic view of rendering a virtual reality scene based on the model rendering method of the present invention;
FIG. 6 is a schematic diagram illustrating functional blocks of an embodiment of a model rendering apparatus according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating an embodiment of a model rendering method according to the present invention. In one embodiment, a model rendering method includes:
step S10, receiving a manual marking operation, and determining models to be detected from the models included in a virtual reality scene based on the manual marking operation;
in this embodiment, a virtual reality scene generally includes a plurality of three-dimensional models, some of which are relatively large and are not occluded by other objects, so default to this type of three-dimensional model is to be rendered. While other three-dimensional models may be blocked by other objects due to their relatively small size, in this case, even if the blocked three-dimensional model is rendered, it cannot be seen by human eyes, and therefore, it is unnecessary to render the blocked three-dimensional model. Therefore, a three-dimensional model which is not an occlusion object needs to be marked by a manual marking mode to carry out further judgment. Namely, marking the three-dimensional model which is not a shielding object through manual marking operation, and taking the marked three-dimensional model as a model to be detected.
step S20, determining models to be rendered from the models to be detected through ray detection, and rendering the models to be rendered.
In this embodiment, ray detection is used to detect whether each model included in the models to be detected is a visible model; if one or more models are visible, they are taken as the models to be rendered and are rendered. Of course, the large three-dimensional models that were not marked in step S10 and are not occluded by other objects also need to be rendered; such larger models are left unmarked and are rendered by default.
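Combining steps S10 and S20, the overall selection logic can be sketched as follows; select_and_render, is_hit_by_any_ray and render are placeholder names invented for this illustration, assuming the Model flag from the previous sketch.

```python
def select_and_render(scene_models, rays, is_hit_by_any_ray, render):
    """Sketch of the overall flow: unmarked (large, unoccluded) models are
    rendered by default; marked models are rendered only when ray detection
    finds them hit by at least one ray."""
    for model in scene_models:
        if not model.marked:
            render(model)  # step S10: unmarked large model, render by default
        elif is_hit_by_any_ray(model, rays):
            render(model)  # step S20: marked model found visible by ray detection
```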
Further, in an embodiment, step S20 includes:
controlling a camera to emit one ray toward each pixel on the screen where the virtual reality scene is displayed, and taking the models in the models to be detected that are hit by rays as the models to be rendered.
In this embodiment, the virtual reality scene is displayed on a screen. If the screen has M pixels, the camera is controlled to emit one ray toward each pixel, i.e., M rays are emitted in total. Assuming the models to be detected include 15 three-dimensional models, if 10 of them are detected to be hit by at least one ray each, those 10 three-dimensional models are taken as the models to be rendered.
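A minimal sketch of the one-ray-per-pixel setup, assuming a simple pinhole camera looking down the −z axis; the function name and camera conventions are illustrative assumptions, not details given in the patent.

```python
import numpy as np

def pixel_rays(eye, width, height, fov_deg=60.0):
    """Yield one ray (origin P0, unit direction u) through each screen pixel,
    assuming a pinhole camera with its image plane at z = -1."""
    aspect = width / height
    half_h = np.tan(np.radians(fov_deg) / 2.0)
    half_w = half_h * aspect
    for py in range(height):
        for px in range(width):
            # Map the pixel centre to image-plane coordinates.
            x = (2.0 * (px + 0.5) / width - 1.0) * half_w
            y = (1.0 - 2.0 * (py + 0.5) / height) * half_h
            u = np.array([x, y, -1.0])
            yield eye, u / np.linalg.norm(u)

# M = width * height rays in total, one per pixel.
rays = list(pixel_rays(eye=np.zeros(3), width=4, height=3))
print(len(rays))  # 12
```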
Further, in an embodiment, taking the models in the models to be detected that are hit by rays as the models to be rendered includes:
detecting whether each model in the models to be detected has an intersection point with at least one ray; and taking a model with an intersection point with at least one ray in the models to be detected as a model to be rendered.
In this embodiment, whether each model is hit by at least one ray is determined by detecting whether the model has an intersection point with at least one ray. If a model has an intersection point with at least one ray, the model is determined to be hit by at least one ray and is taken as a model to be rendered.
Further, in an embodiment, the detecting whether each of the models to be detected has an intersection with at least one ray includes:
detecting whether each face of each model in the models to be detected has an intersection point with at least one ray; and taking a model in which at least one face has an intersection point with at least one ray as a model having an intersection point with at least one ray.
In this embodiment, since each model is a three-dimensional model with multiple faces, it is necessary to detect, for each model, whether each of its faces has an intersection point with at least one ray; if at least one of the faces of a model has an intersection point with at least one ray, the model is determined to have an intersection point with at least one ray. For example, suppose the models to be detected include 15 three-dimensional models, three-dimensional model 1 through three-dimensional model 15, and three-dimensional model 1 includes 5 faces, face 1 through face 5. It is then detected whether each of face 1 through face 5 has an intersection point with at least one ray. If at least one of faces 1 through 5 has an intersection point with at least one ray, three-dimensional model 1 is considered to have an intersection point with at least one ray; that is, three-dimensional model 1 is hit by at least one ray and is a model to be rendered. If three-dimensional model 2 includes 6 faces, face 1 through face 6, and none of them is detected to have an intersection point with any ray, three-dimensional model 2 is considered not to be hit by any ray; it is an invisible model and need not be determined as a model to be rendered. Proceeding in the same way, whether each model included in the models to be detected is a model to be rendered can be determined.
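The per-model, per-face check described above amounts to an early-exit double loop, sketched below; ray_hits_face is a placeholder for the ray-plane test derived from the equations that follow, and the data layout is an assumption of this sketch.

```python
def model_is_visible(faces, rays, ray_hits_face):
    """A model counts as visible as soon as any one of its faces has an
    intersection point with any one of the M rays."""
    return any(ray_hits_face(ray, face) for face in faces for ray in rays)

def models_to_render(models, rays, ray_hits_face):
    # Keep only the models hit by at least one ray.
    return [m for m in models if model_is_visible(m["faces"], rays, ray_hits_face)]

# With an always-miss predicate, no model is selected for rendering.
print(models_to_render([{"faces": ["face 1", "face 2"]}], rays=[None],
                       ray_hits_face=lambda ray, face: False))  # []
```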
Further, in an embodiment, detecting whether each face of each of the models to be detected has an intersection with at least one ray includes:
combining the parametric equation of any ray with the plane equation of any face, wherein the parametric equation of any ray is P = P0 + ut and the plane equation of any face is n·(a − P) = 0, which together give t = n·(a − P0)/(n·u); when t is greater than or equal to zero, the face and the ray have an intersection point; wherein P0 is the origin coordinate of the ray, u is the direction vector of the ray, different values of t give different points along the ray, n is the normal vector of the face, P is the coordinate of the intersection point of the ray and the face, and a is the coordinate of another point in the face.
In this embodiment, take one three-dimensional model of the models to be detected as an example, and suppose it is a hexahedron, so that it includes 6 faces, denoted face 1 through face 6. If the screen has M pixels, there are M rays in total. Take detecting whether face 1 has an intersection point with at least one ray as an example, and suppose face 1 has an intersection point with some ray: the parametric equation of the ray is P = P0 + ut, and the plane equation of face 1 is n·(a − P) = 0, where P0 is the origin coordinate of the ray, u is the direction vector of the ray, different values of t give different points along the ray, n is the normal vector of face 1, P is the coordinate of the intersection point of face 1 and the ray, and a is the coordinate of another point in face 1. Combining the parametric equation of the ray with the plane equation of face 1 gives t = n·(a − P0)/(n·u); substituting the specific values of n, a, P0 and u yields the value of t. Each ray has its own P0 and u, and the P0 and u of ray 1 through ray M are substituted in turn. When the substituted P0 and u correspond to some ray x and the resulting t is greater than or equal to zero, face 1 has an intersection point with ray x; that is, at least one face of the three-dimensional model has an intersection point with at least one ray, so the three-dimensional model is determined to be hit by at least one ray and is taken as a model to be rendered. If, after substituting the P0 and u of rays 1 through M, all M computed values of t are less than zero, face 1 has no intersection point with any of the M rays, and whether face 2 has an intersection point with at least one ray is then detected in the same manner. If it is detected that none of faces 1 through 6 has an intersection point with any of the M rays, the three-dimensional model is not hit by any ray and is considered a model that does not need to be rendered.
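A sketch of this ray-plane test, assuming NumPy vectors for P0, u, n and a. It checks only t >= 0, as the text above does, and adds a guard for rays parallel to the plane (a detail the text leaves open); a complete per-face test would additionally verify that the intersection point lies within the bounded face.

```python
import numpy as np

def ray_face_intersection(p0, u, n, a, eps=1e-9):
    """Solve n . (a - (p0 + u*t)) = 0 for t, i.e. t = n . (a - p0) / (n . u);
    return the intersection point P = p0 + u*t if t >= 0, else None."""
    denom = np.dot(n, u)
    if abs(denom) < eps:   # ray parallel to the plane: no single intersection
        return None
    t = np.dot(n, a - p0) / denom
    if t < 0:              # the plane lies behind the ray's origin
        return None
    return p0 + u * t

# A ray from the origin along +z against the plane z = 5 (normal +z):
hit = ray_face_intersection(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                            n=np.array([0.0, 0.0, 1.0]),
                            a=np.array([0.0, 0.0, 5.0]))
print(hit)  # [0. 0. 5.]
```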
Referring to fig. 2 and fig. 3, fig. 2 is a schematic diagram of the effect of rendering a virtual reality scene based on the prior art, and fig. 3 is a schematic diagram of the effect of rendering a virtual reality scene based on the model rendering method of the present invention. Comparing the two, far more three-dimensional models are rendered in fig. 2 than in fig. 3: in fig. 3, some invisible models are deliberately not rendered, so unnecessary rendering is reduced, the display effect is unaffected, rendering efficiency is improved, and system resources are saved.
Referring to fig. 4, fig. 4 is a schematic view of a scene rendered based on the prior art. As shown in fig. 4, in overdraw mode the occlusion of objects in the scene is inspected; the high brightness of the area marked by the rectangular frame indicates that objects in that area overlap severely, that is, many objects that are not hit by any ray are being drawn. The status information of the scene is checked quantitatively through the status information panel: the batch count is 296 (the higher this value, the more the application stutters), the triangle count is about 420,000, and the vertex count is about 620,000.
Referring to fig. 5, fig. 5 is a schematic view of a scene rendered based on the model rendering method of the present invention. As shown in fig. 5, in overdraw mode the brightness of the area marked by the rectangular frame is significantly reduced, indicating that the object overlap in this area has been eliminated; that is, objects not hit by any ray are no longer rendered. The status information of the scene is checked quantitatively through the status information panel: the batch count is reduced to 62, the triangle count to about 150,000, and the vertex count to about 210,000. With the model rendering method provided by the invention, the batch, triangle and vertex counts are all reduced; the lower these values, the more smoothly the application runs, and conversely the more it stutters.
In this embodiment, a manual marking operation is received, and models to be detected are determined from the models included in a virtual reality scene based on the manual marking operation; models to be rendered are determined from the models to be detected through ray detection and are rendered. This embodiment reduces unnecessary rendering without affecting the display effect, improves rendering efficiency, and saves system resources.
Referring to fig. 6, fig. 6 is a schematic functional block diagram of an embodiment of a model rendering apparatus according to the present invention. In an embodiment, a model rendering apparatus includes:
the marking module 10 is configured to receive a manual marking operation, and determine models to be detected from the models included in the virtual reality scene based on the manual marking operation;
and the rendering module 20 is configured to determine models to be rendered from the models to be detected through ray detection, and render the models to be rendered.
Further, in an embodiment, the rendering module 20 is configured to:
controlling a camera to emit one ray toward each pixel on the screen where the virtual reality scene is displayed, and taking the models in the models to be detected that are hit by rays as the models to be rendered.
Further, in an embodiment, the rendering module 20 is configured to:
detecting whether each model in the models to be detected has an intersection point with at least one ray;
and taking a model with an intersection point with at least one ray in the models to be detected as a model to be rendered.
Further, in an embodiment, the rendering module 20 is configured to:
detecting whether each face of each model in the models to be detected has an intersection point with at least one ray;
and taking a model in which at least one face has an intersection point with at least one ray as a model having an intersection point with at least one ray.
Further, in an embodiment, the rendering module 20 is configured to:
combining the parametric equation of any ray with the plane equation of any face, wherein the parametric equation of any ray is P = P0 + ut and the plane equation of any face is n·(a − P) = 0, which together give t = n·(a − P0)/(n·u); when t is greater than or equal to zero, the face and the ray have an intersection point; wherein P0 is the origin coordinate of the ray, u is the direction vector of the ray, different values of t give different points along the ray, n is the normal vector of the face, P is the coordinate of the intersection point of the ray and the face, and a is the coordinate of another point in the face.
The specific embodiment of the model rendering device is basically the same as each embodiment of the model rendering method, and is not described herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (10)

1. A method of model rendering, the method comprising:
step S10, receiving a manual marking operation, and determining models to be detected from the models included in a virtual reality scene based on the manual marking operation;
wherein a virtual reality scene includes a plurality of three-dimensional models, among which large three-dimensional models are not occluded by other objects and are rendered by default; the three-dimensional models that are not occluders are marked through the manual marking operation, and the marked three-dimensional models are taken as the models to be detected;
step S20, determining models to be rendered from the models to be detected through ray detection, and rendering the models to be rendered; wherein the large three-dimensional models that are not marked in step S10 and are not occluded by other objects also need to be rendered, such large models being left unmarked and rendered by default.
2. The method of claim 1, wherein the determining the models to be rendered from the models to be detected through ray detection comprises:
controlling a camera to emit one ray toward each pixel on the screen where the virtual reality scene is displayed, and taking the models in the models to be detected that are hit by rays as the models to be rendered.
3. The method of claim 2, wherein the taking the models in the models to be detected that are hit by rays as the models to be rendered comprises:
detecting whether each model in the models to be detected has an intersection point with at least one ray;
and taking a model with an intersection point with at least one ray in the models to be detected as a model to be rendered.
4. The method of claim 3, wherein said detecting whether each of the models to be detected has an intersection with at least one ray of light comprises:
detecting whether each face of each model in the models to be detected has an intersection point with at least one ray;
and taking a model in which at least one face has an intersection point with at least one ray as a model having an intersection point with at least one ray.
5. The method of claim 4, wherein detecting whether each face of each of the models to be detected intersects at least one ray comprises:
combining the parametric equation of any ray with the plane equation of any face, wherein the parametric equation of any ray is P = P0 + ut and the plane equation of any face is n·(a − P) = 0, which together give t = n·(a − P0)/(n·u); when t is greater than or equal to zero, the face and the ray have an intersection point; wherein P0 is the origin coordinate of the ray, u is the direction vector of the ray, different values of t give different points along the ray, n is the normal vector of the face, P is the coordinate of the intersection point of the ray and the face, and a is the coordinate of another point in the face.
6. A model rendering apparatus, the apparatus comprising:
the marking module is used for receiving a manual marking operation, and determining models to be detected from the models included in the virtual reality scene based on the manual marking operation; wherein a virtual reality scene includes a plurality of three-dimensional models, among which large three-dimensional models are not occluded by other objects and are rendered by default; the three-dimensional models that are not occluders are marked through the manual marking operation, and the marked three-dimensional models are taken as the models to be detected;
the rendering module is used for determining models to be rendered from the models to be detected through ray detection, and rendering the models to be rendered; wherein the large three-dimensional models that are not marked by the marking module and are not occluded by other objects also need to be rendered, such large models being rendered by default.
7. The apparatus of claim 6, wherein the rendering module is to:
controlling a camera to emit one ray toward each pixel on the screen where the virtual reality scene is displayed, and taking the models in the models to be detected that are hit by rays as the models to be rendered.
8. The apparatus of claim 7, wherein the rendering module is to:
detecting whether each model in the models to be detected has an intersection point with at least one ray;
and taking a model with an intersection point with at least one ray in the models to be detected as a model to be rendered.
9. The apparatus of claim 8, wherein the rendering module is to:
detecting whether each face of each model in the models to be detected has an intersection point with at least one ray;
and taking a model in which at least one face has an intersection point with at least one ray as a model having an intersection point with at least one ray.
10. The apparatus of claim 9, wherein the rendering module is to:
combining the parametric equation of any ray with the plane equation of any face, wherein the parametric equation of any ray is P = P0 + ut and the plane equation of any face is n·(a − P) = 0, which together give t = n·(a − P0)/(n·u); when t is greater than or equal to zero, the face and the ray have an intersection point; wherein P0 is the origin coordinate of the ray, u is the direction vector of the ray, different values of t give different points along the ray, n is the normal vector of the face, P is the coordinate of the intersection point of the ray and the face, and a is the coordinate of another point in the face.
CN202010037499.9A 2020-01-14 2020-01-14 Model rendering method and device Active CN111210499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010037499.9A CN111210499B (en) 2020-01-14 2020-01-14 Model rendering method and device

Publications (2)

Publication Number Publication Date
CN111210499A CN111210499A (en) 2020-05-29
CN111210499B (en) 2023-08-25

Family

ID=70785380

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010037499.9A Active CN111210499B (en) 2020-01-14 2020-01-14 Model rendering method and device

Country Status (1)

Country Link
CN (1) CN111210499B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102629389A (en) * 2012-02-24 2012-08-08 福建天趣网络科技有限公司 3D scene cutting method based on scanning ray
CN107168534A (en) * 2017-05-12 2017-09-15 杭州隅千象科技有限公司 It is a kind of that optimization method and projecting method are rendered based on CAVE systems
CN107665501A (en) * 2016-07-29 2018-02-06 北京大学 A kind of Real time changing focus ray tracing rendering engine
CN108038816A (en) * 2017-12-20 2018-05-15 浙江煮艺文化科技有限公司 A kind of virtual reality image processing unit and method
CN108205819A (en) * 2016-12-20 2018-06-26 汤姆逊许可公司 For passing through the complicated device and method for illuminating lower path tracing and carrying out scene rendering
CN108404412A (en) * 2018-02-02 2018-08-17 珠海金山网络游戏科技有限公司 The light source management system of a kind of rendering engine of playing from generation to generation, devices and methods therefor
CN109377542A (en) * 2018-09-28 2019-02-22 国网辽宁省电力有限公司锦州供电公司 Threedimensional model rendering method, device and electronic equipment

Also Published As

Publication number Publication date
CN111210499A (en) 2020-05-29

Similar Documents

Publication Publication Date Title
US11050994B2 (en) Virtual reality parallax correction
CN109829981B (en) Three-dimensional scene presentation method, device, equipment and storage medium
WO2017092303A1 (en) Virtual reality scenario model establishing method and device
US9147270B1 (en) Bounding plane-based techniques for improved sample test efficiency in image rendering
CN100468462C (en) Shadows plotting method and rendering device thereof
EP3474236A1 (en) Image processing device
US6144387A (en) Guard region and hither plane vertex modification for graphics rendering
CN112184873B (en) Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium
CN112200902A (en) Image rendering method and device, electronic equipment and storage medium
CN111275801A (en) Three-dimensional picture rendering method and device
CN112884874A (en) Method, apparatus, device and medium for applying decals on virtual model
CN116485984B (en) Global illumination simulation method, device, equipment and medium for panoramic image vehicle model
US20230027519A1 (en) Image based sampling metric for quality assessment
US20030146922A1 (en) System and method for diminished reality
WO2012047622A2 (en) Backface culling for motion blur and depth of field
CN111210499B (en) Model rendering method and device
US8098264B2 (en) Method and apparatus for rendering computer graphics primitive
US10964096B2 (en) Methods for detecting if an object is visible
US8441523B2 (en) Apparatus and method for drawing a stereoscopic image
US8174526B2 (en) Methods and apparatus for rendering or preparing digital objects or portions thereof for subsequent processing
CN114792354B (en) Model processing method and device, storage medium and electronic equipment
CN109949396A (en) A kind of rendering method, device, equipment and medium
JP3703073B2 (en) Graphic display device and method thereof
US20230215108A1 (en) System and method for adaptive volume-based scene reconstruction for xr platform applications
CN112634418B (en) Method and device for detecting mold penetrating visibility of human body model and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 building 501, 502, 601, 602, building D, wisdom Plaza, Qiaoxiang Road, Gaofa community, Shahe street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: China Southern Power Grid Digital Platform Technology (Guangdong) Co.,Ltd.

Address before: 518000 building 501, 502, 601, 602, building D, wisdom Plaza, Qiaoxiang Road, Gaofa community, Shahe street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: China Southern Power Grid Shenzhen Digital Power Grid Research Institute Co.,Ltd.

Address after: 518000 building 501, 502, 601, 602, building D, wisdom Plaza, Qiaoxiang Road, Gaofa community, Shahe street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: China Southern Power Grid Shenzhen Digital Power Grid Research Institute Co.,Ltd.

Address before: 518000 building 501, 502, 601, 602, building D, wisdom Plaza, Qiaoxiang Road, Gaofa community, Shahe street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN COMTOP INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant