CN111210499A - Model rendering method and device - Google Patents

Model rendering method and device

Info

Publication number
CN111210499A
Authority
CN
China
Prior art keywords
model
ray
detected
rendered
light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010037499.9A
Other languages
Chinese (zh)
Other versions
CN111210499B (en)
Inventor
周海
罗育林
张峰祥
田松林
周子强
陈立
李锐
刘兆平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Comtop Information Technology Co Ltd
Original Assignee
Shenzhen Comtop Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Comtop Information Technology Co Ltd filed Critical Shenzhen Comtop Information Technology Co Ltd
Priority to CN202010037499.9A priority Critical patent/CN111210499B/en
Publication of CN111210499A publication Critical patent/CN111210499A/en
Application granted granted Critical
Publication of CN111210499B publication Critical patent/CN111210499B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/506Illumination models

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a model rendering method and apparatus. The method includes: receiving a manual marking operation and, based on it, determining models to be detected from among the models included in a virtual reality scene; and determining, through ray detection, models to be rendered from the models to be detected, and rendering the models to be rendered. The method and apparatus reduce unnecessary rendering without affecting the display effect, improving rendering efficiency and saving system resources.

Description

Model rendering method and device
Technical Field
The invention relates to the technical field of virtual reality, in particular to a model rendering method and device.
Background
Virtual reality refers to a computer-centered technology that generates a realistic integrated virtual environment of sight, hearing, touch and other senses. Through a display terminal, the user can also interact with objects in the virtual reality. To realize virtual reality, the virtual reality scene must be described digitally: a three-dimensional model of the scene is established and then rendered. However, when there are many three-dimensional models to be rendered, the time and system resources required are considerable.
Disclosure of Invention
The invention mainly aims to provide a model rendering method, in order to solve the technical problem in the prior art that rendering a large number of three-dimensional models consumes a great deal of time and system resources.
In order to achieve the above object, the present invention provides a model rendering method, including:
receiving an artificial marking operation, and determining a model to be detected from models included in a virtual reality scene based on the artificial marking operation;
and determining a model to be rendered from the model to be detected through light detection, and rendering the model to be rendered.
Optionally, the determining, through light detection, a model to be rendered from the model to be detected includes:
and controlling the camera to emit a light ray to each pixel point on the screen where the virtual reality scene is located, and taking the model to be detected, which is irradiated by the light ray, as the model to be rendered.
Optionally, the taking, as the model to be rendered, the model in the models to be detected that is irradiated by the light includes:
detecting whether each model in the models to be detected has an intersection point with at least one ray;
and taking the model with the intersection point with at least one light ray in the model to be detected as the model to be rendered.
Optionally, the detecting whether each model in the model to be detected has an intersection point with at least one ray includes:
detecting whether each surface of each model in the model to be detected has an intersection point with at least one ray;
and taking the model with the intersection point of the at least one surface and the at least one ray as the model with the intersection point of the at least one ray.
Optionally, the detecting whether each surface of each model in the model to be detected has an intersection point with at least one ray includes:
Simultaneously solving the parametric equation of any ray with the parametric equation of any surface, where the parametric equation of a ray is P = P0 + ut and the parametric equation of a surface is n·(a − P) = 0, yields

t = n·(a − P0) / (n·u)

When t is greater than or equal to zero, the surface and the ray have an intersection point; wherein P0 is the starting coordinate of the ray, u is the direction vector of the ray, different values of t give different points along the ray, n is the normal vector of the surface, P is the coordinate of the intersection point of the ray and the surface, and a is the coordinate of another point in the surface.
In addition, to achieve the above object, the present invention also provides a model rendering apparatus, including:
the marking module is used for receiving manual marking operation and determining a model to be detected from models included in the virtual reality scene based on the manual marking operation;
and the rendering module is used for determining a model to be rendered from the models to be detected through light detection and rendering the model to be rendered.
Optionally, the rendering module is configured to:
and controlling the camera to emit a light ray to each pixel point on the screen where the virtual reality scene is located, and taking the model to be detected, which is irradiated by the light ray, as the model to be rendered.
Optionally, the rendering module is configured to:
detecting whether each model in the models to be detected has an intersection point with at least one ray;
and taking the model with the intersection point with at least one light ray in the model to be detected as the model to be rendered.
Optionally, the rendering module is configured to:
detecting whether each surface of each model in the model to be detected has an intersection point with at least one ray;
and taking the model with the intersection point of the at least one surface and the at least one ray as the model with the intersection point of the at least one ray.
Optionally, the rendering module is configured to:
Simultaneously solving the parametric equation of any ray with the parametric equation of any surface, where the parametric equation of a ray is P = P0 + ut and the parametric equation of a surface is n·(a − P) = 0, yields

t = n·(a − P0) / (n·u)

When t is greater than or equal to zero, the surface and the ray have an intersection point; wherein P0 is the starting coordinate of the ray, u is the direction vector of the ray, different values of t give different points along the ray, n is the normal vector of the surface, P is the coordinate of the intersection point of the ray and the surface, and a is the coordinate of another point in the surface.
Receiving an artificial marking operation, and determining a model to be detected from models included in a virtual reality scene based on the artificial marking operation; and determining a model to be rendered from the model to be detected through light detection, and rendering the model to be rendered. By the method and the device, unnecessary rendering is reduced, the display effect is not influenced, the rendering efficiency is improved, and system resources are saved.
Drawings
FIG. 1 is a flowchart illustrating a model rendering method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating the effect of rendering a virtual reality scene according to the prior art;
FIG. 3 is a schematic diagram illustrating the rendering effect of a virtual reality scene based on the model rendering method of the present invention;
FIG. 4 is a scene schematic diagram of rendering a virtual reality scene based on the prior art;
FIG. 5 is a scene schematic diagram of rendering a virtual reality scene based on the model rendering method of the present invention;
FIG. 6 is a functional block diagram of a model rendering apparatus according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating a model rendering method according to an embodiment of the present invention. In one embodiment, a model rendering method includes:
step S10, receiving manual marking operation, and determining a model to be detected from models included in a virtual reality scene based on the manual marking operation;
in this embodiment, a virtual reality scene generally includes a plurality of three-dimensional models, some of which are larger and are not occluded by other objects, so that by default, this type of three-dimensional model needs to be rendered. However, since the size of other three-dimensional models is relatively small, the three-dimensional models may be hidden by other objects, and in this case, even if the hidden three-dimensional models are rendered, the three-dimensional models cannot be seen by human eyes, and therefore, rendering the hidden three-dimensional models is not necessary. Therefore, it is necessary to mark a three-dimensional model that is not a mask by a manual marking method to make a further determination. Namely, the three-dimensional model which is not the shielding object is marked through manual marking operation, and the marked three-dimensional model is used as the model to be detected.
And step S20, determining a model to be rendered from the models to be detected through light detection, and rendering the model to be rendered.
In this embodiment, light detection is used to detect whether each model included in the models to be detected is a visible model; the model or models found visible are taken as the models to be rendered and are rendered. Of course, the large three-dimensional models that were not marked in step S10 and are not occluded by other objects also need to be rendered; such unmarked large models are rendered by default.
Further, in one embodiment, step S20 includes:
and controlling the camera to emit a light ray to each pixel point on the screen where the virtual reality scene is located, and taking the model to be detected, which is irradiated by the light ray, as the model to be rendered.
In this embodiment, the virtual reality scene is displayed on the screen. If there are M pixel points on the screen, the camera is controlled to emit one ray toward each pixel point, that is, M rays are emitted in total. Suppose the models to be detected comprise 15 three-dimensional models; if it is detected that 10 of them are each irradiated by at least one ray, those 10 three-dimensional models are taken as the models to be rendered.
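The per-pixel ray emission can be sketched as follows. `pixel_to_world`, which un-projects a pixel to a world-space point, is a hypothetical callback standing in for whatever un-projection the rendering engine provides:

```python
def pixel_rays(width, height, camera_pos, pixel_to_world):
    """Emit one ray per screen pixel: origin at the camera, direction
    from the camera toward the pixel's world-space position."""
    rays = []
    for y in range(height):
        for x in range(width):
            target = pixel_to_world(x, y)
            direction = tuple(t - c for t, c in zip(target, camera_pos))
            rays.append((camera_pos, direction))
    return rays

# A 2x2 screen yields M = 4 rays; the un-projection here is a toy stand-in
# that places pixel (x, y) at world point (x, y, 1).
rays = pixel_rays(2, 2, camera_pos=(0.0, 0.0, 0.0),
                  pixel_to_world=lambda x, y: (float(x), float(y), 1.0))
# len(rays) == width * height == M
```

Each ray is an (origin, direction) pair, matching the P0 and u used in the intersection equation later in the description.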
Further, in an embodiment, taking the model to be detected, which is irradiated by the light, as the model to be rendered includes:
detecting whether each model in the models to be detected has an intersection point with at least one ray; and taking the model with the intersection point with at least one light ray in the model to be detected as the model to be rendered.
In this embodiment, whether each model is irradiated by at least one ray is determined by detecting whether each model has an intersection with at least one ray. And if the model and the at least one ray have intersection points, determining that the model is irradiated by the at least one ray, and considering the model as the model to be rendered.
Further, in an embodiment, the detecting whether each model of the models to be detected has an intersection with at least one ray includes:
detecting whether each surface of each model in the model to be detected has an intersection point with at least one ray; and taking the model with the intersection point of the at least one surface and the at least one ray as the model with the intersection point of the at least one ray.
In this embodiment, since each model is three-dimensional, each model has multiple faces, so for each model it is necessary to detect whether each of its faces has an intersection point with at least one ray; if at least one of the faces included in the model has an intersection point with at least one ray, the model is determined to have an intersection point with at least one ray. For example, suppose the models to be detected comprise 15 three-dimensional models, denoted three-dimensional model 1 to three-dimensional model 15, and three-dimensional model 1 comprises 5 faces, denoted face 1 to face 5. It is then detected whether each of face 1 through face 5 has an intersection point with at least one ray. If at least one of face 1 to face 5 has an intersection point with at least one ray, three-dimensional model 1 is considered to have an intersection point with at least one ray, that is, it is irradiated by at least one ray and is a model to be rendered. If three-dimensional model 2 comprises 6 faces, face 1 to face 6, and detection shows that none of them has an intersection point with any ray, three-dimensional model 2 is considered not irradiated by light, is regarded as an invisible model, and need not be determined as a model to be rendered. By analogy, whether each model included in the models to be detected is a model to be rendered can be determined.
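The face-by-face test described above is a pair of nested loops with early exit. A minimal sketch, where `face_hit` is a stand-in predicate for any concrete ray–face intersection test:

```python
def model_is_visible(faces, rays, face_hit):
    """A model intersects at least one ray as soon as any one of its
    faces intersects any ray (any() short-circuits on the first hit)."""
    return any(face_hit(face, ray) for face in faces for ray in rays)

def select_models_to_render(candidates, rays, face_hit):
    # Keep only the candidate (marked) models hit by at least one ray.
    return [m for m in candidates if model_is_visible(m["faces"], rays, face_hit)]

# Toy check with an illustrative predicate: faces labelled "hit" intersect.
toy_hit = lambda face, ray: face == "hit"
visible = select_models_to_render(
    [{"id": 1, "faces": ["miss", "hit"]},
     {"id": 2, "faces": ["miss", "miss"]}],
    rays=[None], face_hit=toy_hit)
# Only model 1 survives; model 2 has no face hit by any ray.
```

In a real engine `face_hit` would apply the parametric solve for t given in the following paragraphs, but the selection logic is independent of how the per-face test is implemented.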
Further, in an embodiment, detecting whether each surface of each model in the model to be detected has an intersection with at least one ray includes:
Simultaneously solving the parametric equation of any ray with the parametric equation of any surface, where the parametric equation of a ray is P = P0 + ut and the parametric equation of a surface is n·(a − P) = 0, yields

t = n·(a − P0) / (n·u)

When t is greater than or equal to zero, the surface and the ray have an intersection point; wherein P0 is the starting coordinate of the ray, u is the direction vector of the ray, different values of t give different points along the ray, n is the normal vector of the surface, P is the coordinate of the intersection point of the ray and the surface, and a is the coordinate of another point in the surface.
In this embodiment, take one three-dimensional model among the models to be detected as an example. If the three-dimensional model is a six-face body, it includes 6 faces, denoted face 1 to face 6. If the screen has M pixel points, there are M rays in total. Take detecting whether face 1 intersects at least one ray as an example, and suppose face 1 intersects some ray. The parametric equation of any ray is P = P0 + ut, and the parametric equation of face 1 is n·(a − P) = 0, where P0 is the starting coordinate of the ray, u is the direction vector of the ray, different values of t give different points along the ray, n is the normal vector of face 1, P is the coordinate of the intersection point of face 1 and the ray, and a is the coordinate of another point in face 1. Combining the parametric equation of the ray with that of face 1 gives

t = n·(a − P0) / (n·u)

Substituting the specific values of n, a, P0 and u yields the value of t. Each ray has its own corresponding P0 and u, so the P0 and u of ray 1 through ray M are substituted in turn. If, when the P0 and u corresponding to ray x are substituted, t is greater than or equal to zero, face 1 has an intersection point with ray x; that is, at least one face of the three-dimensional model has an intersection point with at least one ray, so the three-dimensional model is determined to be irradiated by at least one ray and is considered a model to be rendered. If all M values of t obtained by substituting the P0 and u of ray 1 through ray M are less than zero, face 1 has no intersection point with any of the M rays, and whether face 2 has an intersection point with at least one ray is checked in the same way. If it is detected that none of face 1 through face 6 has an intersection point with any of the M rays, the three-dimensional model is not irradiated by any ray and is considered a model that does not need to be rendered.
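The solve for t can be written directly from the two parametric equations. A sketch using plain tuples and a hand-rolled dot product rather than any particular engine's math types:

```python
def dot(v, w):
    # Euclidean dot product of two same-length vectors.
    return sum(a * b for a, b in zip(v, w))

def ray_surface_t(p0, u, n, a):
    """Solve n . (a - (p0 + u*t)) = 0 for t.

    p0: ray starting coordinate, u: ray direction vector,
    n: surface normal vector, a: a point in the surface.
    Returns None when the ray is parallel to the surface (n . u == 0);
    per the criterion in the text, t >= 0 means the surface is hit."""
    denom = dot(n, u)
    if abs(denom) < 1e-12:          # ray parallel to the surface plane
        return None
    return dot(n, tuple(ai - pi for ai, pi in zip(a, p0))) / denom

# A ray from the origin along +z against the plane z = 5: t = 5 >= 0, a hit.
t = ray_surface_t((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0))
```

Note that this tests the infinite plane containing the face; a production implementation would additionally check that the intersection point lies within the face's boundary, a step the formula above does not cover.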
Referring to fig. 2, fig. 2 is a schematic diagram illustrating an effect of rendering a virtual reality scene based on the prior art. Referring to fig. 3, fig. 3 is a schematic diagram illustrating the effect of rendering a virtual reality scene based on the model rendering method of the present invention. It can be seen from comparison between fig. 2 and fig. 3 that the number of the three-dimensional models rendered in fig. 2 is much larger than that of the three-dimensional models rendered in fig. 3, and in fig. 3, some invisible models are selectively not rendered, so that unnecessary rendering is reduced, the display effect is not affected, the rendering efficiency is improved, and the system resources are saved.
Referring to fig. 4, fig. 4 is a scene diagram of rendering a virtual reality scene based on the prior art. As shown in fig. 4, in overdraw mode, when the occlusion of objects in the scene is checked, the brightness of the area marked by the rectangular frame is very high, indicating that object overlap in this area is severe, that is, many objects there cannot be irradiated by the rays. Further, quantitatively viewing the scene's state information through the state information panel shows that the number of batches is 296 (the higher this value, the more the rendering stutters), the number of triangles is about 420,000, and the number of vertices is about 620,000.
Referring to fig. 5, fig. 5 is a scene diagram of rendering a virtual reality scene based on the model rendering method of the present invention. As shown in fig. 5, in overdraw mode, the brightness of the area marked by the rectangular frame is significantly reduced, indicating that the object overlap in this area has been eliminated, that is, objects that cannot be irradiated by the rays are no longer rendered. Quantitatively viewing the scene's state information through the state information panel shows that the number of batches has dropped to 62, the number of triangles to about 150,000, and the number of vertices to about 210,000. With the model rendering method provided by the invention, the batch, triangle and vertex counts are all reduced; the lower these values, the more smoothly the application runs, whereas high values make it run very unsteadily.
In the embodiment, an artificial marking operation is received, and a model to be detected is determined from models included in a virtual reality scene based on the artificial marking operation; and determining a model to be rendered from the model to be detected through light detection, and rendering the model to be rendered. By the embodiment, unnecessary rendering is reduced, the display effect is not influenced, the rendering efficiency is improved, and system resources are saved.
Referring to fig. 6, fig. 6 is a functional module diagram of an embodiment of a model rendering apparatus according to the present invention. In one embodiment, a model rendering apparatus includes:
the marking module 10 is configured to receive an artificial marking operation, and determine a model to be detected from models included in a virtual reality scene based on the artificial marking operation;
and the rendering module 20 is configured to determine a model to be rendered from the models to be detected through light detection, and render the model to be rendered.
Further, in an embodiment, the rendering module 20 is configured to:
and controlling the camera to emit a light ray to each pixel point on the screen where the virtual reality scene is located, and taking the model to be detected, which is irradiated by the light ray, as the model to be rendered.
Further, in an embodiment, the rendering module 20 is configured to:
detecting whether each model in the models to be detected has an intersection point with at least one ray;
and taking the model with the intersection point with at least one light ray in the model to be detected as the model to be rendered.
Further, in an embodiment, the rendering module 20 is configured to:
detecting whether each surface of each model in the model to be detected has an intersection point with at least one ray;
and taking the model with the intersection point of the at least one surface and the at least one ray as the model with the intersection point of the at least one ray.
Further, in an embodiment, the rendering module 20 is configured to:
Simultaneously solving the parametric equation of any ray with the parametric equation of any surface, where the parametric equation of a ray is P = P0 + ut and the parametric equation of a surface is n·(a − P) = 0, yields

t = n·(a − P0) / (n·u)

When t is greater than or equal to zero, the surface and the ray have an intersection point; wherein P0 is the starting coordinate of the ray, u is the direction vector of the ray, different values of t give different points along the ray, n is the normal vector of the surface, P is the coordinate of the intersection point of the ray and the surface, and a is the coordinate of another point in the surface.
The specific embodiment of the model rendering device of the present invention is basically the same as the embodiments of the model rendering method described above, and details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method of model rendering, the method comprising:
receiving an artificial marking operation, and determining a model to be detected from models included in a virtual reality scene based on the artificial marking operation;
and determining a model to be rendered from the model to be detected through light detection, and rendering the model to be rendered.
2. The method of claim 1, wherein the determining the model to be rendered from the model to be detected through light detection comprises:
and controlling the camera to emit a light ray to each pixel point on the screen where the virtual reality scene is located, and taking the model to be detected, which is irradiated by the light ray, as the model to be rendered.
3. The method of claim 2, wherein the step of using the model to be detected, which is irradiated by the light, as the model to be rendered comprises the steps of:
detecting whether each model in the models to be detected has an intersection point with at least one ray;
and taking the model with the intersection point with at least one light ray in the model to be detected as the model to be rendered.
4. The method of claim 3, wherein the detecting whether each model in the models to be detected has an intersection point with at least one ray comprises:
detecting whether each surface of each model in the model to be detected has an intersection point with at least one ray;
and taking the model with the intersection point of the at least one surface and the at least one ray as the model with the intersection point of the at least one ray.
5. The method of claim 4, wherein the detecting whether each surface of each model in the models to be detected has an intersection point with at least one ray comprises:
Simultaneously solving the parametric equation of any ray with the parametric equation of any surface, where the parametric equation of a ray is P = P0 + ut and the parametric equation of a surface is n·(a − P) = 0, yields

t = n·(a − P0) / (n·u)

When t is greater than or equal to zero, the surface and the ray have an intersection point; wherein P0 is the starting coordinate of the ray, u is the direction vector of the ray, different values of t give different points along the ray, n is the normal vector of the surface, P is the coordinate of the intersection point of the ray and the surface, and a is the coordinate of another point in the surface.
6. An apparatus for model rendering, the apparatus comprising:
the marking module is used for receiving manual marking operation and determining a model to be detected from models included in the virtual reality scene based on the manual marking operation;
and the rendering module is used for determining a model to be rendered from the models to be detected through light detection and rendering the model to be rendered.
7. The apparatus of claim 6, wherein the rendering module is to:
and controlling the camera to emit a light ray to each pixel point on the screen where the virtual reality scene is located, and taking the model to be detected, which is irradiated by the light ray, as the model to be rendered.
8. The apparatus of claim 7, wherein the rendering module is to:
detecting whether each model in the models to be detected has an intersection point with at least one ray;
and taking the model with the intersection point with at least one light ray in the model to be detected as the model to be rendered.
9. The apparatus of claim 8, wherein the rendering module is to:
detecting whether each surface of each model in the model to be detected has an intersection point with at least one ray;
and taking the model with the intersection point of the at least one surface and the at least one ray as the model with the intersection point of the at least one ray.
10. The apparatus of claim 9, wherein the rendering module is to:
Simultaneously solving the parametric equation of any ray with the parametric equation of any surface, where the parametric equation of a ray is P = P0 + ut and the parametric equation of a surface is n·(a − P) = 0, yields

t = n·(a − P0) / (n·u)

When t is greater than or equal to zero, the surface and the ray have an intersection point; wherein P0 is the starting coordinate of the ray, u is the direction vector of the ray, different values of t give different points along the ray, n is the normal vector of the surface, P is the coordinate of the intersection point of the ray and the surface, and a is the coordinate of another point in the surface.
CN202010037499.9A 2020-01-14 2020-01-14 Model rendering method and device Active CN111210499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010037499.9A CN111210499B (en) 2020-01-14 2020-01-14 Model rendering method and device


Publications (2)

Publication Number Publication Date
CN111210499A true CN111210499A (en) 2020-05-29
CN111210499B CN111210499B (en) 2023-08-25

Family

ID=70785380

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010037499.9A Active CN111210499B (en) 2020-01-14 2020-01-14 Model rendering method and device

Country Status (1)

Country Link
CN (1) CN111210499B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102629389A (en) * 2012-02-24 2012-08-08 福建天趣网络科技有限公司 3D scene cutting method based on scanning ray
CN107168534A (en) * 2017-05-12 2017-09-15 杭州隅千象科技有限公司 Rendering optimization method and projection method based on a CAVE system
CN107665501A (en) * 2016-07-29 2018-02-06 北京大学 Real-time variable-focus ray tracing rendering engine
CN108038816A (en) * 2017-12-20 2018-05-15 浙江煮艺文化科技有限公司 Virtual reality image processing apparatus and method
CN108205819A (en) * 2016-12-20 2018-06-26 汤姆逊许可公司 Device and method for scene rendering by path tracing under complex illumination
CN108404412A (en) * 2018-02-02 2018-08-17 珠海金山网络游戏科技有限公司 Light source management system, device and method for a next-generation game rendering engine
CN109377542A (en) * 2018-09-28 2019-02-22 国网辽宁省电力有限公司锦州供电公司 Three-dimensional model rendering method, device and electronic equipment

Also Published As

Publication number Publication date
CN111210499B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
CN109829981B (en) Three-dimensional scene presentation method, device, equipment and storage medium
WO2017092303A1 (en) Virtual reality scenario model establishing method and device
EP1898327B1 (en) Part identification image processor, program for generating part identification image, and recording medium storing the same
CN109660783A (en) Virtual reality parallax correction
CN112907760B (en) Three-dimensional object labeling method and device, tool, electronic equipment and storage medium
CN111275801A (en) Three-dimensional picture rendering method and device
CN110619683B (en) Three-dimensional model adjustment method, device, terminal equipment and storage medium
US20230386041A1 (en) Control Method, Device, Equipment and Storage Medium for Interactive Reproduction of Target Object
CN108628442A (en) A kind of information cuing method, device and electronic equipment
US11501410B1 (en) Systems and methods for dynamically rendering three-dimensional images with varying detail to emulate human vision
CN106502396B (en) Virtual reality system, interaction method and device based on virtual reality
JP7262530B2 (en) Location information generation method, related device and computer program product
CN112686939A (en) Depth image rendering method, device and equipment and computer readable storage medium
CN111210499A (en) Model rendering method and device
WO2005076122A1 (en) Method of performing a panoramic demonstration of liquid crystal panel image simulation in view of observer's viewing angle
CN108256477B (en) Method and device for detecting human face
CN112528707A (en) Image processing method, device, equipment and storage medium
CN109949396A (en) A kind of rendering method, device, equipment and medium
US11910068B2 (en) Panoramic render of 3D video
CN115758502A (en) Carving processing method and device of spherical model and computer equipment
CN114693780A (en) Image processing method, device, equipment, storage medium and program product
CN111179332B (en) Image processing method and device, electronic equipment and storage medium
CN113709433A (en) Method, device and equipment for detecting brightness of projection picture and computer storage medium
CN113269782A (en) Data generation method and device and electronic equipment
CN112308766A (en) Image data display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 518000 building 501, 502, 601, 602, building D, wisdom Plaza, Qiaoxiang Road, Gaofa community, Shahe street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: China Southern Power Grid Digital Platform Technology (Guangdong) Co.,Ltd.

Address before: 518000 building 501, 502, 601, 602, building D, wisdom Plaza, Qiaoxiang Road, Gaofa community, Shahe street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: China Southern Power Grid Shenzhen Digital Power Grid Research Institute Co.,Ltd.

Address after: 518000 building 501, 502, 601, 602, building D, wisdom Plaza, Qiaoxiang Road, Gaofa community, Shahe street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: China Southern Power Grid Shenzhen Digital Power Grid Research Institute Co.,Ltd.

Address before: 518000 building 501, 502, 601, 602, building D, wisdom Plaza, Qiaoxiang Road, Gaofa community, Shahe street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN COMTOP INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant