CN111275803A - 3D model rendering method, device, equipment and storage medium - Google Patents
- Publication number
- CN111275803A CN111275803A CN202010118219.7A CN202010118219A CN111275803A CN 111275803 A CN111275803 A CN 111275803A CN 202010118219 A CN202010118219 A CN 202010118219A CN 111275803 A CN111275803 A CN 111275803A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
Abstract
The application discloses a 3D model rendering method and apparatus, an electronic device, and a storage medium, relating to the technical field of rendering. The concrete implementation scheme is as follows: determine the pose that currently needs to be rendered; determine a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the current pose to be rendered; acquire the to-be-rendered visible patches of the 3D model at each target fixed viewpoint; and render the 3D model on a central processing unit (CPU) according to the to-be-rendered visible patches of the 3D model at each target fixed viewpoint. In this way, the to-be-rendered visible patches at each target fixed viewpoint can be obtained and the 3D model can be rendered quickly on the CPU, avoiding the resource waste caused when some patches are occluded by other patches during rendering, reducing power consumption on the hardware, and increasing rendering speed.
Description
Technical Field
The present application relates to the field of rendering technologies, and in particular, to a 3D model rendering method, apparatus, electronic device, and non-transitory computer-readable storage medium storing computer instructions.
Background
3D rendering is the branch of computer graphics concerned with rendering 3D models, and is used in a variety of application domains including virtual reality, animated films, and movies. The 3D object tracking algorithm is an important algorithm in Augmented Reality (AR) products; in a typical 3D object tracking algorithm, a model of the tracked 3D object (for example, in obj format) needs to be rendered from time to time to correct the pose estimated by the tracking algorithm.
In the related art, there are two general ways to render a 3D model: one renders it on a GPU (Graphics Processing Unit), the other on a CPU (Central Processing Unit). When the GPU is used to render the 3D model while a 3D object is tracked in an AR product, materials such as models and animations for display (for example, a 3D instruction manual, which can be understood as rendering guidance animations on the phone screen according to the pose of the 3D object) must be rendered synchronously on the display device (for example, the phone screen), so the GPU is contended for. When the CPU is used to render the 3D model, because the CPU is a serial operation unit, rendering is slow for models with a large number of patches; moreover, depending on how the 3D model was built, even a patch whose normal vector points into the half space containing the viewpoint may still be occluded by other patches, and rendering such patches wastes resources.
Disclosure of Invention
The present application aims to solve at least one of the technical problems in the related art to some extent.
Therefore, a first objective of the present application is to provide a 3D model rendering method, which achieves fast rendering of a 3D model on a CPU by obtaining the to-be-rendered visible patches at each target fixed viewpoint, avoids the resource waste caused when some patches are occluded by other patches during rendering, reduces power consumption on the hardware, and increases rendering speed.
A second object of the present application is to provide a 3D model rendering apparatus.
A third object of the present application is to provide an electronic device.
A fourth object of the present application is to propose a non-transitory computer readable storage medium storing computer instructions.
A fifth object of the present application is to propose another 3D model rendering method.
In order to achieve the above object, a 3D model rendering method provided in an embodiment of a first aspect of the present application includes:
determining the pose that currently needs to be rendered;
determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the current pose to be rendered;
acquiring a to-be-rendered visible patch of the 3D model at each target fixed viewpoint;
and rendering the 3D model on a central processing unit (CPU) according to the to-be-rendered visible patch of the 3D model at each target fixed viewpoint.
According to an embodiment of the application, the plurality of calibrated fixed viewpoints are obtained in advance by: determining a spherical surface for the 3D model, wherein the spherical surface is the surface of a sphere centered on the 3D model with a preset value as the radius; and uniformly selecting a plurality of points on the spherical surface and determining the uniformly selected points as the calibrated fixed viewpoints.
According to an embodiment of the application, after obtaining the plurality of calibrated fixed viewpoints, the method further comprises: obtaining in advance the visible patches of the 3D model at each calibrated fixed viewpoint.
According to an embodiment of the present application, obtaining in advance the visible patches of the 3D model at each calibrated fixed viewpoint includes: rendering the 3D model according to each calibrated fixed viewpoint in turn; each time the rendering of the 3D model is completed, acquiring the Z coordinate of each visible pixel in image space stored in the Z buffer; and obtaining the visible patches of the 3D model at each calibrated fixed viewpoint according to the Z coordinates of the visible pixels stored in the Z buffer after each rendering.
According to an embodiment of the application, acquiring a to-be-rendered visible patch of the 3D model at each target fixed viewpoint includes: according to each target fixed viewpoint, acquiring the visible patches of the 3D model at that viewpoint from the visible patches of the 3D model at each calibrated fixed viewpoint; merging the visible patches of the 3D model at the target fixed viewpoints; and determining the merged visible patches as the to-be-rendered visible patches of the 3D model at the target fixed viewpoints.
According to an embodiment of the present application, determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the current pose to be rendered includes: determining the viewpoint corresponding to the current pose to be rendered; determining, from the plurality of calibrated fixed viewpoints, a plurality of fixed viewpoints closest to the viewpoint corresponding to the current pose to be rendered; and determining the closest plurality of fixed viewpoints as the plurality of target fixed viewpoints.
According to an embodiment of the present application, determining a plurality of fixed viewpoints closest to the viewpoint corresponding to the current pose to be rendered from the plurality of calibrated fixed viewpoints includes: determining a first line between the viewpoint corresponding to the current pose to be rendered and the center point of the 3D model; determining a second line between each calibrated fixed viewpoint and the center point; calculating the included angle between the first line and each second line; and determining, according to those included angles, the plurality of fixed viewpoints closest to the viewpoint corresponding to the current pose to be rendered.
In order to achieve the above object, a 3D model rendering apparatus according to an embodiment of a second aspect of the present application includes:
the to-be-rendered pose determining module, used for determining the pose that currently needs to be rendered;
the target viewpoint determining module, used for determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the current pose to be rendered;
the to-be-rendered visible patch obtaining module, configured to obtain a to-be-rendered visible patch of the 3D model at each target fixed viewpoint;
and the rendering module, used for rendering the 3D model on a central processing unit (CPU) according to the to-be-rendered visible patch of the 3D model at each target fixed viewpoint.
In order to achieve the above object, an electronic device according to a third aspect of the present application includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of rendering a 3D model according to the first aspect of the present application.
To achieve the above object, a fourth aspect embodiment of the present application provides a non-transitory computer-readable storage medium storing computer instructions, the computer instructions being for causing a computer to perform the 3D model rendering method of the first aspect of the present application.
To achieve the above object, a 3D model rendering method provided in an embodiment of a fifth aspect of the present application includes: determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the viewpoint corresponding to the current pose to be rendered; acquiring the to-be-rendered visible patches of the 3D model at each target fixed viewpoint; and rendering the 3D model according to the to-be-rendered visible patches of the 3D model at each target fixed viewpoint.
One embodiment of the above application has the following advantages or benefits: a plurality of target fixed viewpoints are determined from a plurality of calibrated fixed viewpoints, and the 3D model is rendered quickly on the CPU according to the to-be-rendered visible patches at each target fixed viewpoint. This avoids the resource waste caused when some patches are occluded by other patches during rendering, reduces power consumption on the hardware, and increases rendering speed, thereby overcoming the problems in the related art of wasted resources, high power consumption during hardware operation, and low rendering speed.
Meanwhile, because the CPU rather than the GPU is used to render the 3D model, the technical problems of phone heating and frequency throttling caused by the AR application and the rendering engine (which renders the model for display) working together on the GPU are avoided, improving the user experience.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a flow diagram of a method of 3D model rendering according to one embodiment of the present application.
FIG. 2 is a flow diagram of a method for 3D model rendering according to a specific embodiment of the present application.
FIG. 3 is a flow chart of a manner of obtaining a plurality of calibrated fixed viewpoints according to one embodiment of the present application.
FIG. 4 is a schematic diagram of determining a plurality of fixed viewpoints closest to a viewpoint corresponding to a current gesture to be rendered according to one embodiment of the present application.
FIG. 5 is a flow diagram of a method of 3D model rendering according to another embodiment of the present application.
Fig. 6 is a schematic structural diagram of a 3D model rendering apparatus according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a 3D model rendering apparatus according to another embodiment of the present application.
Fig. 8 is a schematic structural diagram of a 3D model rendering apparatus according to still another embodiment of the present application.
FIG. 9 is a schematic structural diagram of an electronic device according to one embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The application provides a 3D model rendering method, apparatus, electronic device, and non-transitory computer-readable storage medium storing computer instructions, which solve the problems in the related art of resource waste caused when some patches are occluded by other patches during rendering, high power consumption on the hardware, and low rendering speed. Meanwhile, because the CPU is used to render the 3D model, the technical problems of phone heating and frequency throttling caused by the AR application and the rendering engine (which renders the model for display) working together on the GPU are avoided. The 3D model rendering method, apparatus, electronic device, and non-transitory computer-readable storage medium storing computer instructions of embodiments of the present application are described below with reference to the accompanying drawings.
FIG. 1 is a flow diagram of a method of 3D model rendering according to one embodiment of the present application. It should be noted that the 3D model rendering method of the embodiment of the present application can be applied to the 3D model rendering apparatus of the embodiment of the present application, and the apparatus can be configured on an electronic device. The electronic device may be any of various devices with a display screen: a mobile terminal such as a smart phone or tablet computer, or an AR device. The electronic device is equipped with a 3D model recognition device.
As shown in fig. 1, the 3D model rendering method may include:
And S110, determining the pose that currently needs to be rendered.
For example, when a 3D object is recognized by the recognition device in the electronic device, a model of the 3D object may be rendered, and when it is determined to render the 3D model, the pose that currently needs to be rendered may be determined.
And S120, determining a plurality of target fixed viewpoints from the calibrated fixed viewpoints according to the current pose to be rendered.
In the embodiment of the application, the viewpoint corresponding to the current pose to be rendered can be determined according to that pose; a plurality of fixed viewpoints closest to that viewpoint are then determined from the plurality of calibrated fixed viewpoints, and those closest fixed viewpoints are determined as the plurality of target fixed viewpoints.
Wherein the plurality of calibrated fixed viewpoints can be obtained in advance as follows: determine a spherical surface for the 3D model, wherein the spherical surface is the surface of a sphere centered on the 3D model with a preset value as the radius; then uniformly select a plurality of points on the spherical surface, and determine the uniformly selected points as the plurality of calibrated fixed viewpoints.
In an embodiment of the present application, after the plurality of calibrated fixed viewpoints is obtained, the visible patches of the 3D model at each calibrated fixed viewpoint may be obtained in advance. For the specific implementation process, refer to the following embodiments.
And S130, acquiring the to-be-rendered visible patches of the 3D model at each target fixed viewpoint.
In the embodiment of the application, according to each target fixed viewpoint, the visible patches of the 3D model at that viewpoint may be obtained from the visible patches of the 3D model at each calibrated fixed viewpoint; the visible patches at the target fixed viewpoints are then merged, and the merged visible patches are determined as the to-be-rendered visible patches of the 3D model at the target fixed viewpoints.
And S140, rendering the 3D model on a central processing unit (CPU) according to the to-be-rendered visible patches of the 3D model at each target fixed viewpoint.
That is to say, once the to-be-rendered visible patches of the 3D model at each target fixed viewpoint are obtained, the 3D model can be rendered on the CPU according to those patches.
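Taken together, steps S110 to S140 can be sketched in a few lines of code. This is only an illustration under stated assumptions, not the patent's implementation: the renderer callback, the value of k, and all names are hypothetical, and the angle test anticipates the closeness criterion detailed later in the description.

```python
import math

def render_current_pose(pose_viewpoint, center, calibrated, visible_sets,
                        render_patches, k=3):
    """Sketch of S120-S140: pick the k calibrated fixed viewpoints whose
    direction from the model center makes the smallest included angle with
    the current viewpoint's direction, union their pre-computed
    visible-patch sets, and hand only those patches to a CPU renderer."""
    def unit(p):
        v = [p[i] - center[i] for i in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        return [c / n for c in v]
    d1 = unit(pose_viewpoint)
    def angle(g):
        dot = sum(a * b for a, b in zip(d1, unit(g)))
        return math.acos(max(-1.0, min(1.0, dot)))
    targets = sorted(range(len(calibrated)),
                     key=lambda i: angle(calibrated[i]))[:k]    # S120
    patches = set().union(*(visible_sets[i] for i in targets))  # S130
    return render_patches(patches)                              # S140
```

Because only the merged visible patches reach `render_patches`, occluded patches never enter the serial CPU rasterization path, which is the source of the claimed speedup.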
According to the 3D model rendering method of the embodiment of the application, the pose that currently needs to be rendered is determined; a plurality of target fixed viewpoints are then determined from a plurality of calibrated fixed viewpoints according to that pose; the to-be-rendered visible patches of the 3D model at each target fixed viewpoint are obtained; and finally the 3D model is rendered on a central processing unit (CPU) according to those patches. The method determines a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints and quickly renders the 3D model on the CPU according to the to-be-rendered visible patches at each target fixed viewpoint, thereby avoiding the resource waste caused when some patches are occluded by other patches during rendering, reducing power consumption on the hardware, and increasing rendering speed. Meanwhile, because the CPU is used to render the 3D model, the technical problems of phone heating and frequency throttling caused by the AR application and the rendering engine (which renders the model for display) working together on the GPU are avoided, improving the user experience.
FIG. 2 is a flow diagram of a method for 3D model rendering according to a specific embodiment of the present application. As shown in fig. 2, the 3D model rendering method may include:
And S210, determining the pose that currently needs to be rendered.
For example, when a 3D object is recognized by the recognition device in the electronic device, a model of the 3D object may be rendered, and when it is determined to render the 3D model, the pose that currently needs to be rendered may be determined.
And S220, determining the viewpoint corresponding to the current pose to be rendered according to the current pose to be rendered.
For example, the 3D model may be rendered from a directly frontal angle, and the viewpoint corresponding to that frontal angle may be determined.
And S230, determining, from the plurality of calibrated fixed viewpoints, a plurality of fixed viewpoints closest to the viewpoint corresponding to the current pose to be rendered.
In the embodiment of the present application, as shown in fig. 3, a plurality of calibrated fixed viewpoints may be obtained in advance by:
and S310, determining a spherical surface aiming at the 3D model, wherein the spherical surface is the surface of a sphere which is formed by taking the 3D model as the center and taking a preset value as the radius.
In order to be able to ensure that a point on the surface of the sphere can see the 3D object completely, in an embodiment of the present application, the surface of the sphere, which is formed with the 3D model as the center and the preset value as the radius, can be determined.
S320, uniformly selecting a plurality of points on the spherical surface, and determining the uniformly selected points as a plurality of calibrated fixed viewpoints.
Through a large number of experiments, in order to avoid a large or small view angle range formed by a plurality of target fixed viewpoints and improve the rendering effect when a plurality of target fixed viewpoints are determined subsequently, for example, 20 points can be uniformly selected on the surface of a sphere formed by taking a 3D model as a center and a preset value as a radius, and the selected 20 points are determined as a plurality of calibrated fixed viewpoints.
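The patent does not prescribe a particular construction for the uniform selection. One common sketch, under the assumption that a Fibonacci lattice is an acceptable approximation of "uniformly selected", might look like the following; the function name and signature are hypothetical:

```python
import math

def calibrated_viewpoints(center, radius, n=20):
    """Place n roughly uniform points on the sphere of the given radius
    around the model center, using Fibonacci-lattice sampling (one of
    several reasonable uniform-selection schemes)."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    points = []
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n            # latitude band in (-1, 1)
        r = math.sqrt(max(0.0, 1.0 - y * y))     # radius of that band
        theta = golden * i                       # longitude
        points.append((center[0] + radius * r * math.cos(theta),
                       center[1] + radius * y,
                       center[2] + radius * r * math.sin(theta)))
    return points
```

Every returned point lies exactly on the sphere, so each calibrated fixed viewpoint sees the whole model as required.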
In the embodiment of the present application, after the plurality of calibrated fixed viewpoints is determined, a plurality of fixed viewpoints closest to the viewpoint corresponding to the current pose to be rendered may be determined from them.
In the embodiment of the application, a first line between the viewpoint corresponding to the current pose to be rendered and the center point of the 3D model can be determined, and a second line between each calibrated fixed viewpoint and the center point can be determined; the included angle between the first line and each second line is then calculated, and according to those included angles, the plurality of fixed viewpoints closest to the viewpoint corresponding to the current pose to be rendered is determined from the plurality of calibrated fixed viewpoints.
It should be noted that the smaller the included angle between the first line and a second line, the closer that second line's calibrated fixed viewpoint is to the viewpoint corresponding to the current pose to be rendered.
For example, as shown in fig. 4, the viewpoint corresponding to the current pose to be rendered is S1, the center point of the 3D model is Z1, and the calibrated fixed viewpoints are G1, G2, G3, G4, and G5. The first line between S1 and Z1 may be determined, the second lines between each of G1 through G5 and Z1 determined, and the included angle between the first line and each second line calculated. Then, according to those included angles, the 3 fixed viewpoints closest to S1 are determined from G1 through G5; for example, the 3 closest fixed viewpoints are G3, G4, and G5.
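The angle test illustrated by fig. 4 can be sketched as follows; this is a minimal illustration, and the function name and the choice of k are assumptions, not part of the patent:

```python
import math

def nearest_fixed_viewpoints(view, center, fixed, k=3):
    """Return the k calibrated fixed viewpoints whose line to the model
    center makes the smallest included angle with the line from the
    current viewpoint to the center (smaller angle = closer)."""
    def direction(p):
        v = [p[i] - center[i] for i in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        return [c / n for c in v]
    d1 = direction(view)  # unit vector of the first line
    def included_angle(g):
        dot = sum(a * b for a, b in zip(d1, direction(g)))
        return math.acos(max(-1.0, min(1.0, dot)))  # clamp for safety
    return sorted(fixed, key=included_angle)[:k]
```

Comparing angles rather than Euclidean distances makes the test independent of the sphere radius, which matches the first-line/second-line formulation above.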
Optionally, in an embodiment of the present application, after the plurality of calibrated fixed viewpoints is obtained, the visible patches of the 3D model at each calibrated fixed viewpoint are also obtained in advance.
In the embodiment of the application, the 3D model can be rendered according to each calibrated fixed viewpoint in turn; each time the rendering of the 3D model is completed, the Z coordinate of each visible pixel in image space stored in the Z buffer is obtained, and the visible patches of the 3D model at each calibrated fixed viewpoint are then obtained from those Z coordinates.
For example, after obtaining the 20 calibrated fixed viewpoints, the 3D model may be rendered according to the 20 calibrated fixed viewpoints in turn, the visible pixels at each calibrated fixed viewpoint recorded, and the Z coordinate of each visible pixel placed in the Z buffer. Each time the rendering of the 3D model is completed, the Z coordinates of the visible pixels in image space stored in the Z buffer can be read, and the visible patches of the 3D model at that calibrated fixed viewpoint obtained from them. That is, the visible pixels of the 3D model at each calibrated fixed viewpoint may be recorded into a corresponding set (denoted Si, where i = 1..20), and the patches recorded in each set are the visible patches at the corresponding calibrated fixed viewpoint.
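The Z-buffer bookkeeping described above can be illustrated with a toy depth test. The per-pixel fragment lists below stand in for a real rasterizer's output and are purely hypothetical; the point is only how the nearest fragment per pixel determines the set Si:

```python
def visible_patches(fragments):
    """fragments: {pixel: [(patch_id, z), ...]} - candidate fragments per
    pixel at one calibrated fixed viewpoint. A standard depth test keeps,
    per pixel, the fragment with the smallest z (nearest the viewpoint);
    the patch ids that survive at any pixel form the visible-patch set."""
    z_buffer = {}   # pixel -> nearest z seen so far
    id_buffer = {}  # pixel -> patch that produced that z
    for pixel, frags in fragments.items():
        for patch_id, z in frags:
            if pixel not in z_buffer or z < z_buffer[pixel]:
                z_buffer[pixel] = z
                id_buffer[pixel] = patch_id
    return set(id_buffer.values())
```

A patch wholly hidden behind others never wins any pixel's depth test, so it simply never appears in the returned set, which is exactly why the per-viewpoint sets exclude occluded patches.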
And S240, determining the closest fixed viewpoints as a plurality of target fixed viewpoints.
That is, after determining a plurality of fixed viewpoints closest to a viewpoint corresponding to a current gesture to be rendered from among the plurality of calibrated fixed viewpoints, the plurality of closest fixed viewpoints may be determined as a plurality of target fixed viewpoints.
And S250, acquiring the to-be-rendered visible patches of the 3D model at each target fixed viewpoint.
In the embodiment of the application, according to each target fixed viewpoint, the visible patches of the 3D model at that viewpoint are obtained from the visible patches of the 3D model at each calibrated fixed viewpoint; the visible patches at the target fixed viewpoints are then merged and de-duplicated, and the merged visible patches are determined as the to-be-rendered visible patches of the 3D model at the target fixed viewpoints.
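If each pre-computed set Si holds patch ids, the merge-and-deduplicate step reduces to a set union; the following sketch uses illustrative names only:

```python
def patches_to_render(target_ids, visible_by_viewpoint):
    """Union the pre-computed visible-patch sets of the chosen target
    fixed viewpoints; representing them as sets makes the
    de-duplication of patches seen from several viewpoints implicit."""
    merged = set()
    for i in target_ids:
        merged |= visible_by_viewpoint[i]
    return merged
```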
And S260, rendering the 3D model on a central processing unit (CPU) according to the to-be-rendered visible patches of the 3D model at each target fixed viewpoint.
That is to say, once the to-be-rendered visible patches of the 3D model at each target fixed viewpoint are obtained, the 3D model can be rendered on the CPU according to those patches.
According to the 3D model rendering method of this embodiment, the pose that currently needs to be rendered is determined; the corresponding viewpoint is determined from that pose; the plurality of fixed viewpoints closest to that viewpoint is determined from the plurality of calibrated fixed viewpoints and taken as the target fixed viewpoints; the to-be-rendered visible patches of the 3D model at each target fixed viewpoint are then obtained; and the 3D model is rendered on a central processing unit (CPU) according to those patches. The method determines a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints and quickly renders the 3D model on the CPU according to the to-be-rendered visible patches at each target fixed viewpoint; the visible patches can be computed efficiently, the resource waste caused when some patches are occluded by other patches during rendering is avoided, power consumption on the hardware is reduced, and rendering speed is increased. Meanwhile, because the CPU is used to render the 3D model, the technical problems of phone heating and frequency throttling caused by the AR application and the rendering engine (which renders the model for display) working together on the GPU are avoided, improving the user experience.
FIG. 5 is a flow diagram of a 3D model rendering method according to another embodiment of the present application. As shown in FIG. 5, the method may include:
S510, determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the viewpoint corresponding to the pose that currently needs to be rendered.
S520, obtaining the visible patches to be rendered of the 3D model at each target fixed viewpoint.
S530, rendering the 3D model according to the visible patches to be rendered of the 3D model at each target fixed viewpoint.
According to this 3D model rendering method, a plurality of target fixed viewpoints are determined from a plurality of calibrated fixed viewpoints according to the viewpoint corresponding to the pose that currently needs to be rendered; the visible patches to be rendered of the 3D model at each target fixed viewpoint are then obtained; and the 3D model is rendered according to those patches. The method computes the visible patches efficiently, avoids the resources wasted on patches occluded by other patches during rendering, reduces hardware power consumption, and accelerates rendering. Meanwhile, because the CPU renders the 3D model, the mobile phone heating and frequency reduction (thermal throttling) caused when an AR application and the rendering engine (which renders and displays the model) both run on the GPU are avoided, improving user experience.
Corresponding to the 3D model rendering methods provided in the foregoing embodiments, an embodiment of the present application further provides a 3D model rendering apparatus. Since the apparatus corresponds to the methods of the foregoing embodiments, the implementations described for the method are also applicable to the apparatus and are not repeated here. FIG. 6 is a schematic structural diagram of a 3D model rendering apparatus according to an embodiment of the present application.
As shown in FIG. 6, the 3D model rendering apparatus 600 includes: a pose to be rendered determination module 610, a target viewpoint determination module 620, a visible patch to be rendered acquisition module 630, and a rendering module 640. Wherein:
the pose to be rendered determination module 610 is configured to determine the pose that currently needs to be rendered.
The target viewpoint determination module 620 is configured to determine a plurality of target fixed viewpoints from the plurality of calibrated fixed viewpoints according to that pose; as an example, the module is specifically configured to: determine the viewpoint corresponding to the pose; determine, from the plurality of calibrated fixed viewpoints, the several fixed viewpoints closest to that viewpoint; and take those closest fixed viewpoints as the plurality of target fixed viewpoints.
In an embodiment of the present application, the target viewpoint determination module 620 is specifically configured to: determine a first line between the viewpoint corresponding to the pose to be rendered and the center point of the 3D model; determine a second line between each calibrated fixed viewpoint and the center point; calculate the included angle between the first line and each second line; and, according to those included angles, determine from the plurality of calibrated fixed viewpoints the several fixed viewpoints closest to the viewpoint corresponding to the pose.
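The angle test described above can be sketched in a few lines. The snippet below is illustrative only (names such as `nearest_fixed_viewpoints` and the parameter `k` are not from the patent): it normalizes the direction from the model center to each viewpoint and ranks the calibrated viewpoints by the included angle between the first line and each second line.

```python
import math

def nearest_fixed_viewpoints(current_vp, fixed_vps, center, k=3):
    """Rank calibrated fixed viewpoints by the angle between the line
    from the current viewpoint to the model center (the "first line")
    and each fixed viewpoint's line to the center (a "second line").
    Returns the indices of the k closest fixed viewpoints."""
    def direction(p):
        # Unit vector from the model center toward point p.
        v = [p[i] - center[i] for i in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        return [c / n for c in v]

    d0 = direction(current_vp)
    angles = []
    for idx, vp in enumerate(fixed_vps):
        d = direction(vp)
        # Clamp the dot product to guard against floating-point drift.
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(d0, d))))
        angles.append((math.acos(dot), idx))
    angles.sort()  # smallest included angle first
    return [idx for _, idx in angles[:k]]
```

Since `acos` decreases monotonically as the dot product grows, sorting by the dot product in descending order would give the same ranking without the trigonometric call.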
The visible patch to be rendered acquisition module 630 is configured to obtain the visible patches to be rendered of the 3D model at each target fixed viewpoint; as an example, the module is specifically configured to: for each target fixed viewpoint, retrieve the visible patches of the 3D model at that viewpoint from the visible patches of the 3D model at each calibrated fixed viewpoint; merge the visible patches of the 3D model at the target fixed viewpoints; and take the merged patches as the visible patches to be rendered of the 3D model at each target fixed viewpoint.
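Assuming the calibration stage stores, for every calibrated fixed viewpoint, the set of visible face (patch) indices, the retrieval-and-merge step reduces to a set union. A minimal sketch (the dictionary layout and the name `patches_to_render` are assumptions, not part of the patent):

```python
def patches_to_render(target_viewpoints, calibrated_patches):
    """calibrated_patches maps a fixed-viewpoint id to the set of face
    indices visible from that viewpoint (precomputed in advance).  The
    patches to render are the union over the chosen target viewpoints."""
    merged = set()
    for vp in target_viewpoints:
        merged |= calibrated_patches[vp]  # merge visible patches per viewpoint
    return sorted(merged)                 # deterministic order for the renderer
```

Merging over a handful of nearby viewpoints slightly over-approximates the exact visible set at the current pose, which keeps the subsequent rendering conservative while still skipping patches occluded from all chosen viewpoints.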
The rendering module 640 is configured to render the 3D model on a central processing unit (CPU) according to the visible patches to be rendered of the 3D model at each target fixed viewpoint.
In an embodiment of the present application, as shown in FIG. 7, the 3D model rendering apparatus further includes a pre-calibration module 650. The pre-calibration module 650 is configured to determine a spherical surface for the 3D model, the spherical surface being the surface of a sphere centered on the 3D model with a preset value as its radius, and to uniformly select a plurality of points on the spherical surface and take the uniformly selected points as the plurality of calibrated fixed viewpoints.
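One common way to "uniformly select a plurality of points" on such a sphere is a Fibonacci lattice. The patent does not prescribe a particular sampling scheme, so the snippet below (with the assumed name `fibonacci_sphere`) is just one illustrative choice:

```python
import math

def fibonacci_sphere(n, center=(0.0, 0.0, 0.0), radius=1.0):
    """Place n approximately uniform points on the surface of a sphere
    centered on the model with the given preset radius (Fibonacci
    lattice; any uniform selection scheme would satisfy the text)."""
    pts = []
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden-angle increment
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n          # latitude band in [-1, 1]
        r = math.sqrt(max(0.0, 1.0 - y * y))   # ring radius at that latitude
        theta = golden * i
        x, z = r * math.cos(theta), r * math.sin(theta)
        pts.append((center[0] + radius * x,
                    center[1] + radius * y,
                    center[2] + radius * z))
    return pts
```

Each returned point lies exactly on the calibration sphere, so the fixed viewpoints all look at the model center from the same distance.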
In an embodiment of the present application, as shown in FIG. 8, the 3D model rendering apparatus further includes an obtaining module 660. The obtaining module 660 is configured to obtain, in advance, the visible patches of the 3D model at each calibrated fixed viewpoint. As an example, the obtaining module 660 is specifically configured to: render the 3D model from each calibrated fixed viewpoint in turn; each time rendering completes, read the Z coordinate of each visible pixel in image space from the Z-buffer; and derive the visible patches of the 3D model at each calibrated fixed viewpoint from those Z coordinates.
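The Z-buffer pass above can be emulated with a tiny software "item buffer": each face id is rasterized with a depth test, and the ids that survive in the buffer are exactly the faces visible from that viewpoint. The sketch below is a deliberately simplified stand-in (orthographic projection, viewer looking down the -z axis, point-sampled pixels, the name `visible_faces` invented for illustration); in practice the depths would be read back from the GPU Z-buffer as the text describes.

```python
def visible_faces(vertices, faces, res=64):
    """Return the set of face indices visible in an orthographic view
    looking down -z over the [-1, 1] x [-1, 1] window.  A per-pixel
    depth test keeps the closest face id, mimicking a Z-buffer pass."""
    depth = [[-1e30] * res for _ in range(res)]   # largest z = closest
    ids = [[None] * res for _ in range(res)]
    for fid, (a, b, c) in enumerate(faces):
        p0, p1, p2 = vertices[a], vertices[b], vertices[c]
        for j in range(res):
            for i in range(res):
                # Pixel center in normalized device coordinates.
                x = -1.0 + 2.0 * (i + 0.5) / res
                y = -1.0 + 2.0 * (j + 0.5) / res
                # Barycentric coordinates in the xy projection.
                d = ((p1[1] - p2[1]) * (p0[0] - p2[0])
                     + (p2[0] - p1[0]) * (p0[1] - p2[1]))
                if abs(d) < 1e-12:
                    continue  # degenerate (edge-on) triangle
                w0 = ((p1[1] - p2[1]) * (x - p2[0])
                      + (p2[0] - p1[0]) * (y - p2[1])) / d
                w1 = ((p2[1] - p0[1]) * (x - p2[0])
                      + (p0[0] - p2[0]) * (y - p2[1])) / d
                w2 = 1.0 - w0 - w1
                if w0 < 0 or w1 < 0 or w2 < 0:
                    continue  # pixel outside the triangle
                z = w0 * p0[2] + w1 * p1[2] + w2 * p2[2]
                if z > depth[j][i]:           # depth test
                    depth[j][i] = z
                    ids[j][i] = fid
    return {f for row in ids for f in row if f is not None}
```

Running this once per calibrated fixed viewpoint during calibration yields the per-viewpoint visible-patch sets consumed at render time; fully occluded faces never appear in any buffer and are therefore never submitted for rendering.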
According to the 3D model rendering apparatus, the pose that currently needs to be rendered is determined; a plurality of target fixed viewpoints are determined from a plurality of calibrated fixed viewpoints according to that pose; the visible patches to be rendered of the 3D model at each target fixed viewpoint are obtained; and finally the 3D model is rendered on a central processing unit (CPU) according to those patches. The apparatus therefore renders the 3D model quickly on the CPU using only the visible patches at the target fixed viewpoints, avoiding the resources wasted on patches occluded by other patches during rendering, reducing hardware power consumption, and accelerating rendering. Meanwhile, because the CPU renders the 3D model, the mobile phone heating and frequency reduction (thermal throttling) caused when an AR application and the rendering engine (which renders and displays the model) both run on the GPU are avoided, improving user experience.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
FIG. 9 is a block diagram of an electronic device for the 3D model rendering method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in FIG. 9, the electronic device includes: one or more processors 901, a memory 902, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information for a GUI on an external input/output apparatus (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, as desired. Likewise, multiple electronic devices may be connected, each providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). One processor 901 is taken as an example in FIG. 9.
The memory 902, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the 3D model rendering method in the embodiments of the present application (for example, the pose to be rendered determination module 610, the target viewpoint determination module 620, the visible patch to be rendered acquisition module 630, and the rendering module 640 shown in FIG. 6). By running the non-transitory software programs, instructions, and modules stored in the memory 902, the processor 901 executes the various functional applications and data processing of the server, that is, implements the 3D model rendering method in the above method embodiments.
The memory 902 may include a program storage area and a data storage area, where the program storage area may store an operating system and the application program required by at least one function, and the data storage area may store data created according to the use of the 3D model rendering electronic device, and the like. Further, the memory 902 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 902 may optionally include memory located remotely from the processor 901, and such remote memory may be connected to the 3D model rendering electronic device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the 3D model rendering method may further include an input device 903 and an output device 904. The processor 901, the memory 902, the input device 903, and the output device 904 may be connected by a bus or in other manners; in FIG. 9, connection by a bus is taken as an example.
The input device 903 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the 3D model rendering electronic device; examples include a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, and a joystick. The output device 904 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications, or code) include machine instructions for a programmable processor and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solutions of the embodiments of the present application, the pose that currently needs to be rendered is determined; a plurality of target fixed viewpoints are determined from a plurality of calibrated fixed viewpoints according to that pose; the visible patches to be rendered of the 3D model at each target fixed viewpoint are obtained; and finally the 3D model is rendered on a central processing unit (CPU) according to those patches. The method can determine several target fixed viewpoints from the calibrated fixed viewpoints and render the 3D model quickly on the CPU using the visible patches at those viewpoints, avoiding the resources wasted on patches occluded by other patches during rendering, reducing hardware power consumption, and accelerating rendering. Meanwhile, because the CPU renders the 3D model, the mobile phone heating and frequency reduction (thermal throttling) caused when an AR application and the rendering engine (which renders and displays the model) both run on the GPU are avoided, improving user experience.
It should be understood that steps may be reordered, added, or deleted in the various flows shown above. For example, the steps described in the present application may be executed in parallel, sequentially, or in a different order; no limitation is imposed herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (17)
1. A method for rendering a 3D model, comprising:
determining a pose that currently needs to be rendered;
determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the pose that currently needs to be rendered;
obtaining visible patches to be rendered of the 3D model at each target fixed viewpoint; and
rendering the 3D model on a central processing unit (CPU) according to the visible patches to be rendered of the 3D model at each target fixed viewpoint.
2. The method according to claim 1, wherein the plurality of calibrated fixed viewpoints are obtained in advance by:
determining a spherical surface for the 3D model, wherein the spherical surface is the surface of a sphere centered on the 3D model with a preset value as its radius; and
uniformly selecting a plurality of points on the spherical surface, and taking the uniformly selected points as the plurality of calibrated fixed viewpoints.
3. The method of claim 2, wherein after obtaining the plurality of calibrated fixed viewpoints, the method further comprises:
obtaining, in advance, visible patches of the 3D model at each calibrated fixed viewpoint.
4. The method of claim 3, wherein obtaining, in advance, the visible patches of the 3D model at each calibrated fixed viewpoint comprises:
rendering the 3D model from each calibrated fixed viewpoint in turn;
each time rendering of the 3D model completes, acquiring the Z coordinate of each visible pixel in image space stored in the Z-buffer; and
obtaining the visible patches of the 3D model at each calibrated fixed viewpoint according to the Z coordinates of the visible pixels in image space stored in the Z-buffer after each rendering.
5. The method according to any one of claims 1 to 4, wherein obtaining the visible patches to be rendered of the 3D model at each target fixed viewpoint comprises:
for each target fixed viewpoint, acquiring the visible patches of the 3D model at that viewpoint from the visible patches of the 3D model at each calibrated fixed viewpoint;
merging the visible patches of the 3D model at the target fixed viewpoints; and
taking the merged visible patches as the visible patches to be rendered of the 3D model at each target fixed viewpoint.
6. The method according to any one of claims 1 to 4, wherein determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the pose that currently needs to be rendered comprises:
determining the viewpoint corresponding to the pose that currently needs to be rendered;
determining, from the plurality of calibrated fixed viewpoints, a plurality of fixed viewpoints closest to the viewpoint corresponding to the pose; and
taking the closest plurality of fixed viewpoints as the plurality of target fixed viewpoints.
7. The method of claim 6, wherein determining, from the plurality of calibrated fixed viewpoints, a plurality of fixed viewpoints closest to the viewpoint corresponding to the pose comprises:
determining a first line between the viewpoint corresponding to the pose and the center point of the 3D model;
determining a second line between each calibrated fixed viewpoint and the center point;
calculating the included angle between the first line and each second line; and
determining, from the plurality of calibrated fixed viewpoints, the plurality of fixed viewpoints closest to the viewpoint corresponding to the pose according to the included angle between the first line and each second line.
8. A 3D model rendering apparatus, comprising:
a pose to be rendered determination module, configured to determine a pose that currently needs to be rendered;
a target viewpoint determination module, configured to determine a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the pose that currently needs to be rendered;
a visible patch to be rendered acquisition module, configured to obtain visible patches to be rendered of the 3D model at each target fixed viewpoint; and
a rendering module, configured to render the 3D model on a central processing unit (CPU) according to the visible patches to be rendered of the 3D model at each target fixed viewpoint.
9. The apparatus of claim 8, further comprising:
a pre-calibration module, configured to determine a spherical surface for the 3D model, wherein the spherical surface is the surface of a sphere centered on the 3D model with a preset value as its radius, and to uniformly select a plurality of points on the spherical surface and take the uniformly selected points as the plurality of calibrated fixed viewpoints.
10. The apparatus of claim 9, further comprising:
an obtaining module, configured to obtain, in advance, visible patches of the 3D model at each calibrated fixed viewpoint.
11. The apparatus of claim 10, wherein the obtaining module is specifically configured to:
render the 3D model from each calibrated fixed viewpoint in turn;
each time rendering of the 3D model completes, acquire the Z coordinate of each visible pixel in image space stored in the Z-buffer; and
obtain the visible patches of the 3D model at each calibrated fixed viewpoint according to the Z coordinates of the visible pixels in image space stored in the Z-buffer after each rendering.
12. The apparatus according to any one of claims 8 to 11, wherein the visible patch to be rendered acquisition module is specifically configured to:
for each target fixed viewpoint, acquire the visible patches of the 3D model at that viewpoint from the visible patches of the 3D model at each calibrated fixed viewpoint;
merge the visible patches of the 3D model at the target fixed viewpoints; and
take the merged visible patches as the visible patches to be rendered of the 3D model at each target fixed viewpoint.
13. The apparatus according to any of claims 8 to 11, wherein the target viewpoint determining module is specifically configured to:
determine the viewpoint corresponding to the pose that currently needs to be rendered;
determine, from the plurality of calibrated fixed viewpoints, a plurality of fixed viewpoints closest to the viewpoint corresponding to the pose; and
take the closest plurality of fixed viewpoints as the plurality of target fixed viewpoints.
14. The apparatus of claim 13, wherein the target viewpoint determining module is specifically configured to:
determine a first line between the viewpoint corresponding to the pose and the center point of the 3D model;
determine a second line between each calibrated fixed viewpoint and the center point;
calculate the included angle between the first line and each second line; and
determine, from the plurality of calibrated fixed viewpoints, the plurality of fixed viewpoints closest to the viewpoint corresponding to the pose according to the included angle between the first line and each second line.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the 3D model rendering method of any one of claims 1 to 7.
16. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the 3D model rendering method of any one of claims 1 to 7.
17. A method for rendering a 3D model, comprising:
determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the viewpoint corresponding to the pose that currently needs to be rendered;
obtaining visible patches to be rendered of the 3D model at each target fixed viewpoint; and
rendering the 3D model according to the visible patches to be rendered of the 3D model at each target fixed viewpoint.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010118219.7A CN111275803B (en) | 2020-02-25 | 2020-02-25 | 3D model rendering method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111275803A true CN111275803A (en) | 2020-06-12 |
CN111275803B CN111275803B (en) | 2023-06-02 |
Family
ID=71000381
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010118219.7A Active CN111275803B (en) | 2020-02-25 | 2020-02-25 | 3D model rendering method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111275803B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112489203A (en) * | 2020-12-08 | 2021-03-12 | 网易(杭州)网络有限公司 | Model processing method, model processing apparatus, electronic device, and storage medium |
CN112562065A (en) * | 2020-12-17 | 2021-03-26 | 深圳市大富网络技术有限公司 | Rendering method, system and device of virtual object in virtual world |
WO2022063260A1 (en) * | 2020-09-25 | 2022-03-31 | 华为云计算技术有限公司 | Rendering method and apparatus, and device |
Citations (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002032744A (en) * | 2000-07-14 | 2002-01-31 | Komatsu Ltd | Device and method for three-dimensional modeling and three-dimensional image generation |
US20040240741A1 (en) * | 2003-05-30 | 2004-12-02 | Aliaga Daniel G. | Method and system for creating interactive walkthroughs of real-world environment from set of densely captured images |
JP2005063041A (en) * | 2003-08-08 | 2005-03-10 | Olympus Corp | Three-dimensional modeling apparatus, method, and program |
CN1885155A (en) * | 2005-06-20 | 2006-12-27 | 钟明 | Digital ball-screen cinema making method |
JP2007200307A (en) * | 2005-12-27 | 2007-08-09 | Namco Bandai Games Inc | Image generating device, program and information storage medium |
US20080074312A1 (en) * | 2006-08-31 | 2008-03-27 | Jack Cross | System and method for 3d radar image rendering |
CN101458824A (en) * | 2009-01-08 | 2009-06-17 | 浙江大学 | Hologram irradiation rendering method based on web |
CN101529924A (en) * | 2006-10-02 | 2009-09-09 | 株式会社东芝 | Method, apparatus, and computer program product for generating stereoscopic image |
CN103093491A (en) * | 2013-01-18 | 2013-05-08 | 浙江大学 | Three-dimensional model high sense of reality virtuality and reality combination rendering method based on multi-view video |
US20140368495A1 (en) * | 2013-06-18 | 2014-12-18 | Institute For Information Industry | Method and system for displaying multi-viewpoint images and non-transitory computer readable storage medium thereof |
CN104376552A (en) * | 2014-09-19 | 2015-02-25 | 四川大学 | Virtual-real registering algorithm of 3D model and two-dimensional image |
US20150279120A1 (en) * | 2014-03-28 | 2015-10-01 | Fujifilm Corporation | Three dimensional orientation configuration apparatus, method and non-transitory computer readable medium |
EP3065109A2 (en) * | 2015-03-06 | 2016-09-07 | Sony Interactive Entertainment Inc. | System, device and method of 3d printing |
US20160260244A1 (en) * | 2015-03-05 | 2016-09-08 | Seiko Epson Corporation | Three-dimensional image processing apparatus and three-dimensional image processing system |
US9619105B1 (en) * | 2014-01-30 | 2017-04-11 | Aquifi, Inc. | Systems and methods for gesture based interaction with viewpoint dependent user interfaces |
WO2017113731A1 (en) * | 2015-12-28 | 2017-07-06 | 乐视控股(北京)有限公司 | 360-degree panoramic displaying method and displaying module, and mobile terminal |
CN107333121A (en) * | 2017-06-27 | 2017-11-07 | 山东大学 | The immersion solid of moving view point renders optical projection system and its method on curve screens |
CN107481312A (en) * | 2016-06-08 | 2017-12-15 | 腾讯科技(深圳)有限公司 | A kind of image rendering and device based on volume drawing |
CN108154548A (en) * | 2017-12-06 | 2018-06-12 | 北京像素软件科技股份有限公司 | Image rendering method and device |
CN108920233A (en) * | 2018-06-22 | 2018-11-30 | 百度在线网络技术(北京)有限公司 | Panorama method for page jump, equipment and storage medium |
CN108961371A (en) * | 2017-05-19 | 2018-12-07 | 传线网络科技(上海)有限公司 | Panorama starts page and APP display methods, processing unit and mobile terminal |
CN109242943A (en) * | 2018-08-21 | 2019-01-18 | 腾讯科技(深圳)有限公司 | A kind of image rendering method, device and image processing equipment, storage medium |
US20190139297A1 (en) * | 2017-11-07 | 2019-05-09 | Microsoft Technology Licensing, Llc | 3d skeletonization using truncated epipolar lines |
CN110163943A (en) * | 2018-11-21 | 2019-08-23 | 深圳市腾讯信息技术有限公司 | The rendering method and device of image, storage medium, electronic device |
Patent Citations (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002032744A (en) * | 2000-07-14 | 2002-01-31 | Komatsu Ltd | Device and method for three-dimensional modeling and three-dimensional image generation |
US20040240741A1 (en) * | 2003-05-30 | 2004-12-02 | Aliaga Daniel G. | Method and system for creating interactive walkthroughs of real-world environment from set of densely captured images |
JP2005063041A (en) * | 2003-08-08 | 2005-03-10 | Olympus Corp | Three-dimensional modeling apparatus, method, and program |
CN1885155A (en) * | 2005-06-20 | 2006-12-27 | 钟明 | Digital ball-screen cinema making method |
JP2007200307A (en) * | 2005-12-27 | 2007-08-09 | Namco Bandai Games Inc | Image generating device, program and information storage medium |
US20080074312A1 (en) * | 2006-08-31 | 2008-03-27 | Jack Cross | System and method for 3d radar image rendering |
CN101529924A (en) * | 2006-10-02 | 2009-09-09 | Toshiba Corporation | Method, apparatus, and computer program product for generating stereoscopic image |
CN101458824A (en) * | 2009-01-08 | 2009-06-17 | Zhejiang University | Web-based hologram irradiation rendering method |
CN103093491A (en) * | 2013-01-18 | 2013-05-08 | Zhejiang University | High-realism virtual-real fusion rendering method for three-dimensional models based on multi-view video |
US20140368495A1 (en) * | 2013-06-18 | 2014-12-18 | Institute For Information Industry | Method and system for displaying multi-viewpoint images and non-transitory computer readable storage medium thereof |
US9619105B1 (en) * | 2014-01-30 | 2017-04-11 | Aquifi, Inc. | Systems and methods for gesture based interaction with viewpoint dependent user interfaces |
US20150279120A1 (en) * | 2014-03-28 | 2015-10-01 | Fujifilm Corporation | Three dimensional orientation configuration apparatus, method and non-transitory computer readable medium |
CN104376552A (en) * | 2014-09-19 | 2015-02-25 | Sichuan University | Virtual-real registration algorithm for 3D models and two-dimensional images |
US20160260244A1 (en) * | 2015-03-05 | 2016-09-08 | Seiko Epson Corporation | Three-dimensional image processing apparatus and three-dimensional image processing system |
EP3065109A2 (en) * | 2015-03-06 | 2016-09-07 | Sony Interactive Entertainment Inc. | System, device and method of 3d printing |
WO2017113731A1 (en) * | 2015-12-28 | 2017-07-06 | Le Holdings (Beijing) Co., Ltd. | 360-degree panoramic displaying method and displaying module, and mobile terminal |
CN107481312A (en) * | 2016-06-08 | 2017-12-15 | Tencent Technology (Shenzhen) Co., Ltd. | Image rendering method and apparatus based on volume rendering |
CN108961371A (en) * | 2017-05-19 | 2018-12-07 | Chuanxian Network Technology (Shanghai) Co., Ltd. | Panoramic startup page and APP display method, processing apparatus, and mobile terminal |
CN107333121A (en) * | 2017-06-27 | 2017-11-07 | Shandong University | Immersive stereoscopic rendering projection system and method for moving viewpoints on curved screens |
US20190139297A1 (en) * | 2017-11-07 | 2019-05-09 | Microsoft Technology Licensing, Llc | 3d skeletonization using truncated epipolar lines |
CN108154548A (en) * | 2017-12-06 | 2018-06-12 | Beijing Pixel Software Technology Co., Ltd. | Image rendering method and device |
CN108920233A (en) * | 2018-06-22 | 2018-11-30 | Baidu Online Network Technology (Beijing) Co., Ltd. | Panoramic page jump method, device, and storage medium |
CN109242943A (en) * | 2018-08-21 | 2019-01-18 | Tencent Technology (Shenzhen) Co., Ltd. | Image rendering method and apparatus, image processing device, and storage medium |
CN110163943A (en) * | 2018-11-21 | 2019-08-23 | Shenzhen Tencent Information Technology Co., Ltd. | Image rendering method and apparatus, storage medium, and electronic device |
Non-Patent Citations (6)
Title |
---|
JAYARAM K. UDUPA et al.: "Surface and Volume Rendering in Three-Dimensional Imaging: A Comparison", Journal of Digital Imaging * |
MARC ALEXA et al.: "Computing and Rendering Point Set Surfaces", IEEE Transactions on Visualization and Computer Graphics * |
LIU Xiaocong et al.: "Terrain Rendering Algorithm Based on Ray Casting with Full GPU Implementation", Computer Simulation * |
LIU Zhi et al.: "3D Model Retrieval Method Based on Feature Lines", Journal of Computer-Aided Design & Computer Graphics * |
ZHAN Weiwei et al.: "Optimization Method for Massive 3D Situation Display in an Autonomous and Controllable Environment", Command Information System and Technology * |
CAO Qinghao: "Research on Real-Time Rendering Technology for R Elbow Joint Rehabilitation Assessment and Training", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022063260A1 (en) * | 2020-09-25 | 2022-03-31 | Huawei Cloud Computing Technologies Co., Ltd. | Rendering method and apparatus, and device |
CN112489203A (en) * | 2020-12-08 | 2021-03-12 | NetEase (Hangzhou) Network Co., Ltd. | Model processing method, model processing apparatus, electronic device, and storage medium |
CN112489203B (en) * | 2020-12-08 | 2024-06-04 | NetEase (Hangzhou) Network Co., Ltd. | Model processing method, model processing device, electronic equipment and storage medium |
CN112562065A (en) * | 2020-12-17 | 2021-03-26 | Shenzhen Dafu Network Technology Co., Ltd. | Rendering method, system and device for virtual objects in a virtual world |
Also Published As
Publication number | Publication date |
---|---|
CN111275803B (en) | 2023-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111722245B (en) | Positioning method, positioning device and electronic equipment | |
JP2021061041A (en) | Foveated geometry tessellation | |
CN111275803B (en) | 3D model rendering method, device, equipment and storage medium | |
CN111860167B (en) | Face fusion model acquisition method, face fusion model acquisition device and storage medium | |
US11074437B2 (en) | Method, apparatus, electronic device and storage medium for expression driving | |
CN110989878B (en) | Animation display method and device in applet, electronic equipment and storage medium | |
CN112270669B (en) | Human body 3D key point detection method, model training method and related devices | |
CN107408011B (en) | Dynamically merging multiple screens into one viewport | |
CN110443230A (en) | Face fusion method, apparatus and electronic equipment | |
US11120617B2 (en) | Method and apparatus for switching panoramic scene | |
EP4231242A1 (en) | Graphics rendering method and related device thereof | |
CN111291218B (en) | Video fusion method, device, electronic equipment and readable storage medium | |
CN115482325B (en) | Picture rendering method, device, system, equipment and medium | |
CN113870399B (en) | Expression driving method and device, electronic equipment and storage medium | |
CN111949816B (en) | Positioning processing method, device, electronic equipment and storage medium | |
US10675538B2 (en) | Program, electronic device, system, and method for determining resource allocation for executing rendering while predicting player's intent | |
CN111275827A (en) | Edge-based augmented reality three-dimensional tracking registration method and device and electronic equipment | |
CN110631603B (en) | Vehicle navigation method and device | |
CN110727383A (en) | Touch interaction method and device based on small program, electronic equipment and storage medium | |
CN111915642A (en) | Image sample generation method, device, equipment and readable storage medium | |
CN111369571A (en) | Three-dimensional object pose accuracy judgment method and device and electronic equipment | |
CN111898489B (en) | Method and device for marking palm pose, electronic equipment and storage medium | |
CN114677469A (en) | Method and device for rendering target image, electronic equipment and storage medium | |
CN112150380A (en) | Method and device for correcting image, electronic equipment and readable storage medium | |
CN113129457B (en) | Texture generation method, device, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||