CN111275803B - 3D model rendering method, device, equipment and storage medium - Google Patents

3D model rendering method, device, equipment and storage medium

Info

Publication number
CN111275803B
Authority
CN
China
Prior art keywords
model, rendered, viewpoint, fixed, viewpoints
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010118219.7A
Other languages
Chinese (zh)
Other versions
CN111275803A (en)
Inventor
陈思利
刘赵梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010118219.7A
Publication of CN111275803A
Application granted
Publication of CN111275803B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering

Abstract

The application discloses a 3D model rendering method, apparatus, electronic device, and storage medium. The specific implementation scheme is as follows: determine the pose currently to be rendered; determine a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to that pose; obtain the to-be-rendered visible patches of the 3D model at each target fixed viewpoint; and, on a CPU, render the 3D model according to the to-be-rendered visible patches at each target fixed viewpoint. In this way, the visible patches at each target fixed viewpoint can be obtained and the 3D model rendered quickly on the CPU, which avoids wasting resources on patches that are occluded by other patches during rendering, reduces power consumption when running on hardware, and accelerates rendering.

Description

3D model rendering method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of rendering technologies, and in particular, to a 3D model rendering method, apparatus, electronic device, and non-transitory computer readable storage medium storing computer instructions.
Background
3D rendering is the process of generating computer images from 3D models. It is used in many application fields, including virtual reality, animation, and film. In particular, 3D object tracking is an important algorithm in augmented reality (AR) products: a typical 3D object tracker must render a model of the tracked 3D object (e.g., in obj format) from time to time in order to correct the pose it estimates.
In the related art, there are two general ways to render a 3D model: with a GPU (Graphics Processing Unit) or with a CPU (Central Processing Unit). When the GPU is used in an AR product, tracking a 3D object requires synchronously rendering display materials such as models and animations on the display device (e.g., the phone screen), for example 3D annotations, which can be understood as guide animations drawn according to the pose of the 3D object. If the GPU must also frequently render the 3D model needed by the tracking algorithm, that model competes with the models rendered for display, which causes the phone to heat up and throttle its clock frequency, degrading the user experience. When the CPU is used, the CPU is a serial execution unit, so rendering is slow when the number of patches is large; moreover, depending on how the 3D model was built, even patches whose normal vectors point into the half-space containing the viewpoint may still be occluded by other patches, and rendering such patches wastes resources.
Disclosure of Invention
The present application aims to solve, at least to some extent, one of the technical problems in the related art.
Therefore, a first object of the present application is to provide a 3D model rendering method that obtains the to-be-rendered visible patches at each target fixed viewpoint and renders the 3D model quickly on a CPU, which avoids wasting resources on patches occluded by other patches during rendering, reduces power consumption when running on hardware, and accelerates rendering.
A second object of the present application is to propose a 3D model rendering device.
A third object of the present application is to propose an electronic device.
A fourth object of the present application is to propose a non-transitory computer readable storage medium storing computer instructions.
A fifth object of the present application is to propose another 3D model rendering method.
To achieve the above object, a 3D model rendering method according to an embodiment of a first aspect of the present application includes:
determining the pose currently to be rendered;
determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the pose currently to be rendered;
obtaining the to-be-rendered visible patches of the 3D model at each target fixed viewpoint;
and rendering, on a CPU, the 3D model according to the to-be-rendered visible patches of the 3D model at each target fixed viewpoint.
According to one embodiment of the application, the plurality of calibrated fixed viewpoints are obtained in advance by: determining a spherical surface for the 3D model, where the spherical surface is the surface of a sphere centered on the 3D model with a preset value as its radius; and uniformly selecting a plurality of points on the spherical surface and taking the uniformly selected points as the calibrated fixed viewpoints.
According to one embodiment of the present application, after the plurality of calibrated fixed viewpoints is obtained, the method further comprises: obtaining, in advance, the visible patches of the 3D model at each calibrated fixed viewpoint.
According to one embodiment of the present application, obtaining in advance the visible patches of the 3D model at each calibrated fixed viewpoint includes: rendering the 3D model at each calibrated fixed viewpoint in turn; each time rendering of the 3D model completes, reading the Z coordinate of each visible pixel of the image space stored in the Z-buffer; and deriving the visible patches of the 3D model at each calibrated fixed viewpoint from the per-pixel Z coordinates stored in the Z-buffer after each rendering.
According to one embodiment of the application, obtaining the to-be-rendered visible patches of the 3D model at each target fixed viewpoint includes: for each target fixed viewpoint, taking the visible patches of the 3D model at that viewpoint from the precomputed visible patches at each calibrated fixed viewpoint; merging the visible patches of the 3D model across the target fixed viewpoints; and taking the merged visible patches as the to-be-rendered visible patches of the 3D model at the target fixed viewpoints.
According to one embodiment of the present application, determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the pose currently to be rendered includes: determining the viewpoint corresponding to the pose currently to be rendered; determining, from the plurality of calibrated fixed viewpoints, the plurality of fixed viewpoints closest to that viewpoint; and taking the closest plurality of fixed viewpoints as the plurality of target fixed viewpoints.
According to an embodiment of the present application, determining, from the plurality of calibrated fixed viewpoints, the plurality of fixed viewpoints closest to the viewpoint corresponding to the pose currently to be rendered includes: determining a first line between the viewpoint corresponding to the pose currently to be rendered and the center point of the 3D model; determining a second line between each calibrated fixed viewpoint and the center point; calculating the included angle between the first line and each second line; and determining, from the plurality of calibrated fixed viewpoints, the plurality of fixed viewpoints closest to the viewpoint corresponding to the pose currently to be rendered according to those included angles.
To achieve the above object, a 3D model rendering device according to an embodiment of a second aspect of the present application includes:
a to-be-rendered pose determining module, configured to determine the pose currently to be rendered;
a target viewpoint determining module, configured to determine a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the pose currently to be rendered;
a to-be-rendered visible patch obtaining module, configured to obtain the to-be-rendered visible patches of the 3D model at each target fixed viewpoint;
and a rendering module, configured to render, on a CPU, the 3D model according to the to-be-rendered visible patches of the 3D model at each target fixed viewpoint.
To achieve the above object, an electronic device according to an embodiment of a third aspect of the present application includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the 3D model rendering method of the first aspect of the present application.
To achieve the above object, a non-transitory computer readable storage medium storing computer instructions according to an embodiment of a fourth aspect of the present application includes: the computer instructions are configured to cause the computer to perform the 3D model rendering method according to the first aspect of the present application.
To achieve the above object, a 3D model rendering method according to an embodiment of a fifth aspect of the present application includes: determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the viewpoint corresponding to the pose currently to be rendered; obtaining the to-be-rendered visible patches of the 3D model at each target fixed viewpoint; and rendering the 3D model according to the to-be-rendered visible patches of the 3D model at each target fixed viewpoint.
One embodiment of the above application has the following advantages or beneficial effects. A plurality of target fixed viewpoints is determined from a plurality of calibrated fixed viewpoints, and the 3D model is rendered quickly on the CPU according to the to-be-rendered visible patches at each target fixed viewpoint. This solves the related-art problems of resources wasted on patches occluded by other patches during rendering, high power consumption when running on hardware, and slow rendering. At the same time, because the CPU renders the 3D model, the AR application no longer shares the GPU with the rendering engine used to render the displayed model, which avoids the phone heating and frequency throttling such contention causes and improves the user experience.
Other effects of the above alternative will be described below in connection with specific embodiments.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
FIG. 1 is a flow chart of a 3D model rendering method according to one embodiment of the present application.
FIG. 2 is a flow chart of a 3D model rendering method according to one specific embodiment of the present application.
FIG. 3 is a flow chart of a manner of obtaining a plurality of calibrated fixed viewpoints according to one embodiment of the present application.
FIG. 4 is a schematic diagram of determining a plurality of fixed viewpoints closest to a viewpoint corresponding to a current pose to be rendered, according to one embodiment of the present application.
Fig. 5 is a flow chart of a 3D model rendering method according to another embodiment of the present application.
Fig. 6 is a schematic structural view of a 3D model rendering apparatus according to an embodiment of the present application.
Fig. 7 is a schematic structural view of a 3D model rendering apparatus according to another embodiment of the present application.
Fig. 8 is a schematic structural view of a 3D model rendering apparatus according to still another embodiment of the present application.
Fig. 9 is a schematic structural view of an electronic device according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The application provides a 3D model rendering method, apparatus, electronic device, and non-transitory computer-readable storage medium storing computer instructions. It addresses the related-art problems of resources wasted on patches occluded by other patches, high power consumption when running on hardware, and slow rendering; at the same time, rendering the 3D model on a CPU avoids the phone heating and frequency throttling caused by the AR application running on the GPU together with the rendering engine (used to render the displayed model). The 3D model rendering method, apparatus, electronic device, and non-transitory computer-readable storage medium storing computer instructions of the embodiments of the present application are described below with reference to the accompanying drawings.
FIG. 1 is a flow chart of a 3D model rendering method according to one embodiment of the present application. It should be noted that the 3D model rendering method of the embodiments of the present application may be applied to the 3D model rendering apparatus of the embodiments, and the apparatus may be configured on an electronic device. The electronic device may be any of various devices with a display screen: a mobile terminal such as a smartphone or a tablet computer, or an AR device. The electronic device is equipped with a 3D model recognition component.
As shown in fig. 1, the 3D model rendering method may include:
s110, determining the gesture which is needed to be rendered currently.
For example, when the recognition device in the electronic device recognizes a 3D object, a model of the 3D object may be rendered, and when determining to render the 3D model, a pose that is currently required to be rendered may be determined first.
S120, determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the current gesture to be rendered.
In the embodiment of the application, according to the gesture to be rendered currently, the viewpoint corresponding to the gesture to be rendered currently is determined, then, from a plurality of calibrated fixed viewpoints, a plurality of fixed viewpoints closest to the viewpoint corresponding to the gesture to be rendered currently are determined, and then, the closest plurality of fixed viewpoints are determined to be a plurality of target fixed viewpoints.
Wherein the plurality of calibrated fixed viewpoints may be obtained in advance by: a sphere for the 3D model may be determined, wherein the sphere is a surface of a sphere having the 3D model as a center and a preset value as a radius, and then a plurality of points are uniformly selected on the sphere, and the uniformly selected points are determined as a plurality of calibrated fixed viewpoints.
In embodiments of the present application, after the plurality of calibrated fixed viewpoints is obtained, the visible patches of the 3D model at each calibrated fixed viewpoint may be obtained in advance. See the following embodiments for the specific implementation.
S130, obtaining the to-be-rendered visible patches of the 3D model at each target fixed viewpoint.
In the embodiment of the application, for each target fixed viewpoint, the visible patches of the 3D model at that viewpoint are taken from the precomputed visible patches at each calibrated fixed viewpoint; the visible patches across the target fixed viewpoints are then merged, and the merged visible patches are taken as the to-be-rendered visible patches of the 3D model at the target fixed viewpoints.
S140, rendering, on a CPU, the 3D model according to the to-be-rendered visible patches of the 3D model at each target fixed viewpoint.
That is, once the to-be-rendered visible patches of the 3D model at each target fixed viewpoint are obtained, the 3D model can be rendered on the central processing unit (CPU) according to those patches.
According to the 3D model rendering method of the embodiments of the present application, the pose currently to be rendered is determined; a plurality of target fixed viewpoints is determined from a plurality of calibrated fixed viewpoints according to that pose; the to-be-rendered visible patches of the 3D model at each target fixed viewpoint are obtained; and finally, on a central processing unit (CPU), the 3D model is rendered according to those patches. The method renders the 3D model quickly on the CPU, avoids wasting resources on patches occluded by other patches during rendering, reduces power consumption when running on hardware, and accelerates rendering; at the same time, rendering on the CPU avoids the phone heating and frequency throttling caused by the AR application running on the GPU together with the rendering engine (used to render the displayed model), improving the user experience.
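To make the S110-S140 flow concrete, the following is a minimal Python sketch. It is illustrative only: the helper names (viewpoint_from_pose, nearest_fixed_viewpoints, cpu_rasterize), the model.center attribute, and the use of integer patch indices are assumptions rather than anything the patent prescribes; sketches of the viewpoint-selection and visibility helpers appear alongside the corresponding steps below.

```python
# Illustrative pipeline, assuming per-viewpoint visibility sets precomputed
# offline at the calibrated fixed viewpoints (see the sketches below).
def render_3d_model(model, pose, calibrated_viewpoints, visible_sets, k=3):
    # S110/S120: derive the viewpoint from the pose, then pick the k
    # calibrated viewpoints nearest to it (hypothetical helpers).
    view = viewpoint_from_pose(pose)
    targets = nearest_fixed_viewpoints(view, model.center, calibrated_viewpoints, k)
    # S130: union the visible-patch sets of the target viewpoints; a set
    # union merges and de-duplicates in one step.
    patches = set()
    for i in targets:
        patches |= visible_sets[i]
    # S140: rasterize only those patches on the CPU (hypothetical helper).
    return cpu_rasterize(model, pose, patches)
```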
FIG. 2 is a flow chart of a 3D model rendering method according to one specific embodiment of the present application. As shown in fig. 2, the 3D model rendering method may include:
s210, determining the current gesture to be rendered.
For example, when the recognition device in the electronic device recognizes a 3D object, a model of the 3D object may be rendered, and when determining to render the 3D model, a pose that is currently required to be rendered may be determined first.
S220, determining a viewpoint corresponding to the current gesture to be rendered according to the gesture to be rendered.
For example, the 3D model may be rendered from a straight ahead angle, and thus a viewpoint corresponding to the straight ahead angle may be determined.
S230, determining a plurality of fixed viewpoints closest to the viewpoint corresponding to the current gesture to be rendered from the plurality of calibrated fixed viewpoints.
In the embodiment of the present application, as shown in fig. 3, a plurality of calibrated fixed viewpoints may be obtained in advance by:
s310, determining a spherical surface for the 3D model, wherein the spherical surface is a spherical surface with the 3D model as the center and a preset value as a radius.
In order to be able to ensure that a point on the surface of a sphere is able to see the 3D object completely, in an embodiment of the present application, the surface of a sphere that is centered on the 3D model, with a preset value being the radius, can be determined.
S320, uniformly selecting a plurality of points on the spherical surface, and determining the uniformly selected points as a plurality of calibrated fixed viewpoints.
It is known through a large number of experiments that in order to avoid a large or small view angle range formed by a plurality of target fixed view points when determining a plurality of target fixed view points later, and improve the rendering effect, for example, 20 points can be uniformly selected on the surface of a sphere formed by taking a 3D model as a center and a preset value as a radius, and the selected 20 points are determined as a plurality of calibrated fixed view points.
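As an illustration, a Fibonacci lattice is one common way to place roughly uniform points on a sphere; the patent only requires that the points be selected uniformly, so the specific sampling scheme below is an assumption.

```python
import numpy as np

def calibrated_viewpoints(center, radius, n=20):
    """Sample n roughly uniform points on the sphere of the given radius
    centered on the model (a Fibonacci lattice)."""
    i = np.arange(n)
    golden = (1 + 5 ** 0.5) / 2
    z = 1 - (2 * i + 1) / n            # evenly spaced heights in (-1, 1)
    theta = 2 * np.pi * i / golden     # golden-ratio steps in azimuth
    r_xy = np.sqrt(1 - z ** 2)
    points = np.stack([r_xy * np.cos(theta), r_xy * np.sin(theta), z], axis=1)
    return np.asarray(center) + radius * points

# e.g. 20 calibrated fixed viewpoints on a sphere of radius 2 around the origin
viewpoints = calibrated_viewpoints(center=[0.0, 0.0, 0.0], radius=2.0, n=20)
```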
In the embodiment of the present application, after the plurality of calibrated fixed viewpoints has been determined, the plurality of fixed viewpoints closest to the viewpoint corresponding to the pose currently to be rendered may be determined from among them.
In an embodiment of the present application, a first line between the viewpoint corresponding to the pose currently to be rendered and the center point of the 3D model may be determined, along with a second line between each calibrated fixed viewpoint and the center point; the included angle between the first line and each second line is then calculated, and, according to those included angles, the plurality of fixed viewpoints closest to the viewpoint corresponding to the pose currently to be rendered is determined from the plurality of calibrated fixed viewpoints.
It should be noted that the smaller the included angle between the first line and a second line, the closer the calibrated fixed viewpoint on that second line is to the viewpoint corresponding to the pose currently to be rendered.
For example, as shown in fig. 4, the viewpoint corresponding to the pose currently to be rendered is S1, the center point of the 3D model is Z1, and the calibrated fixed viewpoints are G1, G2, G3, G4, and G5. A first line between S1 and Z1 is determined, along with the second lines between each of G1 through G5 and Z1; the included angle between the first line and each second line is then calculated; and, according to those angles, the 3 fixed viewpoints closest to S1 are determined from among G1 through G5, say G3, G4, and G5.
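A minimal sketch of this angle test with NumPy; the function and argument names are illustrative. The dot product of the normalized direction vectors gives the cosine of each included angle, and the k smallest angles identify the target viewpoints.

```python
import numpy as np

def nearest_fixed_viewpoints(view, center, fixed_viewpoints, k=3):
    """Return the indices of the k calibrated viewpoints whose line to the
    model center makes the smallest angle with the line view -> center."""
    center = np.asarray(center, dtype=float)
    first = np.asarray(view, dtype=float) - center       # first connecting line
    first /= np.linalg.norm(first)
    angles = []
    for g in fixed_viewpoints:
        second = np.asarray(g, dtype=float) - center     # second connecting line
        second /= np.linalg.norm(second)
        cos = np.clip(np.dot(first, second), -1.0, 1.0)  # guard rounding errors
        angles.append(np.arccos(cos))
    return list(np.argsort(angles)[:k])                  # smaller angle = closer
```

In the FIG. 4 example, nearest_fixed_viewpoints(S1, Z1, [G1, G2, G3, G4, G5]) would return the indices of G3, G4, and G5.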
Optionally, in an embodiment of the present application, after the plurality of calibrated fixed viewpoints is obtained, the visible patches of the 3D model at each calibrated fixed viewpoint are also obtained in advance.
In the embodiment of the application, the 3D model may be rendered at each calibrated fixed viewpoint in turn; each time rendering of the 3D model completes, the Z coordinate of each visible pixel of the image space stored in the Z-buffer is read; the visible patches of the 3D model at each calibrated fixed viewpoint are then derived from those per-pixel Z coordinates.
For example, after the 20 calibrated fixed viewpoints are obtained, the 3D model may be rendered at each of the 20 viewpoints in turn, recording the visible pixels at each viewpoint and placing the Z coordinate of each visible pixel in the Z-buffer. Each time rendering completes, the Z coordinate of each visible pixel of the image space stored in the Z-buffer is read, and the visible patches of the 3D model at that calibrated fixed viewpoint are derived from it. That is, the visible patches of the 3D model at each calibrated fixed viewpoint, identified from the Z-buffer, may be recorded into a corresponding set (denoted Si, where i = 1..20); the patches recorded in each set are the visible patches at the corresponding calibrated fixed viewpoint.
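A sketch of this precomputation. Besides the Z-buffer it keeps an item buffer of patch indices, so the set Si can be read off directly; the rasterize_patch helper, which would yield the pixels a patch covers and their depths at the given viewpoint, is a hypothetical placeholder for any CPU rasterizer with a depth test.

```python
import numpy as np

def visible_patches(model, viewpoint, width, height):
    """Render the model once at `viewpoint` and return the indices of
    patches that own at least one visible pixel (the set S_i)."""
    zbuf = np.full((height, width), np.inf)   # Z-buffer: nearest depth per pixel
    ibuf = np.full((height, width), -1)       # item buffer: owning patch index
    for idx, patch in enumerate(model.patches):
        # rasterize_patch is assumed to yield (x, y, depth) samples
        for x, y, z in rasterize_patch(patch, viewpoint, width, height):
            if z < zbuf[y, x]:                # standard depth test
                zbuf[y, x] = z
                ibuf[y, x] = idx
    return set(int(i) for i in ibuf[ibuf >= 0])

# offline: visible_sets = [visible_patches(model, v, 640, 480) for v in viewpoints]
```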
S240, taking the closest plurality of fixed viewpoints as the plurality of target fixed viewpoints.
That is, after the plurality of fixed viewpoints closest to the viewpoint corresponding to the pose currently to be rendered has been determined from the plurality of calibrated fixed viewpoints, those closest fixed viewpoints are taken as the plurality of target fixed viewpoints.
S250, obtaining the to-be-rendered visible patches of the 3D model at each target fixed viewpoint.
In the embodiment of the application, for each target fixed viewpoint, the visible patches of the 3D model at that viewpoint are taken from the precomputed visible patches at each calibrated fixed viewpoint; the visible patches across the target fixed viewpoints are then merged and de-duplicated, and the merged visible patches are taken as the to-be-rendered visible patches of the 3D model at the target fixed viewpoints.
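Under the same patch-index assumption, the merge-and-deduplicate step reduces to a set union, since sets discard duplicates automatically:

```python
def patches_to_render(target_indices, visible_sets):
    """Union of the visibility sets of the chosen target viewpoints."""
    merged = set()
    for i in target_indices:
        merged |= visible_sets[i]   # union = merge + de-duplicate
    return merged
```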
S260, rendering, on a central processing unit (CPU), the 3D model according to the to-be-rendered visible patches of the 3D model at each target fixed viewpoint.
That is, once the to-be-rendered visible patches of the 3D model at each target fixed viewpoint are obtained, the 3D model can be rendered on the central processing unit (CPU) according to those patches.
According to the 3D model rendering method of this embodiment, the pose currently to be rendered is determined; the viewpoint corresponding to that pose is determined; the plurality of fixed viewpoints closest to that viewpoint is determined from the plurality of calibrated fixed viewpoints and taken as the target fixed viewpoints; the to-be-rendered visible patches of the 3D model at each target fixed viewpoint are obtained; and the 3D model is rendered on a central processing unit (CPU) according to those patches. The method computes the visible patches efficiently and renders the 3D model quickly on the CPU, avoiding wasted resources on patches occluded by other patches during rendering, reducing power consumption when running on hardware, and accelerating rendering; at the same time, rendering on the CPU avoids the phone heating and frequency throttling caused by the AR application running on the GPU together with the rendering engine (used to render the displayed model), improving the user experience.
Fig. 5 is a flow chart of a 3D model rendering method according to another embodiment of the present application. As shown in fig. 5, the 3D model rendering method may include:
s510, determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the viewpoints corresponding to the current gesture to be rendered.
S520, obtaining the visual surface patch to be rendered of the 3D model under each target fixed viewpoint.
And S530, rendering the 3D model according to the visual patches to be rendered of the 3D model under each target fixed viewpoint.
According to the 3D model rendering method, a plurality of target fixed viewpoints can be determined from a plurality of calibrated fixed viewpoints according to the viewpoints corresponding to the current to-be-rendered gesture, then the to-be-rendered visual surface patch of the 3D model under each target fixed viewpoint is obtained, and then the 3D model is rendered according to the to-be-rendered visual surface patch of the 3D model under each target fixed viewpoint. The method can efficiently calculate the visual surface patch, avoids resource waste caused by shielding of part of the surface patch by other surface patches during rendering, reduces power consumption during operation on hardware, and accelerates rendering speed; meanwhile, the CPU is adopted to render the 3D model, so that the technical problems of mobile phone heating, frequency reduction and the like caused by the operation of the AR application and a rendering engine (used for rendering the displayed model) together in the GPU are avoided, and the user experience is improved.
Corresponding to the 3D model rendering methods provided in the foregoing embodiments, an embodiment of the present application further provides a 3D model rendering apparatus. Since the apparatus corresponds to the methods of the foregoing embodiments, the implementation described for the 3D model rendering method also applies to the apparatus and is not repeated here. Fig. 6 is a schematic structural view of a 3D model rendering apparatus according to an embodiment of the present application.
As shown in fig. 6, the 3D model rendering apparatus 600 includes: a to-be-rendered pose determining module 610, a target viewpoint determining module 620, a to-be-rendered visible patch obtaining module 630, and a rendering module 640. Wherein:
the to-be-rendered pose determining module 610 is configured to determine the pose currently to be rendered.
The target viewpoint determining module 620 is configured to determine a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the pose currently to be rendered. As an example, the target viewpoint determining module is specifically configured to: determine the viewpoint corresponding to the pose currently to be rendered; determine, from the plurality of calibrated fixed viewpoints, the plurality of fixed viewpoints closest to that viewpoint; and take the closest plurality of fixed viewpoints as the plurality of target fixed viewpoints.
In the embodiment of the present application, the target viewpoint determining module 620 is specifically configured to: determine a first line between the viewpoint corresponding to the pose currently to be rendered and the center point of the 3D model; determine a second line between each calibrated fixed viewpoint and the center point; calculate the included angle between the first line and each second line; and determine, from the plurality of calibrated fixed viewpoints, the plurality of fixed viewpoints closest to the viewpoint corresponding to the pose currently to be rendered according to those included angles.
The to-be-rendered visible patch obtaining module 630 is configured to obtain the to-be-rendered visible patches of the 3D model at each target fixed viewpoint. As an example, the module 630 is specifically configured to: for each target fixed viewpoint, take the visible patches of the 3D model at that viewpoint from the precomputed visible patches at each calibrated fixed viewpoint; merge the visible patches of the 3D model across the target fixed viewpoints; and take the merged visible patches as the to-be-rendered visible patches of the 3D model at the target fixed viewpoints.
The rendering module 640 is configured to render, on a central processing unit (CPU), the 3D model according to the to-be-rendered visible patches of the 3D model at each target fixed viewpoint.
In an embodiment of the present application, as shown in fig. 7, the 3D model rendering apparatus further includes a pre-calibration module 650, configured to determine a spherical surface for the 3D model, where the spherical surface is the surface of a sphere centered on the 3D model with a preset value as its radius, and to uniformly select a plurality of points on the spherical surface and take the uniformly selected points as the calibrated fixed viewpoints.
In an embodiment of the present application, as shown in fig. 8, the 3D model rendering apparatus further includes an obtaining module 660, configured to obtain in advance the visible patches of the 3D model at each calibrated fixed viewpoint. As an example, the obtaining module 660 is specifically configured to: render the 3D model at each calibrated fixed viewpoint in turn; each time rendering of the 3D model completes, read the Z coordinate of each visible pixel of the image space stored in the Z-buffer; and derive the visible patches of the 3D model at each calibrated fixed viewpoint from those per-pixel Z coordinates.
According to the 3D model rendering apparatus of the embodiments of the present application, the pose currently to be rendered is determined; a plurality of target fixed viewpoints is determined from a plurality of calibrated fixed viewpoints according to that pose; the to-be-rendered visible patches of the 3D model at each target fixed viewpoint are obtained; and finally the 3D model is rendered on a central processing unit (CPU) according to those patches. The apparatus thus renders the 3D model quickly on the CPU, avoids wasting resources on patches occluded by other patches during rendering, reduces power consumption when running on hardware, and accelerates rendering; at the same time, rendering on the CPU avoids the phone heating and frequency throttling caused by the AR application running on the GPU together with the rendering engine (used to render the displayed model), improving the user experience.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 9, a block diagram of an electronic device according to a 3D model rendering method according to an embodiment of the present application is shown. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 9, the electronic device includes: one or more processors 901, a memory 902, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Likewise, multiple electronic devices may be connected, each providing part of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). Fig. 9 takes one processor 901 as an example.
Memory 902 is a non-transitory computer-readable storage medium provided herein. Wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the 3D model rendering methods provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to execute the 3D model rendering method provided by the present application.
The memory 902, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs and non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the 3D model rendering method in the embodiments of the present application (e.g., the to-be-rendered pose determining module 610, the target viewpoint determining module 620, the to-be-rendered visible patch obtaining module 630, and the rendering module 640 shown in fig. 6). By running the non-transitory software programs, instructions, and modules stored in the memory 902, the processor 901 executes the various functional applications and data processing of the server, i.e., implements the 3D model rendering method of the above method embodiments.
The memory 902 may include a program storage area and a data storage area; the program storage area may store an operating system and at least one application required for a function, and the data storage area may store data created through use of the 3D-model-rendering electronic device, and the like. In addition, the memory 902 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 902 optionally includes memory located remotely from the processor 901, which may be connected to the 3D-model-rendering electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the 3D model rendering method may further include an input device 903 and an output device 904. The processor 901, memory 902, input device 903, and output device 904 may be connected by a bus or in other ways; fig. 9 takes connection by a bus as an example.
The input device 903 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the 3D-model-rendering electronic device; examples include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a trackball, and a joystick. The output device 904 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs (also referred to as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiments of the present application, the pose currently to be rendered is determined; a plurality of target fixed viewpoints is determined from a plurality of calibrated fixed viewpoints according to that pose; the to-be-rendered visible patches of the 3D model at each target fixed viewpoint are obtained; and finally the 3D model is rendered on a central processing unit (CPU) according to those patches. The scheme renders the 3D model quickly on the CPU, avoids wasting resources on patches occluded by other patches during rendering, reduces power consumption when running on hardware, and accelerates rendering; at the same time, rendering on the CPU avoids the phone heating and frequency throttling caused by the AR application running on the GPU together with the rendering engine (used to render the displayed model), improving the user experience.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (13)

1. A method of rendering a 3D model, comprising:
determining the pose currently to be rendered;
determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the pose currently to be rendered;
obtaining the to-be-rendered visible patches of the 3D model at each target fixed viewpoint;
rendering, on a CPU (central processing unit), the 3D model according to the to-be-rendered visible patches of the 3D model at each target fixed viewpoint;
wherein the plurality of calibrated fixed viewpoints is obtained in advance by:
determining a spherical surface for the 3D model, wherein the spherical surface is the surface of a sphere centered on the 3D model with a preset value as its radius;
uniformly selecting a plurality of points on the spherical surface and taking the uniformly selected points as the calibrated fixed viewpoints;
and wherein determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the pose currently to be rendered comprises:
determining the viewpoint corresponding to the pose currently to be rendered;
determining, from the plurality of calibrated fixed viewpoints, a plurality of fixed viewpoints closest to the viewpoint corresponding to the pose currently to be rendered;
and taking the closest plurality of fixed viewpoints as the plurality of target fixed viewpoints.
2. The method of claim 1, wherein, after the plurality of calibrated fixed viewpoints is obtained, the method further comprises:
obtaining, in advance, the visible patches of the 3D model at each calibrated fixed viewpoint.
3. The method according to claim 2, wherein obtaining in advance the visible patches of the 3D model at each calibrated fixed viewpoint comprises:
rendering the 3D model at each calibrated fixed viewpoint in turn;
each time rendering of the 3D model completes, reading the Z coordinate of each visible pixel of the image space stored in the Z-buffer;
and deriving the visible patches of the 3D model at each calibrated fixed viewpoint from the per-pixel Z coordinates stored in the Z-buffer after each rendering.
4. A method according to any one of claims 1 to 3, wherein obtaining the to-be-rendered visible patches of the 3D model at each target fixed viewpoint comprises:
for each target fixed viewpoint, taking the visible patches of the 3D model at that viewpoint from the visible patches of the 3D model at each calibrated fixed viewpoint;
merging the visible patches of the 3D model across the target fixed viewpoints;
and taking the merged visible patches as the to-be-rendered visible patches of the 3D model at the target fixed viewpoints.
5. The method of claim 1, wherein determining, from the plurality of calibrated fixed viewpoints, a plurality of fixed viewpoints closest to the viewpoint corresponding to the pose currently to be rendered comprises:
determining a first line between the viewpoint corresponding to the pose currently to be rendered and the center point of the 3D model;
determining a second line between each calibrated fixed viewpoint and the center point;
calculating the included angle between the first line and each second line;
and determining, from the plurality of calibrated fixed viewpoints, a plurality of fixed viewpoints closest to the viewpoint corresponding to the pose currently to be rendered according to the included angles between the first line and each second line.
6. A 3D model rendering apparatus, comprising:
a to-be-rendered pose determining module, configured to determine the pose currently to be rendered;
a target viewpoint determining module, configured to determine a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to the pose currently to be rendered;
a to-be-rendered visible patch obtaining module, configured to obtain the to-be-rendered visible patches of the 3D model at each target fixed viewpoint;
a rendering module, configured to render, on a central processing unit (CPU), the 3D model according to the to-be-rendered visible patches of the 3D model at each target fixed viewpoint;
wherein the apparatus further comprises:
a pre-calibration module, configured to determine a spherical surface for the 3D model, wherein the spherical surface is the surface of a sphere centered on the 3D model with a preset value as its radius, and to uniformly select a plurality of points on the spherical surface and take the uniformly selected points as the calibrated fixed viewpoints;
and wherein the target viewpoint determining module is specifically configured to:
determine the viewpoint corresponding to the pose currently to be rendered;
determine, from the plurality of calibrated fixed viewpoints, a plurality of fixed viewpoints closest to the viewpoint corresponding to the pose currently to be rendered;
and take the closest plurality of fixed viewpoints as the plurality of target fixed viewpoints.
7. The apparatus as recited in claim 6, further comprising:
an obtaining module, configured to obtain in advance the visible patches of the 3D model at each calibrated fixed viewpoint.
8. The apparatus of claim 7, wherein the obtaining module is specifically configured to:
render the 3D model at each calibrated fixed viewpoint in turn;
each time rendering of the 3D model completes, read the Z coordinate of each visible pixel of the image space stored in the Z-buffer;
and derive the visible patches of the 3D model at each calibrated fixed viewpoint from the per-pixel Z coordinates stored in the Z-buffer after each rendering.
9. The apparatus according to any one of claims 6 to 8, wherein the to-be-rendered visible patch obtaining module is specifically configured to:
for each target fixed viewpoint, take the visible patches of the 3D model at that viewpoint from the visible patches of the 3D model at each calibrated fixed viewpoint;
merge the visible patches of the 3D model across the target fixed viewpoints;
and take the merged visible patches as the to-be-rendered visible patches of the 3D model at the target fixed viewpoints.
10. The apparatus of claim 6, wherein the target viewpoint determining module is specifically configured to:
determine a first line between the viewpoint corresponding to the pose currently to be rendered and the center point of the 3D model;
determine a second line between each calibrated fixed viewpoint and the center point;
calculate the included angle between the first line and each second line;
and determine, from the plurality of calibrated fixed viewpoints, a plurality of fixed viewpoints closest to the viewpoint corresponding to the pose currently to be rendered according to the included angles between the first line and each second line.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the 3D model rendering method of any one of claims 1 to 5.
12. A non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the 3D model rendering method of any one of claims 1 to 5.
13. A 3D model rendering method, comprising:
determining a plurality of target fixed viewpoints from a plurality of calibrated fixed viewpoints according to a viewpoint corresponding to the current pose to be rendered;
obtaining to-be-rendered visible patches of the 3D model under each target fixed viewpoint; and
rendering the 3D model according to the to-be-rendered visible patches of the 3D model under each target fixed viewpoint;
wherein the plurality of calibrated fixed viewpoints are obtained in advance by:
determining a spherical surface for the 3D model, the spherical surface being centered on the 3D model with a preset value as its radius; and
uniformly selecting a plurality of points on the spherical surface and determining the uniformly selected points as the calibrated fixed viewpoints;
and wherein the determining of a plurality of target fixed viewpoints from the plurality of calibrated fixed viewpoints according to the viewpoint corresponding to the current pose to be rendered comprises:
determining, from the plurality of calibrated fixed viewpoints, a plurality of fixed viewpoints closest to the viewpoint corresponding to the current pose to be rendered; and
determining the closest plurality of fixed viewpoints as the plurality of target fixed viewpoints.
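Read end to end, the independent method claim chains the pieces above: select the calibrated viewpoints whose directions from the model center are closest to the current viewing direction, union their pre-computed visible-patch sets, and rasterize only those faces. The self-contained sketch below is a hedged illustration of that pipeline; `draw_faces`, `k`, and all other names are assumptions, not the patent's API.

```python
# Hedged end-to-end sketch of claim 13 under the assumptions stated above.
import numpy as np

def render_for_pose(current_viewpoint: np.ndarray,
                    model_center: np.ndarray,
                    fixed_viewpoints: np.ndarray,          # (N, 3) calibrated
                    visible_by_viewpoint: list[set[int]],  # N pre-computed sets
                    draw_faces,                            # CPU rasterizer callback
                    k: int = 3) -> None:
    # Angle between the current viewing direction and each calibrated one.
    first = current_viewpoint - model_center
    first /= np.linalg.norm(first)
    second = fixed_viewpoints - model_center
    second /= np.linalg.norm(second, axis=1, keepdims=True)
    angles = np.arccos(np.clip(second @ first, -1.0, 1.0))
    targets = np.argsort(angles)[:k]
    # Union of the visible patches under the chosen target viewpoints.
    faces: set[int] = set()
    for t in targets:
        faces |= visible_by_viewpoint[t]
    draw_faces(sorted(faces))  # only unoccluded candidate faces are rasterized

# Toy usage: 2 calibrated viewpoints on the x/y axes, camera near +x.
vps = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
render_for_pose(np.array([1.9, 0.3, 0.0]), np.zeros(3),
                vps, [{0, 1}, {1, 2}], print, k=1)  # prints [0, 1]
```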
CN202010118219.7A 2020-02-25 2020-02-25 3D model rendering method, device, equipment and storage medium Active CN111275803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010118219.7A CN111275803B (en) 2020-02-25 2020-02-25 3D model rendering method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111275803A CN111275803A (en) 2020-06-12
CN111275803B true CN111275803B (en) 2023-06-02

Family

ID=71000381

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010118219.7A Active CN111275803B (en) 2020-02-25 2020-02-25 3D model rendering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111275803B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114255315A (en) * 2020-09-25 2022-03-29 Huawei Cloud Computing Technologies Co., Ltd. Rendering method, device and equipment
CN112489203A (en) * 2020-12-08 2021-03-12 NetEase (Hangzhou) Network Co., Ltd. Model processing method, model processing apparatus, electronic device, and storage medium
CN112562065A (en) * 2020-12-17 2021-03-26 Shenzhen Dafu Network Technology Co., Ltd. Rendering method, system and device of virtual object in virtual world

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7126603B2 (en) * 2003-05-30 2006-10-24 Lucent Technologies Inc. Method and system for creating interactive walkthroughs of real-world environment from set of densely captured images
US7456779B2 (en) * 2006-08-31 2008-11-25 Sierra Nevada Corporation System and method for 3D radar image rendering
TWI530157B (en) * 2013-06-18 2016-04-11 財團法人資訊工業策進會 Method and system for displaying multi-view images and non-transitory computer readable storage medium thereof
JP6040193B2 (en) * 2014-03-28 2016-12-07 富士フイルム株式会社 Three-dimensional direction setting device, method and program
JP2016162392A (en) * 2015-03-05 2016-09-05 セイコーエプソン株式会社 Three-dimensional image processing apparatus and three-dimensional image processing system
US10460512B2 (en) * 2017-11-07 2019-10-29 Microsoft Technology Licensing, Llc 3D skeletonization using truncated epipolar lines

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002032744A (en) * 2000-07-14 2002-01-31 Komatsu Ltd Device and method for three-dimensional modeling and three-dimensional image generation
JP2005063041A (en) * 2003-08-08 2005-03-10 Olympus Corp Three-dimensional modeling apparatus, method, and program
CN1885155A (en) * 2005-06-20 2006-12-27 钟明 Digital ball-screen cinema making method
JP2007200307A (en) * 2005-12-27 2007-08-09 Namco Bandai Games Inc Image generating device, program and information storage medium
CN101529924A (en) * 2006-10-02 2009-09-09 株式会社东芝 Method, apparatus, and computer program product for generating stereoscopic image
CN101458824A (en) * 2009-01-08 2009-06-17 浙江大学 Hologram irradiation rendering method based on web
CN103093491A (en) * 2013-01-18 2013-05-08 浙江大学 Three-dimensional model high sense of reality virtuality and reality combination rendering method based on multi-view video
US9619105B1 (en) * 2014-01-30 2017-04-11 Aquifi, Inc. Systems and methods for gesture based interaction with viewpoint dependent user interfaces
CN104376552A (en) * 2014-09-19 2015-02-25 四川大学 Virtual-real registering algorithm of 3D model and two-dimensional image
EP3065109A2 (en) * 2015-03-06 2016-09-07 Sony Interactive Entertainment Inc. System, device and method of 3d printing
WO2017113731A1 (en) * 2015-12-28 2017-07-06 乐视控股(北京)有限公司 360-degree panoramic displaying method and displaying module, and mobile terminal
CN107481312A (en) * 2016-06-08 2017-12-15 腾讯科技(深圳)有限公司 A kind of image rendering and device based on volume drawing
CN108961371A (en) * 2017-05-19 2018-12-07 传线网络科技(上海)有限公司 Panorama starts page and APP display methods, processing unit and mobile terminal
CN107333121A (en) * 2017-06-27 2017-11-07 山东大学 The immersion solid of moving view point renders optical projection system and its method on curve screens
CN108154548A (en) * 2017-12-06 2018-06-12 北京像素软件科技股份有限公司 Image rendering method and device
CN108920233A (en) * 2018-06-22 2018-11-30 百度在线网络技术(北京)有限公司 Panorama method for page jump, equipment and storage medium
CN109242943A (en) * 2018-08-21 2019-01-18 腾讯科技(深圳)有限公司 A kind of image rendering method, device and image processing equipment, storage medium
CN110163943A (en) * 2018-11-21 2019-08-23 深圳市腾讯信息技术有限公司 The rendering method and device of image, storage medium, electronic device

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
"Computing and Rendering Point Set Surfaces"; Marc Alexa et al.; IEEE Transactions on Visualization and Computer Graphics; 2003-01-31; 3-15 *
"Research on Real-Time Rendering Technology for R Elbow Joint Rehabilitation Assessment and Training"; Cao Qinghao; China Masters' Theses Full-Text Database, Information Science and Technology; 2019-01-15; I138-4875 *
"Surface and Volume Rendering in Three-Dimensional Imaging: A Comparison"; Jayaram K. Udupa et al.; Journal of Digital Imaging; 1991-01-31; 159-168 *
"A Terrain Rendering Algorithm Based on Ray Casting Implemented Entirely on the GPU"; Liu Xiaocong et al.; Computer Simulation; 2010-02-15 (No. 02); 236-240 *
"A 3D Model Retrieval Method Based on Feature Lines"; Liu Zhi et al.; Journal of Computer-Aided Design & Computer Graphics; 2016-09-15 (No. 09); 115-123 *
"An Optimization Method for Displaying Massive 3D Situations in an Autonomous and Controllable Environment"; Zhan Weiwei et al.; Command Information System and Technology; 2019-05-22 (No. 02); 84-88 *

Also Published As

Publication number Publication date
CN111275803A (en) 2020-06-12

Similar Documents

Publication Publication Date Title
CN111275803B (en) 3D model rendering method, device, equipment and storage medium
JP2021061041A (en) Foveated geometry tessellation
CN110989878B (en) Animation display method and device in applet, electronic equipment and storage medium
CN111860167B (en) Face fusion model acquisition method, face fusion model acquisition device and storage medium
CN111722245B (en) Positioning method, positioning device and electronic equipment
CN112270669B (en) Human body 3D key point detection method, model training method and related devices
US11074437B2 (en) Method, apparatus, electronic device and storage medium for expression driving
CN109725956B (en) Scene rendering method and related device
US11120617B2 (en) Method and apparatus for switching panoramic scene
EP4231242A1 (en) Graphics rendering method and related device thereof
US10675538B2 (en) Program, electronic device, system, and method for determining resource allocation for executing rendering while predicting player's intent
CN115482325B (en) Picture rendering method, device, system, equipment and medium
CN110631603B (en) Vehicle navigation method and device
CN111291218B (en) Video fusion method, device, electronic equipment and readable storage medium
CN113870399B (en) Expression driving method and device, electronic equipment and storage medium
KR102432561B1 (en) Edge-based three-dimensional tracking and registration method and apparatus for augmented reality, and electronic device
CN111949816B (en) Positioning processing method, device, electronic equipment and storage medium
CN108369726B (en) Method for changing graphic processing resolution according to scene and portable electronic device
CN113129456A (en) Vehicle three-dimensional model deformation method and device and electronic equipment
CN116740242A (en) Motion vector optimization for multi-refractive and reflective interfaces
CN113486415B (en) Model perspective method, intelligent terminal and storage device
CN111898489B (en) Method and device for marking palm pose, electronic equipment and storage medium
CN114677469A (en) Method and device for rendering target image, electronic equipment and storage medium
CN113362438A (en) Panorama rendering method, device, electronic apparatus, medium, and program
CN113129457B (en) Texture generation method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant