CN116797708A - Virtual object rendering method, electronic device and storage medium

Virtual object rendering method, electronic device and storage medium

Info

Publication number
CN116797708A
Authority
CN
China
Prior art keywords
rendering
vertex
virtual object
virtual
neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310507011.8A
Other languages
Chinese (zh)
Inventor
冉清
李凌志
申震
王光远
申丽
薄列峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Damo Institute Hangzhou Technology Co Ltd
Original Assignee
Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Damo Institute Hangzhou Technology Co Ltd filed Critical Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority to CN202310507011.8A priority Critical patent/CN116797708A/en
Publication of CN116797708A publication Critical patent/CN116797708A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Abstract

The application discloses a virtual object rendering method, an electronic device and a storage medium. The method comprises the following steps: acquiring a captured image obtained by photographing a physical object in a real environment; generating a surface mesh of a virtual object based on the captured image, wherein the virtual object is the associated object to which the physical object is mapped in the virtual environment; modeling reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model; and rendering the virtual object by using the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result. The application relates to the fields of augmented reality, three-dimensional reconstruction and the like, and solves the technical problem in the related art of the poor rendering effect obtained when rendering a virtual object.

Description

Virtual object rendering method, electronic device and storage medium
Technical Field
The present application relates to the fields of augmented reality, three-dimensional reconstruction, and the like, and in particular, to a virtual object rendering method, an electronic device, and a storage medium.
Background
At present, when the rendering process is simulated by a neural network, the rendering components are not separated; as a result, when a physical object with varied materials and uneven illumination is processed, the rendering quality in reflective regions is poor, and the effect of rendering the virtual object corresponding to the physical object suffers.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the present application provide a virtual object rendering method, an electronic device and a storage medium, so as to at least solve the technical problem in the related art of the poor rendering effect obtained when rendering a virtual object.
According to an aspect of an embodiment of the present application, there is provided a virtual object rendering method including: acquiring a captured image obtained by photographing a physical object in a real environment; generating a surface mesh of a virtual object based on the captured image, wherein the virtual object is the associated object to which the physical object is mapped in the virtual environment; modeling reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model; and rendering the virtual object by using the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result.
According to an aspect of the embodiments of the present application, there is also provided a virtual object rendering method including: in response to an input instruction acting on an operation interface, displaying a captured image on the operation interface, wherein the captured image is obtained by photographing a physical object in a real environment; and in response to a rendering instruction acting on the operation interface, displaying a target rendering result on the operation interface, wherein the target rendering result is obtained by rendering a virtual object with a neural rendering model based on the vertex features of a surface mesh of the virtual object, the virtual object is the associated object to which the physical object is mapped in the virtual environment, the neural rendering model is obtained by modeling reflected light of the surface of the physical object based on the surface mesh, and the surface mesh is generated based on the captured image.
According to an aspect of the embodiments of the present application, there is also provided a virtual object rendering method including: displaying a captured image on a presentation screen of a virtual reality (VR) device or an augmented reality (AR) device, wherein the captured image is obtained by photographing a physical object in a real environment; generating a surface mesh of a virtual object based on the captured image, wherein the virtual object is the associated object to which the physical object is mapped in the virtual environment; modeling reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model; rendering the virtual object by using the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result; and driving the VR device or the AR device to display the target rendering result.
According to an aspect of the embodiments of the present application, there is also provided a virtual object rendering method including: acquiring a captured image by calling a first interface, wherein the first interface comprises a first parameter whose parameter value is the captured image, and the captured image is obtained by photographing a physical object in a real environment; generating a surface mesh of a virtual object based on the captured image, wherein the virtual object is the associated object to which the physical object is mapped in the virtual environment; modeling reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model; rendering the virtual object by using the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result; and outputting the target rendering result by calling a second interface, wherein the second interface comprises a second parameter whose parameter value is the target rendering result.
In the embodiments of the present application, a captured image obtained by photographing a physical object in a real environment is acquired; a surface mesh of a virtual object is generated based on the captured image, the virtual object being the associated object to which the physical object is mapped in the virtual environment; the reflected light of the surface of the physical object is modeled based on the surface mesh to obtain a neural rendering model; and the virtual object is rendered with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result, improving the rendering effect obtained when rendering the virtual object. Notably, the reflected light of the physical object's surface can be modeled in combination with the surface mesh of the virtual object, yielding a neural rendering model of the reflected-light component. This effectively improves the realism of reflected-light rendering, improves the rendering effect in the subsequent rendering process, and thereby solves the technical problem in the related art of the poor rendering effect obtained when rendering a virtual object.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application, as claimed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a schematic view of a hardware environment of a virtual reality device of a virtual object rendering method according to an embodiment of the application;
FIG. 2 is a block diagram of a computing environment of a virtual object rendering method according to an embodiment of the application;
FIG. 3 is a flowchart of a virtual object rendering method according to embodiment 1 of the present application;
FIG. 4 is a schematic diagram of a reflection encoding network according to an embodiment of the application;
FIG. 5 is a rendering structure diagram of a virtual object according to an embodiment of the application;
FIG. 6a is a schematic diagram of a neural rendering model, according to an embodiment of the present application;
FIG. 6b is a schematic diagram of a feature learning network according to an embodiment of the present application;
FIG. 6c is a schematic diagram of an integrated directional encoding network according to an embodiment of the present application;
FIG. 6d is a schematic diagram of a roughness estimation network according to an embodiment of the present application;
FIG. 7 is a flowchart of a virtual object rendering method according to embodiment 2 of the present application;
FIG. 8 is a flowchart of a virtual object rendering method according to embodiment 3 of the present application;
FIG. 9 is a flowchart of a virtual object rendering method according to embodiment 4 of the present application;
fig. 10 is a schematic diagram of a virtual object rendering apparatus according to embodiment 5 of the present application;
fig. 11 is a schematic diagram of a virtual object rendering apparatus according to embodiment 6 of the present application;
fig. 12 is a schematic diagram of a virtual object rendering apparatus according to embodiment 7 of the present application;
fig. 13 is a schematic diagram of a virtual object rendering apparatus according to embodiment 8 of the present application;
fig. 14 is a block diagram of a computer terminal according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the terms and terminology appearing in the description of the embodiments of the application are explained as follows:
Reflection encoding, which converts the reflection direction of light at an object surface into high-dimensional features;
Rasterization, the process of converting a geometric mesh into a two-dimensional image;
Geometric surface mesh, a triangular-patch representation of a geometric surface;
Inverse rendering, the process of inferring information such as lighting, materials and geometry from known images;
Reflected light, the light produced when part of the incident light is reflected after an object surface is illuminated;
Neural rendering, a graphics rendering method based on deep learning and artificial intelligence technology that, by training a neural network, can automatically learn attributes of a 3D scene such as geometry, illumination and materials from image data and use this information to render the 3D scene into a 2D image.
At present, a physical object can be modeled through two schemes. In the first scheme, inverse rendering methods achieve various rendering effects by estimating each component of a graphics rendering model; their drawbacks are that a strictly controlled capture environment is needed to acquire images that meet the requirements before accurate modeling is possible, that training the rendering model often takes a long time, and that they are ill-suited to producing three-dimensional object assets at scale. In the second scheme, Ref-NeRF models the reflected-light component by introducing reflection encoding into volume rendering; its drawbacks are that the normal vectors fed to the reflection encoding, estimated from the volume-rendering density, are inconsistent with the geometric definition of the normal vector and easily degrade the training result, and that the long volume-rendering training time is unfavorable for producing three-dimensional object assets at scale.
Neural rendering of reconstructed real objects in real time is an important current research direction, aimed mainly at improving the realism of rendered images. Existing neural volume rendering methods simulate the volume rendering process without separating rendering components, so when processing real captured object data with varied materials and uneven illumination, the rendering quality in specular or highlight regions is often low.
In the present application, the object surface geometry, i.e. the surface mesh, is introduced into the neural rendering method. Ray intersection points can be located quickly through the rasterization of the renderer, which greatly improves rendering efficiency and achieves the goal of displaying the neural rendering effect in real time.
Example 1
According to an embodiment of the present application, there is provided a virtual object rendering method, it being noted that the steps shown in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that herein.
Fig. 1 is a schematic diagram of a hardware environment of a virtual reality device for a virtual object rendering method according to an embodiment of the application. As shown in fig. 1, the virtual reality device 104 is connected to the terminal 106, and the terminal 106 is connected to the server 102 via a network. The virtual reality device 104 includes but is not limited to: a virtual reality helmet, virtual reality glasses, a virtual reality all-in-one machine, etc. The terminal 106 is not limited to a PC, a mobile phone, a tablet computer, etc. The server 102 may be a server corresponding to a media file operator, and the network includes, but is not limited to: a wide area network, a metropolitan area network, or a local area network.
Optionally, the virtual reality device 104 of this embodiment includes: a memory, a processor and a transmission apparatus. The memory is used to store an application program that can be used to perform: acquiring a captured image obtained by photographing a physical object in a real environment; generating a surface mesh of a virtual object based on the captured image, wherein the virtual object is the associated object to which the physical object is mapped in the virtual environment; modeling reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model; and rendering the virtual object by using the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result, thereby solving the technical problem in the related art of the poor rendering effect obtained when rendering a virtual object.
The terminal of this embodiment may be configured to display the target rendering result on a presentation screen of a virtual reality (VR) device or an augmented reality (AR) device and send the target rendering result to the virtual reality device 104, and the virtual reality device 104 displays it at a target delivery location after receiving it.
Optionally, the HMD (head-mounted display) and the eye-tracking module of the virtual reality device 104 of this embodiment have the same functions as in the above embodiment: the screen in the HMD is used to display real-time pictures, and the eye-tracking module in the HMD is used to acquire the real-time motion track of the user's eyeballs. The terminal of this embodiment obtains the position and motion information of the user in the real three-dimensional space through the tracking system, and calculates the three-dimensional coordinates of the user's head in the virtual three-dimensional space as well as the user's field-of-view orientation in the virtual three-dimensional space.
The hardware architecture block diagram shown in fig. 1 may be used not only as an exemplary block diagram for an AR/VR device (or mobile device) as described above, but also as an exemplary block diagram for a server as described above, and in an alternative embodiment, fig. 2 shows in block diagram form one embodiment of a computing node in a computing environment 201 using an AR/VR device (or mobile device) as described above in fig. 1. Fig. 2 is a block diagram of a computing environment of a virtual object rendering method according to an embodiment of the present application, and as shown in fig. 2, the computing environment 201 includes a plurality of computing nodes (e.g., servers) running on a distributed network (shown as 210-1, 210-2, …). Different computing nodes contain local processing and memory resources and end user 202 may run applications or store data remotely in computing environment 201. The application may be provided as a plurality of services 220-1, 220-2, 220-3, and 220-4 in computing environment 201, representing services "A", "D", "E", and "H", respectively.
End user 202 may provide and access services through a web browser or other software application on a client. In some embodiments, the provisioning and/or requests of end user 202 may be provided to an ingress gateway 230. Ingress gateway 230 may include a corresponding agent to handle provisioning and/or requests for services (one or more services provided in computing environment 201).
Services are provided or deployed in accordance with various virtualization techniques supported by the computing environment 201. In some embodiments, services may be provided according to virtual machine (VM) based virtualization, container-based virtualization, and/or the like. Virtual-machine-based virtualization may emulate a real computer by initializing a virtual machine, executing programs and applications without directly touching any real hardware resources. Whereas a virtual machine virtualizes a whole machine, under container-based virtualization a container is started to virtualize at the level of the operating system (OS), so that multiple workloads can run on a single operating-system instance.
In one embodiment based on container virtualization, several containers of a service may be assembled into one Pod (e.g., kubernetes Pod). For example, as shown in FIG. 2, the service 220-2 may be equipped with one or more Pods 240-1, 240-2, …,240-N (collectively referred to as Pods). The Pod may include an agent 245 and one or more containers 242-1, 242-2, …,242-M (collectively referred to as containers). One or more containers in the Pod handle requests related to one or more corresponding functions of the service, and the agent 245 generally controls network functions related to the service, such as routing, load balancing, etc. Other services may also be equipped with similar Pod.
In operation, executing a user request from end user 202 may require invoking one or more services in computing environment 201, and executing one or more functions of one service may require invoking one or more functions of another service. As shown in FIG. 2, service "A"220-1 receives a user request of end user 202 from ingress gateway 230, service "A"220-1 may invoke service "D"220-2, and service "D"220-2 may request service "E"220-3 to perform one or more functions.
The computing environment may be a cloud computing environment, and the allocation of resources is managed by a cloud service provider, allowing the development of functions without considering the implementation, adjustment or expansion of the server. The computing environment allows developers to execute code that responds to events without building or maintaining a complex infrastructure. Instead of expanding a single hardware device to handle the potential load, the service may be partitioned to a set of functions that can be automatically scaled independently.
In the above-described operating environment, the present application provides a virtual object rendering method as shown in fig. 3. It should be noted that, the method for rendering the virtual object of this embodiment may be performed by the mobile terminal of the embodiment shown in fig. 1. Fig. 3 is a flowchart of a virtual object rendering method according to embodiment 1 of the present application. As shown in fig. 3, the method may include the steps of:
Step S302, acquiring a captured image obtained by photographing a physical object in a real environment.
The physical object may be a person, an animal, an object, etc. in a real environment, and is not particularly limited herein, as long as the physical object exists in the real environment.
In an alternative embodiment, a capture device can photograph the physical object in the real environment to obtain a captured image; the capture device can also record the physical object to obtain video data, and frames of the physical object extracted from the video data serve as captured images; alternatively, the captured image can be acquired directly while the capture device is photographing the physical object.
Step S304, generating a surface mesh of the virtual object based on the captured image.
The virtual object is the associated object to which the physical object is mapped in the virtual environment.
The surface mesh may be an object surface geometry.
In an alternative embodiment, a hash-encoding-based signed distance function (SDF) prediction can be applied to the captured image to generate the surface mesh of the virtual object. An SDF assigns to each point in space its signed distance to the object surface, so the surface can be extracted as the zero level set of the predicted SDF volume. To ensure subsequent rendering quality, the SDF resolution and the vertex count of the surface mesh may be fixed; for example, the SDF volume resolution may be set to 512 and the surface mesh may use one million vertices (these values are merely exemplary and not limiting).
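For illustration, the surface extraction step can be sketched as follows: the zero level set of the predicted SDF volume is converted into a triangle mesh. The patent does not name the extraction algorithm, so the use of marching cubes via scikit-image, and all names below, are assumptions.

```python
import numpy as np
from skimage import measure

def extract_surface_mesh(sdf_volume: np.ndarray, voxel_size: float = 1.0):
    """Extract a triangle mesh from the zero level set of an SDF volume.

    sdf_volume: e.g. a (512, 512, 512) array of predicted signed distances.
    voxel_size: scale factor mapping voxel indices to world units (assumption).
    """
    verts, faces, normals, _ = measure.marching_cubes(sdf_volume, level=0.0)
    return verts * voxel_size, faces, normals
```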
Step S306, modeling the reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model.
The reflected light may be the light reflected from the surface of the physical object when it is illuminated, such as highlights or specular light, and may also include light emitted by the object surface itself; these are merely examples.
In an alternative embodiment, the reflected light of the surface of the physical object may be modeled from the vertex features and vertex normal vectors of the surface mesh to obtain the neural rendering model. More precisely, vertex features and vertex normal vectors of relatively high accuracy can be obtained with the help of the surface mesh, ensuring the accuracy of the subsequent reflection encoding. The relatively important sampling points on the surface of the physical object can be determined using the rasterization capability of the renderer, i.e. the capability of the graphics card to discretize graphics data. By retaining the object's surface roughness and the reflected-light direction as factors and exploiting the capability of neural rendering, a lightweight specular/highlight renderer can be fitted, improving the three-dimensional modeling of reflective materials, so that a neural rendering model can be built efficiently and with high quality.
Step S308, rendering the virtual object by using the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result.
In an alternative embodiment, the reflected light on the surface of the virtual object can be rendered with the neural rendering model based on the vertex features of the surface mesh. This effectively improves the realism of the rendered reflected light, yields better rendering quality for objects with glossy materials, and makes the experience more immersive for the user.
Through the above steps, a captured image obtained by photographing a physical object in a real environment is first acquired, and a surface mesh of a virtual object is generated based on the captured image, where the virtual object is the associated object to which the physical object is mapped in the virtual environment; the reflected light of the surface of the physical object is modeled based on the surface mesh to obtain a neural rendering model; and the virtual object is rendered with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result, improving the rendering effect of the virtual object. Notably, the reflected light of the physical object's surface can be modeled in combination with the surface mesh of the virtual object, yielding a neural rendering model of the reflected-light component. This effectively improves the realism of reflected-light rendering, improves the rendering effect in the subsequent rendering process, and thereby solves the technical problem in the related art of the poor rendering effect obtained when rendering a virtual object.
In the above embodiment of the present application, modeling the reflected light of the surface of the physical object based on the surface mesh to obtain the neural rendering model includes: obtaining vertex features and vertex normal vectors of the surface mesh, wherein the vertex features represent the features of the vertices of the surface mesh and the vertex normal vectors represent the normal vectors at those vertices; performing feature learning on the vertex features to obtain first vertex rendering features; performing reflection encoding on the vertex features and vertex normal vectors by using a reflection encoding network to obtain reflection features; and modeling the reflected light based on at least the first vertex rendering features and the reflection features to obtain the neural rendering model.
The vertex features of the surface mesh may be features of a preset dimension, where the preset dimension may be, for example, 8.
In an alternative embodiment, feature learning can be performed on the vertex features by a feature learning network to obtain the first vertex rendering features, and reflection encoding can be performed on the vertex features and vertex normal vectors by a reflection encoding network to obtain the reflection features; optionally, a 20-dimensional feature vector can be used as the reflection feature. The reflected light can then be modeled from the first vertex rendering features, the reflection features and a viewpoint feature vector encoded from the viewpoint position, yielding the neural rendering model.
The above neural rendering model includes three multilayer perceptron (MLP) layers and activation functions.
In the above embodiment of the present application, the reflection encoding network includes a roughness estimation module and an encoding module, and performing reflection encoding on the vertex features and vertex normal vectors by using the reflection encoding network to obtain the reflection features includes: performing roughness estimation on the vertex features by using the roughness estimation module to obtain a vertex roughness; determining a reflection direction of the surface mesh based at least on the vertex normal vector; and encoding the vertex roughness and the reflection direction by using the encoding module to obtain the reflection features.
FIG. 4 is a schematic diagram of a reflection encoding network according to an embodiment of the application. As shown in FIG. 4, the vertex feature can be input into the roughness estimation module for roughness estimation to obtain the vertex roughness ρ, and the reflection direction can be determined based on the vertex normal vector and the viewpoint position.
Furthermore, the vertex roughness and the reflection direction can be input into the encoding module together to obtain a high-frequency reflection feature, where the encoding module may be an integrated directional encoding network and the high-frequency reflection feature may be a 20-dimensional feature vector. It should be noted that this is only an example; the encoding module actually used and the dimension of the feature vector can be adjusted according to the actual situation.
In an alternative embodiment, the reflection direction of the surface mesh can be obtained directly by geometric computation from the vertex normal vector, and the reflected light can be modeled in a lightweight way in combination with the vertex roughness. This improves the handling of common reflective materials on the physical object while keeping the rendering tractable, and enhances the robustness of modeling the physical object.
In the above embodiment of the present application, determining the reflection direction of the surface mesh based at least on the vertex normal vector includes: acquiring the viewpoint position corresponding to the captured image; and determining the reflection direction based on the viewpoint position and the vertex normal vector.
The viewpoint position may be the position at which the capture device photographed the physical object, or, alternatively, the spatial location of the capture device or observer.
In an alternative embodiment, the reflection direction may be determined from the vertex normal vector combined with the viewpoint position, so as to obtain a reflection direction with higher accuracy.
In another alternative embodiment, the reflection directions at different sampling points on the object surface can be calculated by the following formula:

$$\omega_r = 2\left(\hat{n} \cdot \omega_o\right)\hat{n} - \omega_o$$

where $\omega_r$ represents the reflection direction, $\omega_o$ (viewdir) represents the unit direction from the surface point toward the viewpoint position, and $\hat{n}$ represents the vertex normal vector.
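A minimal NumPy sketch of this computation is shown below; the batched array shapes and the function name are illustrative assumptions.

```python
import numpy as np

def reflection_direction(viewdir: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Reflect the view direction about the vertex normal.

    viewdir: (N, 3) unit vectors from the surface points toward the viewpoint.
    normal:  (N, 3) vertex normals (normalized here for safety).
    """
    n = normal / np.linalg.norm(normal, axis=-1, keepdims=True)
    dot = np.sum(viewdir * n, axis=-1, keepdims=True)
    return 2.0 * dot * n - viewdir  # omega_r = 2 (n . omega_o) n - omega_o
```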
In the above embodiment of the present application, modeling the reflected light based on at least the first vertex rendering features and the reflection features to obtain the neural rendering model includes: acquiring the viewpoint position corresponding to the captured image; extracting features of the viewpoint position to obtain a viewpoint feature vector; and modeling the reflected light based on the first vertex rendering features, the reflection features and the viewpoint feature vector to obtain the neural rendering model.
The viewpoint feature vector may be based on spherical harmonics, a set of orthogonal basis functions on the sphere with particular properties, used to describe points and vectors in spherical space.
In an alternative embodiment, the viewpoint position corresponding to the captured image, that is, the position of the capture device that took the image, may be obtained; feature extraction may be performed on the viewpoint position using spherical harmonics to obtain a spherical-harmonics-based viewpoint feature vector; and the reflected light may be modeled from the first vertex rendering features, the reflection features and the viewpoint feature vector to obtain the neural rendering model.
In the above embodiment of the present application, modeling the reflected light based on the first vertex rendering features, the reflection features and the viewpoint feature vector to obtain the neural rendering model includes: inputting the first vertex rendering features, the reflection features and the viewpoint feature vector into an initial rendering model, and obtaining an initial rendering result output by the initial rendering model; and adjusting model parameters of the initial rendering model based on the similarity between the initial rendering result and the captured image to obtain the neural rendering model.
The initial rendering model may be a model that has not been trained or whose training has not finished.
In an alternative embodiment, the first vertex rendering features, the reflection features and the viewpoint feature vector may be input into the initial rendering model, which obtains an initial rendering result through a multilayer perceptron and activation functions. The model parameters of the initial rendering model are adjusted by comparing the similarity between the initial rendering result and the captured image: whenever the similarity is less than a preset threshold the parameters are adjusted, until the similarity is greater than or equal to the preset threshold, yielding the neural rendering model.
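As an illustration, one training step consistent with this description might look as follows in PyTorch; using a mean-squared photometric loss as the similarity measure, and all names, are assumptions rather than details fixed by the patent.

```python
import torch.nn.functional as F

def train_step(model, optimizer, vertex_feat, refl_feat, view_feat, target_pixels):
    """One optimization step: render, compare with the captured image, update."""
    optimizer.zero_grad()
    pred = model(vertex_feat, refl_feat, view_feat)  # initial rendering result (RGB)
    # The photometric difference to the captured image stands in for the
    # "similarity" described above; the exact measure is an assumption.
    loss = F.mse_loss(pred, target_pixels)
    loss.backward()
    optimizer.step()
    return loss.item()
```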
In the above embodiment of the present application, feature learning is performed on vertex features to obtain first vertex rendering features, including: and performing feature learning on the vertex features by using a feature learning network to obtain first vertex rendering features.
The feature learning network may be an artificial neural network model, and features related to modeling may be automatically extracted from vertex features and mapped to output results.
In an alternative embodiment, feature learning may be performed on the vertex features using a feature learning network to extract a representative first vertex rendering feature, thereby improving the efficiency of subsequently building a neural rendering model.
In the above embodiment of the present application, obtaining the vertex normal vector includes: and carrying out normal vector estimation on the vertex characteristics by using a normal vector estimation network to obtain vertex normal vectors.
The normal vector estimation network is a machine learning algorithm for inferring normal vector information from a three-dimensional scene or object. It predicts the local normal direction at each vertex with a deep convolutional neural network and then, usually via dense or sparse reconstruction, produces a globally consistent surface mesh and related information.
In an alternative embodiment, a normal vector estimation network can be introduced and supervised with a loss against the normal vectors obtained from geometric computation, constraining the estimation result so that the vertex normal vectors are correct, smooth and continuous. This avoids the poor local continuity in subsequent rendering results that noisy vertex normal vectors would cause.
In the above embodiment of the present application, rendering the virtual object with the neural rendering model based on the vertex features of the surface mesh to obtain the target rendering result includes: performing feature learning on the vertex features of the surface mesh to obtain second vertex rendering features; and rendering the virtual object based on the second vertex rendering features by using the neural rendering model to obtain the target rendering result.
In an alternative embodiment, feature learning can be performed on the surface mesh with the feature learning network to obtain the second vertex rendering features, and the surface of the virtual object can be rendered based on them with the neural rendering model to obtain the target rendering result.
In the above embodiment of the present application, the method further includes: reading the second vertex rendering features and the neural rendering model stored in the storage device; and rendering the virtual object based on the second vertex rendering features by using the neural rendering model to obtain the target rendering result.
The storage device may be a renderer.
In order to improve rendering efficiency, the second vertex rendering features and the neural rendering model can be read directly in the renderer; the renderer then applies the model's linear operations to the second vertex rendering features, converting the features into colors and thereby producing the target rendering result.
In the above embodiment of the present application, the method further includes: simplifying the surface mesh to obtain a simplified mesh; mapping the simplified mesh to a two-dimensional image space to obtain a two-dimensional mapping result; densely sampling the two-dimensional mapping result to obtain two-dimensional sampling points; mapping the two-dimensional sampling points to the virtual space to obtain three-dimensional sampling points; and obtaining the vertex features based on the features of the vertices in the surface mesh corresponding to the three-dimensional sampling points.
In an alternative embodiment, the high-resolution surface mesh may be simplified, giving a simplified mesh with roughly 10k geometric patches (10k is only an example; the patch count of the simplified mesh may be determined according to the actual situation and is not specifically limited). The simplification saves storage space. The simplified mesh may be mapped to a two-dimensional image space by a texture mapping method (for example, in Blender) to obtain a two-dimensional mapping result, and dense sampling may be performed on the two-dimensional mapping result to obtain two-dimensional sampling points; note that the number of two-dimensional sampling points is far greater than 10k. The two-dimensional sampling points may be mapped into the virtual space according to the mapping relation to obtain three-dimensional sampling points, the vertex features may then be determined from the features of the vertices in the surface mesh corresponding to the three-dimensional sampling points, and the vertex features of the surface mesh may be stored as feature maps.
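To make the mapping from dense 2-D samples back to 3-D surface points concrete, here is a naive NumPy sketch using barycentric interpolation over the simplified mesh's UV triangles; the per-face input layout and the brute-force loop over faces are our own illustrative assumptions.

```python
import numpy as np

def uv_samples_to_3d(uv_samples, face_uvs, face_xyz):
    """Map dense 2-D texture-space samples back to 3-D surface points.

    uv_samples: (S, 2) sample points in UV space.
    face_uvs:   (F, 3, 2) UV coordinates of each triangle's three corners.
    face_xyz:   (F, 3, 3) 3-D positions of the same corners.
    """
    points_3d = np.full((len(uv_samples), 3), np.nan)
    for f in range(len(face_uvs)):
        a, b, c = face_uvs[f]
        # 2x2 system giving the barycentric weights of corners b and c
        T = np.array([[b[0] - a[0], c[0] - a[0]],
                      [b[1] - a[1], c[1] - a[1]]])
        try:
            T_inv = np.linalg.inv(T)
        except np.linalg.LinAlgError:
            continue  # degenerate UV triangle, skip
        rel = uv_samples - a                 # (S, 2) offsets from corner a
        uv_w = rel @ T_inv.T                 # (S, 2) weights of corners b, c
        w_a = 1.0 - uv_w.sum(axis=1)         # weight of corner a
        inside = (uv_w >= 0.0).all(axis=1) & (w_a >= 0.0)
        bary = np.column_stack([w_a, uv_w])  # (S, 3) full barycentric weights
        points_3d[inside] = bary[inside] @ face_xyz[f]
    return points_3d
```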
Fig. 5 is a rendering structure diagram of a virtual object according to an embodiment of the present application. As shown in fig. 5, the pipeline includes three stages: the first stage generates the surface mesh of the virtual object, the second stage trains the neural rendering model with reflection encoding, and the third stage exports the three-dimensional model.
In the first stage, captured images obtained by the user photographing the physical object serve as the algorithm input, and the surface mesh of the virtual object is obtained through hash-encoding-based SDF prediction.
In the second stage, vertex features and vertex normal vectors of the surface mesh can be acquired. Feature learning can be performed on the vertex features with a feature learning network to obtain the first vertex rendering features, and reflection encoding can be performed on the vertex features, vertex normal vectors and viewpoint position with a reflection encoding network to obtain the reflection features. The reflected light can be modeled from the first vertex rendering features and the reflection features to obtain the neural rendering model, and the virtual object can be rendered with the neural rendering model to obtain the target rendering result. Optionally, in the reflection encoding process, the roughness estimation network may be used to estimate the vertex roughness from the vertex features, the reflection direction of the surface mesh may be determined from the vertex normal vector, and the integrated directional encoding network may be used to encode the vertex roughness and the reflection direction into the reflection features.
In the third stage, the network parameters of the second-stage neural rendering model can be exported in a format compatible with the renderer. To save storage space, the high-resolution surface mesh used during training can be simplified to obtain a simplified mesh with roughly 10k geometric patches (the patch count can be set according to actual requirements). The simplified mesh can be mapped to the two-dimensional image space by a texture mapping method, dense sampling can be performed in that space, and the different two-dimensional sampling points can be converted to three-dimensional points according to the mapping relation. The vertex rendering features can be obtained through the feature learning network and stored in the two-dimensional image space; note that the 8-dimensional features need to be stored in two 4-channel images (this is only an example and is not specifically limited).
Also in the third stage, since the neural rendering model of the second stage has few network parameters, the parameters can be stored as JSON. When the model is used for rendering inference, the parameters can be read directly into the renderer, where linear operations convert the features into colors.
Fig. 6a is a schematic diagram of a neural rendering model according to an embodiment of the present application. As shown in fig. 6a, the first vertex rendering features, the reflection features and the viewpoint feature vector are passed through a neural renderer simulated by three MLP layers and activation functions to obtain the rendered image of the object. The three MLP layers may be 44×32, 32×32 and 32×3 respectively, with two ReLU (Rectified Linear Unit) activations and one Sigmoid activation. The difference between the rendered image and the training image may be used to train the neural rendering model. The reflection feature may be the reflection-encoded high-frequency feature, and the viewpoint feature vector may be the spherical-harmonics-based viewpoint feature vector.
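A minimal PyTorch sketch of this three-layer head follows; reading the 44-D input as the concatenation of an 8-D vertex feature, a 20-D reflection feature and a 16-D spherical-harmonics view encoding is our interpretation of the figure, not something the patent states.

```python
import torch
import torch.nn as nn

class NeuralRenderingHead(nn.Module):
    """Three-layer MLP renderer sketched from Fig. 6a: 44 -> 32 -> 32 -> 3."""

    def __init__(self, in_dim: int = 44):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, vertex_feat, reflection_feat, view_feat):
        # Assumed split: 8-D vertex feature + 20-D reflection feature
        # + 16-D spherical-harmonics view encoding = 44-D input.
        x = torch.cat([vertex_feat, reflection_feat, view_feat], dim=-1)
        return self.mlp(x)
```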
Fig. 6b is a schematic diagram of a feature learning network according to an embodiment of the present application. As shown in fig. 6b, the feature learning network may include a hash-encoding layer, one MLP layer and a Sigmoid activation, where the MLP layer may be 32×8.
Fig. 6c is a schematic diagram of an integrated directional encoding network according to an embodiment of the present application. As shown in fig. 6c, the integrated directional encoding network may include a 17×32 MLP layer, a ReLU activation, a 32×30 MLP layer and a ReLU activation.
Fig. 6d is a schematic diagram of a roughness estimation network according to an embodiment of the present application. As shown in fig. 6d, the roughness estimation network may include an 8×8 MLP layer, a ReLU activation, an 8×1 MLP layer and a Sigmoid activation.
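For completeness, minimal PyTorch sketches of the two small sub-networks of Figs. 6c and 6d follow; the composition of the 17-D input to the directional encoder is not spelled out in the patent and is marked as an assumption in the comments.

```python
import torch.nn as nn

class RoughnessNet(nn.Module):
    """Roughness estimation network from Fig. 6d: 8 -> 8 -> 1."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(8, 8), nn.ReLU(),
            nn.Linear(8, 1), nn.Sigmoid(),  # vertex roughness rho in (0, 1)
        )

    def forward(self, vertex_feat):
        return self.net(vertex_feat)

class IntegratedDirectionEncoder(nn.Module):
    """Integrated directional encoding network from Fig. 6c: 17 -> 32 -> 30.

    Assumption: the 17-D input concatenates an encoding of the reflection
    direction with the scalar roughness; the patent does not specify this.
    """

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(17, 32), nn.ReLU(),
            nn.Linear(32, 30), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)
```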
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide corresponding operation entries for the user to select authorization or rejection.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus a necessary general hardware platform, but that it may also be implemented by means of hardware. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method of the various embodiments of the present application.
Example 2
According to an embodiment of the present application, there is also provided a method of rendering a virtual object, it being noted that the steps shown in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that herein.
Fig. 7 is a flowchart of a virtual object rendering method according to embodiment 2 of the present application, as shown in fig. 7, the method including the steps of:
In step S702, a captured image is displayed on the operation interface in response to an input instruction acting on the operation interface.
The captured image is obtained by photographing a physical object in a real environment.
The operation interface may be a display interface capable of displaying the captured image, and the captured image may be displayed by performing a related touch operation on the display interface.
Step S704, in response to the rendering instruction acting on the operation interface, displaying the target rendering result on the operation interface.
The target rendering result is obtained by rendering the virtual object with a neural rendering model based on the vertex features of a surface mesh of the virtual object, the virtual object being the associated object to which the physical object is mapped in the virtual environment; the neural rendering model is obtained by modeling the reflected light of the surface of the physical object based on the surface mesh, and the surface mesh is generated based on the captured image.
The rendering instruction may be an instruction generated by touching the operation interface when the virtual object needs to be rendered, and the target rendering result may be displayed on the operation interface according to the rendering instruction.
In an alternative embodiment, the captured image may be entered into an input box on the operation interface, for example by dragging or uploading. After the captured image is dragged into the input box, subsequent operations may be performed on it in the background to obtain the target rendering result, which may be displayed in an output box of the operation interface. In this way the virtual object is rendered from the captured image through user interaction, and the target rendering result is presented on the operation interface interactively.
Through the above steps, a captured image is displayed on the operation interface in response to an input instruction acting on the operation interface, the captured image being obtained by photographing a physical object in a real environment; and a target rendering result is displayed on the operation interface in response to a rendering instruction acting on the operation interface, the target rendering result being obtained by rendering a virtual object with a neural rendering model based on the vertex features of a surface mesh of the virtual object, where the virtual object is the associated object to which the physical object is mapped in the virtual environment, the neural rendering model is obtained by modeling the reflected light of the surface of the physical object based on the surface mesh, and the surface mesh is generated based on the captured image. This improves the rendering effect obtained when rendering the virtual object. Notably, the reflected light of the physical object's surface can be modeled in combination with the surface mesh of the virtual object, yielding a neural rendering model of the reflected-light component, which effectively improves the realism of reflected-light rendering, improves the rendering effect in the subsequent rendering process, and thereby solves the technical problem in the related art of the poor rendering effect obtained when rendering a virtual object.
It should be noted that the preferred implementations, application scenarios and implementation processes of this embodiment are the same as those provided in Embodiment 1, but are not limited to those provided in Embodiment 1.
Example 3
There is also provided, in accordance with an embodiment of the present application, a method of rendering virtual objects in a virtual reality scene that may be applied to a virtual reality VR device, an augmented reality AR device, or the like, it being noted that the steps illustrated in the flowchart of the accompanying drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
Fig. 8 is a flowchart of a virtual object rendering method according to embodiment 3 of the present application, as shown in fig. 8, the method including the steps of:
step S802, a captured image is displayed on a presentation screen of a virtual reality VR device or an augmented reality AR device.
The captured image is obtained by photographing a physical object in a real environment.
Step S804, generating a surface mesh of the virtual object based on the captured image.
The virtual object is the associated object to which the physical object is mapped in the virtual environment.
Step S806, modeling the reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model.
Step S808, rendering the virtual object by using the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result.
Step S810, driving the VR device or the AR device to display the target rendering result.
Through the above steps, a captured image is displayed on the presentation screen of the virtual reality (VR) device or augmented reality (AR) device, the captured image being obtained by photographing a physical object in a real environment; a surface mesh of the virtual object is generated based on the captured image, the virtual object being the associated object to which the physical object is mapped in the virtual environment; the reflected light of the surface of the physical object is modeled based on the surface mesh to obtain a neural rendering model; the virtual object is rendered with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result; and the VR device or AR device is driven to display the target rendering result, improving the rendering effect obtained when rendering the virtual object. Notably, the reflected light of the physical object's surface can be modeled in combination with the surface mesh of the virtual object, yielding a neural rendering model of the reflected-light component, which effectively improves the realism of reflected-light rendering, improves the rendering effect in the subsequent rendering process, and thereby solves the technical problem in the related art of the poor rendering effect obtained when rendering a virtual object.
Optionally, in this embodiment, the virtual object rendering method described above may be applied in a hardware environment formed by a server and a virtual reality device. The material information is displayed on the presentation screen of the virtual reality (VR) device or augmented reality (AR) device, and the server may be a server corresponding to a media file operator. The network includes, but is not limited to: a wide area network, a metropolitan area network, or a local area network. The virtual reality device is not limited to: virtual reality helmets, virtual reality glasses, virtual reality all-in-one machines, and the like.
Optionally, the virtual reality device includes: a memory, a processor, and a transmission apparatus. The memory is used to store an application program, and the application program may be used to perform: displaying an image set on the presentation screen of a virtual reality (VR) device or an augmented reality (AR) device, where the image set is obtained by photographing a physical object in a real environment multiple times and includes at least a first image and a second image, the first image being obtained by photographing the physical object under ambient illumination, and the second image being obtained by photographing the physical object under point-light-source illumination; performing three-dimensional reconstruction on the physical object based on the first image to generate a three-dimensional mesh corresponding to a virtual object, where the virtual object is an associated object of the physical object mapped in the virtual scene; reconstructing the material of the surface of the physical object based on the three-dimensional mesh and the second image to obtain the material information to be applied to the surface of the virtual object; and driving the VR device or AR device to render the material information.
It should be noted that the virtual object rendering method applied to the VR or AR device in this embodiment may include the method of the embodiment shown in fig. 8, so as to drive the VR or AR device to display the material information.
Optionally, the processor of this embodiment may call the application program stored in the memory through the transmission apparatus to perform the above steps. The transmission apparatus may receive media files sent by the server over the network, and may also be used for data transmission between the processor and the memory.
Optionally, the virtual reality device of this embodiment is provided with a head-mounted display (HMD) with eye tracking. The screen in the HMD is used to display the presented video picture; the eye-tracking module in the HMD is used to acquire the real-time motion trajectory of the user's eyes; the tracking system is used to track the position information and motion information of the user in the real three-dimensional space; and the calculation processing unit is used to acquire the real-time position and motion information of the user from the tracking system, and to calculate the three-dimensional coordinates of the user's head in the virtual three-dimensional space, the user's field-of-view orientation in the virtual three-dimensional space, and the like.
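As an illustration of that last computation, the sketch below assembles a look-at view matrix from a tracked head position and field-of-view (forward) direction. It is a minimal sketch under stated assumptions: the application prescribes no particular math library or matrix convention, so NumPy and a right-handed convention are used here purely for concreteness.

    import numpy as np

    def view_matrix(head_pos, forward, up=np.array([0.0, 1.0, 0.0])):
        # Look-at view matrix from the tracked head position and the
        # user's field-of-view orientation (forward direction).
        f = forward / np.linalg.norm(forward)
        s = np.cross(f, up)
        s = s / np.linalg.norm(s)
        u = np.cross(s, f)
        m = np.eye(4)
        m[0, :3], m[1, :3], m[2, :3] = s, u, -f
        m[:3, 3] = m[:3, :3] @ (-head_pos)
        return m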
In this embodiment of the present application, the virtual reality device may be connected to a terminal, and the terminal is connected to the server through a network. The virtual reality device is not limited to: virtual reality helmets, virtual reality glasses, virtual reality all-in-one machines, and the like; the terminal is not limited to a PC, a mobile phone, a tablet computer, and the like; the server may be a server corresponding to a media file operator, and the network includes, but is not limited to: a wide area network, a metropolitan area network, or a local area network.
It should be noted that the preferred implementation of this example is the same as that of Example 1 in terms of application scenario and implementation process, but is not limited to the embodiment provided in Example 1.
Example 4
According to an embodiment of the present application, there is also provided a method of rendering a virtual object. It should be noted that the steps illustrated in the flowchart of the accompanying drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the one described herein.
Fig. 9 is a flowchart of a virtual object rendering method according to embodiment 4 of the present application, as shown in fig. 9, the method including the steps of:
Step S902, acquiring a captured image by calling a first interface.
The first interface includes a first parameter, the value of the first parameter is the captured image, and the captured image is obtained by photographing a physical object in a real environment.
The first interface in the above step may be an interface for data interaction between the cloud server and a client; the client may pass the captured image to the interface function as the first parameter of the interface function, so as to upload the captured image to the cloud server.
Step S904, generating a surface mesh of the virtual object based on the captured image.
Wherein the virtual object is an associated object of the physical object mapped in the virtual environment.
Step S906, modeling the reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model.
Step S908, based on the vertex characteristics of the surface mesh, rendering the virtual object by using the neural rendering model to obtain a target rendering result.
Step S910, outputting the target rendering result by calling the second interface.
The second interface includes a second parameter, and the value of the second parameter is the target rendering result.
The second interface in the above step may be an interface for data interaction between the cloud server and the client; the cloud server may pass the target rendering result to the interface function as the second parameter of the interface function, so as to deliver the target rendering result to the client.
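For illustration only, the sketch below shows how a client might exercise such a pair of interfaces over plain HTTP. The endpoint URL and field names are placeholders invented for this example; the application does not specify a concrete transport or interface signature.

    import requests

    # Hypothetical endpoint; the actual first/second interfaces are not
    # specified by this application.
    RENDER_URL = "https://example.com/api/render"

    def render_via_cloud(image_path):
        # First interface: upload the captured image as the first parameter.
        with open(image_path, "rb") as f:
            resp = requests.post(RENDER_URL, files={"captured_image": f})
        resp.raise_for_status()
        # Second interface: the response body carries the target rendering result.
        return resp.content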
Through the above steps, a captured image is acquired by calling a first interface, where the first interface includes a first parameter whose value is the captured image, and the captured image is obtained by photographing a physical object in a real environment; a surface mesh of a virtual object is generated based on the captured image, where the virtual object is an associated object of the physical object mapped in the virtual environment; the reflected light of the surface of the physical object is modeled based on the surface mesh to obtain a neural rendering model; the virtual object is rendered with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result; and the target rendering result is output by calling a second interface, where the second interface includes a second parameter whose value is the target rendering result. This improves the rendering effect of rendering the virtual object. It is easy to note that the reflected light of the surface of the physical object can be modeled in combination with the surface mesh of the virtual object, yielding a neural rendering model that accounts for the reflected-light component. This effectively improves the realism of reflected-light rendering, improves the rendering effect in subsequent rendering, and thus solves the technical problem in the related art of poor rendering effects when rendering virtual objects.
It should be noted that the preferred implementation of this example is the same as that of Example 1 in terms of application scenario and implementation process, but is not limited to the embodiment provided in Example 1.
Example 5
According to an embodiment of the present application, there is further provided a virtual object rendering apparatus for implementing the virtual object rendering method, and fig. 10 is a schematic diagram of a virtual object rendering apparatus according to embodiment 5 of the present application, as shown in fig. 10, the apparatus 1000 includes: an acquisition module 1002, a generation module 1004, a modeling module 1006, and a rendering module 1008.
The acquisition module is used to acquire a captured image obtained by photographing a physical object in a real environment; the generation module is used to generate a surface mesh of the virtual object based on the captured image, where the virtual object is an associated object of the physical object mapped in the virtual environment; the modeling module is used to model the reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model; and the rendering module is used to render the virtual object with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result.
Here, the acquiring module 1002, the generating module 1004, the modeling module 1006, and the rendering module 1008 correspond to steps S302 to S308 in Example 1. The four modules implement the same examples and application scenarios as the corresponding steps, but are not limited to what is disclosed in Example 1. It should be noted that the above modules or units may be hardware components, or software components stored in a memory (for example, the memory 104) and processed by one or more processors (for example, the processors 102a, 102b, ..., 102n); the above modules may also run as a part of the apparatus in the computer terminal 10 provided in Example 1.
In the above embodiment of the present application, the modeling module is further configured to: obtain vertex features and vertex normal vectors of the surface mesh, where the vertex features represent the features of the vertices of the surface mesh and the vertex normal vectors represent the normal vectors of those vertices; perform feature learning on the vertex features to obtain first vertex rendering features; perform reflection encoding on the vertex features and the vertex normal vectors with a reflection encoding network to obtain reflection features; and model the reflected light based on at least the first vertex rendering features and the reflection features to obtain the neural rendering model.
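To make the data flow concrete, a minimal PyTorch-style sketch of such a model follows. The module names (feature_net, reflect_net, color_net) and the layer sizes are illustrative assumptions, not an architecture prescribed by this application.

    import torch
    import torch.nn as nn

    class NeuralRenderingModel(nn.Module):
        def __init__(self, feat_dim=32, hidden=64):
            super().__init__()
            # Feature learning network: vertex features -> first vertex rendering features.
            self.feature_net = nn.Sequential(
                nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
            # Reflection encoding network: vertex features + normals -> reflection features.
            self.reflect_net = nn.Sequential(
                nn.Linear(feat_dim + 3, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
            # Decoder that models the reflected light as an RGB color.
            self.color_net = nn.Sequential(
                nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 3),
                nn.Sigmoid())

        def forward(self, vertex_feat, vertex_normal):
            first_feat = self.feature_net(vertex_feat)
            refl_feat = self.reflect_net(torch.cat([vertex_feat, vertex_normal], dim=-1))
            return self.color_net(torch.cat([first_feat, refl_feat], dim=-1))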
In the above embodiment of the present application, the modeling module is further configured to perform roughness estimation on the vertex characteristics by using the roughness estimation module to obtain vertex roughness, determine a reflection direction of the surface mesh at least based on a vertex normal vector, and encode the vertex roughness and the reflection direction by using the encoding module to obtain reflection characteristics.
In the above embodiment of the present application, the modeling module is further configured to obtain a viewpoint position corresponding to the captured image, and determine the reflection direction based on the viewpoint position and the vertex normal vector.
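The reflection direction here is the viewing direction mirrored about the vertex normal. A short sketch follows, with the tensor shapes and unit-vector conventions assumed for the example:

    import torch
    import torch.nn.functional as F

    def reflection_direction(viewpoint, vertex_pos, vertex_normal):
        # Direction from the surface point toward the viewpoint,
        # mirrored about the vertex normal: r = 2 (n . v) n - v.
        v = F.normalize(viewpoint - vertex_pos, dim=-1)
        n = F.normalize(vertex_normal, dim=-1)
        return 2.0 * (n * v).sum(dim=-1, keepdim=True) * n - v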
In the above embodiment of the present application, the modeling module is further configured to obtain a viewpoint position corresponding to the captured image, perform feature extraction on the viewpoint position to obtain a viewpoint feature vector, and model the reflected light based on the first vertex rendering features, the reflection features, and the viewpoint feature vector to obtain the neural rendering model.
In the above embodiment of the present application, the modeling module is further configured to input the first vertex rendering feature, the reflection feature, and the viewpoint feature vector to the initial rendering model, obtain an initial rendering result output by the initial rendering model, and adjust model parameters of the initial rendering model based on similarity between the initial rendering result and the captured image, to obtain the neural rendering model.
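This parameter adjustment is an ordinary reconstruction-style optimization. The sketch below reuses the NeuralRenderingModel sketch given earlier; the dummy tensors stand in for real per-vertex data, and mean squared error is used purely as an example similarity measure, since the application leaves the concrete metric open.

    import torch

    vertex_feat = torch.rand(1000, 32)      # stand-in for real vertex features
    vertex_normal = torch.rand(1000, 3)     # stand-in for real vertex normals
    captured_pixels = torch.rand(1000, 3)   # colors sampled from the captured image

    model = NeuralRenderingModel()          # the initial rendering model
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(2000):
        rendered = model(vertex_feat, vertex_normal)       # initial rendering result
        loss = ((rendered - captured_pixels) ** 2).mean()  # similarity to the captured image
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                                   # adjust the model parameters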
In the above embodiment of the present application, the modeling module is further configured to perform feature learning on the vertex feature by using a feature learning network, so as to obtain a first vertex rendering feature.
In the above embodiment of the present application, the modeling module is further configured to perform normal vector estimation on the vertex characteristics by using a normal vector estimation network to obtain a vertex normal vector.
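Such a normal vector estimation network can be as small as a single MLP whose 3-dimensional output is normalized to unit length; the sketch below is one possible form, with arbitrarily chosen sizes.

    import torch.nn as nn
    import torch.nn.functional as F

    class NormalEstimationNet(nn.Module):
        def __init__(self, feat_dim=32, hidden=64):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))

        def forward(self, vertex_feat):
            # Predict a direction from the vertex feature and normalize it
            # so the output is a unit vertex normal vector.
            return F.normalize(self.mlp(vertex_feat), dim=-1)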
In the above embodiment of the present application, the rendering module is further configured to perform feature learning on the vertex features of the surface mesh to obtain second vertex rendering features, and render the virtual object based on the second vertex rendering features by using the neural rendering model to obtain the target rendering result.
In the above embodiment of the present application, the device is further configured to read the second vertex rendering features and the neural rendering model stored in the storage device, and render the virtual object based on the second vertex rendering features with the neural rendering model to obtain the target rendering result.
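At inference time this amounts to loading the stored features and model and running one forward pass. A sketch, with placeholder file names and the forward signature of the model sketch given earlier:

    import torch

    feats = torch.load("second_vertex_features.pt")    # placeholder path
    normals = torch.load("vertex_normals.pt")          # placeholder path
    model = torch.load("neural_rendering_model.pt")    # trained neural rendering model
    model.eval()
    with torch.no_grad():
        target_rendering_result = model(feats, normals)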
In the above embodiment of the present application, the device is further configured to: simplify the surface mesh to obtain a simplified mesh; map the simplified mesh to a two-dimensional image space to obtain a two-dimensional mapping result; perform dense sampling on the two-dimensional mapping result to obtain two-dimensional sampling points; map the two-dimensional sampling points to the virtual space to obtain three-dimensional sampling points; and obtain the vertex features based on the features of the vertices corresponding to the three-dimensional sampling points in the surface mesh.
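Assuming the simplified mesh already carries per-vertex UV coordinates (the two-dimensional mapping result), the dense-sampling and lifting steps can be sketched as follows; nearest-vertex lookup stands in for barycentric interpolation to keep the example short.

    import numpy as np
    from scipy.spatial import cKDTree

    def sample_vertex_features(verts, uvs, feats, resolution=256):
        # Dense 2D sampling over the unit UV square.
        u, v = np.meshgrid(np.linspace(0.0, 1.0, resolution),
                           np.linspace(0.0, 1.0, resolution))
        samples_2d = np.stack([u.ravel(), v.ravel()], axis=1)
        # Map each 2D sample back to the surface via its nearest vertex in UV space.
        tree = cKDTree(uvs)
        _, idx = tree.query(samples_2d)
        samples_3d = verts[idx]        # three-dimensional sampling points in the virtual space
        return samples_3d, feats[idx]  # vertex features gathered at the sampling points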
It should be noted that the preferred implementation of this example is the same as that of Example 1 in terms of application scenario and implementation process, but is not limited to the embodiment provided in Example 1.
Example 6
According to an embodiment of the present application, there is further provided a virtual object rendering apparatus for implementing the virtual object rendering method, and fig. 11 is a schematic diagram of a virtual object rendering apparatus according to embodiment 6 of the present application, as shown in fig. 11, and the apparatus 1100 includes: a first display module 1102, a second display module 1104.
The first display module is used to display a captured image on the operation interface in response to an input instruction acting on the operation interface, where the captured image is obtained by photographing a physical object in a real environment; the second display module is used to display a target rendering result on the operation interface in response to a rendering instruction acting on the operation interface, where the target rendering result is obtained by rendering a virtual object with a neural rendering model based on the vertex features of a surface mesh of the virtual object, the virtual object is an associated object of the physical object mapped in the virtual environment, the neural rendering model is obtained by modeling the reflected light of the surface of the physical object based on the surface mesh, and the surface mesh is generated based on the captured image.
Here, the first display module 1102 and the second display module 1104 correspond to steps S702 to S704 in Example 2. The two modules implement the same examples and application scenarios as the corresponding steps, but are not limited to what is disclosed in Example 1. It should be noted that the above modules or units may be hardware components, or software components stored in a memory (for example, the memory 104) and processed by one or more processors (for example, the processors 102a, 102b, ..., 102n); the above modules may also run as a part of the apparatus in the computer terminal 10 provided in Example 1.
It should be noted that the preferred implementation of this example is the same as that of Example 1 in terms of application scenario and implementation process, but is not limited to the embodiment provided in Example 1.
Example 7
According to an embodiment of the present application, there is further provided a virtual object rendering apparatus for implementing the virtual object rendering method, and fig. 12 is a schematic diagram of a virtual object rendering apparatus according to embodiment 7 of the present application, as shown in fig. 12, the apparatus 1200 includes: a presentation module 1202, a generation module 1204, a modeling module 1206, a rendering module 1208, a driving module 1210.
The presentation module is used to display a captured image on the presentation screen of the virtual reality (VR) device or augmented reality (AR) device, where the captured image is obtained by photographing a physical object in a real environment; the generation module is used to generate a surface mesh of the virtual object based on the captured image, where the virtual object is an associated object of the physical object mapped in the virtual environment; the modeling module is used to model the reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model; the rendering module is used to render the virtual object with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result; and the driving module is used to drive the VR device or AR device to display the target rendering result.
It should be noted that the presentation module 1202, the generation module 1204, the modeling module 1206, the rendering module 1208, and the driving module 1210 correspond to steps S802 to S810 in Example 3. The five modules implement the same examples and application scenarios as the corresponding steps, but are not limited to what is disclosed in Example 1. It should be noted that the above modules or units may be hardware components, or software components stored in a memory (for example, the memory 104) and processed by one or more processors (for example, the processors 102a, 102b, ..., 102n); the above modules may also run as a part of the apparatus in the computer terminal 10 provided in Example 1.
It should be noted that the preferred implementation of this example is the same as that of Example 1 in terms of application scenario and implementation process, but is not limited to the embodiment provided in Example 1.
Example 8
According to an embodiment of the present application, there is further provided a virtual object rendering apparatus for implementing the virtual object rendering method, and fig. 13 is a schematic diagram of a virtual object rendering apparatus according to embodiment 8 of the present application, as shown in fig. 13, the apparatus 1300 includes: an acquisition module 1302, a generation module 1304, a modeling module 1306, a rendering module 1308, an output module 1310.
The acquisition module is used to acquire a captured image by calling a first interface, where the first interface includes a first parameter whose value is the captured image, and the captured image is obtained by photographing a physical object in a real environment; the generation module is used to generate a surface mesh of the virtual object based on the captured image, where the virtual object is an associated object of the physical object mapped in the virtual environment; the modeling module is used to model the reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model; the rendering module is used to render the virtual object with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result; and the output module is used to output the target rendering result by calling a second interface, where the second interface includes a second parameter whose value is the target rendering result.
It should be noted that the acquisition module 1302, the generation module 1304, the modeling module 1306, the rendering module 1308, and the output module 1310 correspond to steps S902 to S910 in Example 4. The five modules implement the same examples and application scenarios as the corresponding steps, but are not limited to what is disclosed in Example 1. It should be noted that the above modules or units may be hardware components, or software components stored in a memory (for example, the memory 104) and processed by one or more processors (for example, the processors 102a, 102b, ..., 102n); the above modules may also run as a part of the apparatus in the computer terminal 10 provided in Example 1.
It should be noted that the preferred implementation of this example is the same as that of Example 1 in terms of application scenario and implementation process, but is not limited to the embodiment provided in Example 1.
Example 9
Embodiments of the present application may provide an electronic device. The electronic device may be an AR/VR device, which may be any AR/VR device in a group of AR/VR devices. Optionally, in this embodiment, the AR/VR device may also be replaced by a terminal device such as a mobile terminal.
Alternatively, in this embodiment, the AR/VR device may be located in at least one network device among a plurality of network devices of the computer network.
In this embodiment, the above AR/VR device may execute the program code of the following steps of the virtual object rendering method: acquiring a captured image obtained by photographing a physical object in a real environment; generating a surface mesh of a virtual object based on the captured image, where the virtual object is an associated object of the physical object mapped in the virtual environment; modeling the reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model; and rendering the virtual object with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result.
Optionally, fig. 14 is a block diagram of a computer terminal according to an embodiment of the present application. As shown in fig. 14, the computer terminal A may include: one or more processors 102 (only one is shown), a memory 104, a memory controller, and a peripheral interface, where the peripheral interface is connected to a radio frequency module, an audio module, and a display.
The memory may be used to store software programs and modules, such as the program instructions/modules corresponding to the virtual object rendering method and apparatus in the embodiments of the present application; the processor executes the software programs and modules stored in the memory, thereby performing various functional applications and data processing, that is, implementing the virtual object rendering method described above. The memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory remotely located with respect to the processor, and such remote memory may be connected to the computer terminal A through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: acquiring a captured image obtained by photographing a physical object in a real environment; generating a surface mesh of a virtual object based on the captured image, where the virtual object is an associated object of the physical object mapped in the virtual environment; modeling the reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model; and rendering the virtual object with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result.
Optionally, the above processor may further execute program code for: obtaining vertex characteristics and vertex normal vectors of the surface grid, wherein the vertex characteristics are used for representing the characteristics of the vertices of the surface grid, and the vertex normal vectors are used for representing the normal vectors of the vertices of the surface grid; feature learning is carried out on the vertex features to obtain first vertex rendering features; carrying out reflection coding on the vertex characteristics and vertex normal vectors by utilizing a reflection coding network to obtain reflection characteristics; and modeling the reflected light based on at least the first vertex rendering features and the reflection features to obtain a neural rendering model.
Optionally, the above processor may further execute program code for: performing roughness estimation on the vertex characteristics by using a roughness estimation module to obtain vertex roughness; determining a reflection direction of the surface mesh based at least on the vertex normal vector; and encoding the vertex roughness and the reflection direction by using an encoding module to obtain reflection characteristics.
Optionally, the above processor may further execute program code for: acquiring a viewpoint position corresponding to a shot image; the reflection direction is determined based on the viewpoint position and the vertex normal vector.
Optionally, the above processor may further execute program code for: acquiring a viewpoint position corresponding to a shot image; extracting the characteristics of the viewpoint positions to obtain viewpoint characteristic vectors; and modeling the reflected light based on the first vertex rendering feature, the reflection feature and the viewpoint feature vector to obtain a neural rendering model.
Optionally, the above processor may further execute program code for: inputting the first vertex rendering feature, the reflection feature and the viewpoint feature vector into an initial rendering model, and obtaining an initial rendering result output by the initial rendering model; and adjusting model parameters of the initial rendering model based on the similarity between the initial rendering result and the photographed image to obtain a neural rendering model.
Optionally, the above processor may further execute program code for: and performing feature learning on the vertex features by using a feature learning network to obtain first vertex rendering features.
Optionally, the above processor may further execute program code for: and carrying out normal vector estimation on the vertex characteristics by using a normal vector estimation network to obtain vertex normal vectors.
Optionally, the above processor may further execute program code for: feature learning is carried out on the vertex features of the surface mesh to obtain second vertex rendering features; and rendering the virtual object based on the second vertex rendering characteristics by using the neural rendering model to obtain a target rendering result.
Optionally, the above processor may further execute program code for: reading the second vertex rendering features and the neural rendering model stored in the storage device; and rendering the virtual object based on the second vertex rendering features with the neural rendering model to obtain a target rendering result.
Optionally, the above processor may further execute program code for: simplifying the surface grid to obtain a simplified grid; mapping the simplified grid to a two-dimensional image space to obtain a two-dimensional mapping result; dense sampling is carried out on the two-dimensional mapping result, and two-dimensional sampling points are obtained; mapping the two-dimensional sampling points to a virtual space to obtain three-dimensional sampling points; and obtaining vertex characteristics based on the characteristics of the vertices corresponding to the three-dimensional sampling points in the surface grid.
The processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: displaying a captured image on the operation interface in response to an input instruction acting on the operation interface, where the captured image is obtained by photographing a physical object in a real environment; and displaying a target rendering result on the operation interface in response to a rendering instruction acting on the operation interface, where the target rendering result is obtained by rendering a virtual object with a neural rendering model based on the vertex features of a surface mesh of the virtual object, the virtual object is an associated object of the physical object mapped in the virtual environment, the neural rendering model is obtained by modeling the reflected light of the surface of the physical object based on the surface mesh, and the surface mesh is generated based on the captured image.
The processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: displaying a captured image on the presentation screen of a virtual reality (VR) device or an augmented reality (AR) device, where the captured image is obtained by photographing a physical object in a real environment; generating a surface mesh of a virtual object based on the captured image, where the virtual object is an associated object of the physical object mapped in the virtual environment; modeling the reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model; rendering the virtual object with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result; and driving the VR device or AR device to display the target rendering result.
The processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: acquiring a captured image by calling a first interface, where the first interface includes a first parameter whose value is the captured image, and the captured image is obtained by photographing a physical object in a real environment; generating a surface mesh of a virtual object based on the captured image, where the virtual object is an associated object of the physical object mapped in the virtual environment; modeling the reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model; rendering the virtual object with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result; and outputting the target rendering result by calling a second interface, where the second interface includes a second parameter whose value is the target rendering result.
With the embodiments of the present application, a captured image obtained by photographing a physical object in a real environment is acquired; a surface mesh of a virtual object is generated based on the captured image, where the virtual object is an associated object of the physical object mapped in the virtual environment; the reflected light of the surface of the physical object is modeled based on the surface mesh to obtain a neural rendering model; and the virtual object is rendered with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result, improving the rendering effect of rendering the virtual object. It is easy to note that the reflected light of the surface of the physical object can be modeled in combination with the surface mesh of the virtual object, yielding a neural rendering model that accounts for the reflected-light component. This effectively improves the realism of reflected-light rendering, improves the rendering effect in subsequent rendering, and thus solves the technical problem in the related art of poor rendering effects when rendering virtual objects.
It will be appreciated by those skilled in the art that the configuration shown in fig. 14 is only illustrative, and the computer terminal may be a smart phone (such as an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, etc. Fig. 14 does not limit the structure of the electronic device. For example, the computer terminal a may also include more or fewer components (such as a network interface, a display device, etc.) than shown in fig. 14, or have a different configuration than shown in fig. 14.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program for instructing a terminal device to execute in association with hardware, the program may be stored in a computer readable storage medium, and the storage medium may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic or optical disk, and the like.
Example 10
Embodiments of the present application also provide a computer-readable storage medium. Alternatively, in this embodiment, the computer-readable storage medium may be used to store program code executed by the virtual object rendering method provided in embodiment 1.
Alternatively, in this embodiment, the above-mentioned computer readable storage medium may be located in any one of the AR/VR device terminals in the AR/VR device network or in any one of the mobile terminals in the mobile terminal group.
Optionally, in this embodiment, the computer-readable storage medium is configured to store program code for performing the following steps: acquiring a captured image obtained by photographing a physical object in a real environment; generating a surface mesh of a virtual object based on the captured image, where the virtual object is an associated object of the physical object mapped in the virtual environment; modeling the reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model; and rendering the virtual object with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result.
Optionally, the above-mentioned storage medium is further configured to store program code for performing the steps of: obtaining vertex characteristics and vertex normal vectors of the surface grid, wherein the vertex characteristics are used for representing the characteristics of the vertices of the surface grid, and the vertex normal vectors are used for representing the normal vectors of the vertices of the surface grid; feature learning is carried out on the vertex features to obtain first vertex rendering features; carrying out reflection coding on the vertex characteristics and vertex normal vectors by utilizing a reflection coding network to obtain reflection characteristics; and modeling the reflected light based on at least the first vertex rendering features and the reflection features to obtain a neural rendering model.
Optionally, the above-mentioned storage medium is further configured to store program code for performing the steps of: performing roughness estimation on the vertex characteristics by using a roughness estimation module to obtain vertex roughness; determining a reflection direction of the surface mesh based at least on the vertex normal vector; and encoding the vertex roughness and the reflection direction by using an encoding module to obtain reflection characteristics.
Optionally, the above-mentioned storage medium is further configured to store program code for performing the steps of: acquiring a viewpoint position corresponding to a shot image; the reflection direction is determined based on the viewpoint position and the vertex normal vector.
Optionally, the above-mentioned storage medium is further configured to store program code for performing the steps of: acquiring a viewpoint position corresponding to a shot image; extracting the characteristics of the viewpoint positions to obtain viewpoint characteristic vectors; and modeling the reflected light based on the first vertex rendering feature, the reflection feature and the viewpoint feature vector to obtain a neural rendering model.
Optionally, the above-mentioned storage medium is further configured to store program code for performing the steps of: inputting the first vertex rendering feature, the reflection feature and the viewpoint feature vector into an initial rendering model, and obtaining an initial rendering result output by the initial rendering model; and adjusting model parameters of the initial rendering model based on the similarity between the initial rendering result and the photographed image to obtain a neural rendering model.
Optionally, the above-mentioned storage medium is further configured to store program code for performing the steps of: and performing feature learning on the vertex features by using a feature learning network to obtain first vertex rendering features.
Optionally, the above-mentioned storage medium is further configured to store program code for performing the steps of: and carrying out normal vector estimation on the vertex characteristics by using a normal vector estimation network to obtain vertex normal vectors.
Optionally, the above-mentioned storage medium is further configured to store program code for performing the steps of: feature learning is carried out on the vertex features of the surface mesh to obtain second vertex rendering features; and rendering the virtual object based on the second vertex rendering characteristics by using the neural rendering model to obtain a target rendering result.
Optionally, the above storage medium is further configured to store program code for performing the following steps: reading the second vertex rendering features and the neural rendering model stored in the storage device; and rendering the virtual object based on the second vertex rendering features with the neural rendering model to obtain a target rendering result.
Optionally, the above-mentioned storage medium is further configured to store program code for performing the steps of: simplifying the surface grid to obtain a simplified grid; mapping the simplified grid to a two-dimensional image space to obtain a two-dimensional mapping result; dense sampling is carried out on the two-dimensional mapping result, and two-dimensional sampling points are obtained; mapping the two-dimensional sampling points to a virtual space to obtain three-dimensional sampling points; and obtaining vertex characteristics based on the characteristics of the vertices corresponding to the three-dimensional sampling points in the surface grid.
Optionally, in this embodiment, the computer-readable storage medium is configured to store program code for performing the following steps: displaying a captured image on the operation interface in response to an input instruction acting on the operation interface, where the captured image is obtained by photographing a physical object in a real environment; and displaying a target rendering result on the operation interface in response to a rendering instruction acting on the operation interface, where the target rendering result is obtained by rendering a virtual object with a neural rendering model based on the vertex features of a surface mesh of the virtual object, the virtual object is an associated object of the physical object mapped in the virtual environment, the neural rendering model is obtained by modeling the reflected light of the surface of the physical object based on the surface mesh, and the surface mesh is generated based on the captured image.
Optionally, in this embodiment, the computer-readable storage medium is configured to store program code for performing the following steps: displaying a captured image on the presentation screen of a virtual reality (VR) device or an augmented reality (AR) device, where the captured image is obtained by photographing a physical object in a real environment; generating a surface mesh of a virtual object based on the captured image, where the virtual object is an associated object of the physical object mapped in the virtual environment; modeling the reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model; rendering the virtual object with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result; and driving the VR device or AR device to display the target rendering result.
Optionally, in this embodiment, the computer-readable storage medium is configured to store program code for performing the following steps: acquiring a captured image by calling a first interface, where the first interface includes a first parameter whose value is the captured image, and the captured image is obtained by photographing a physical object in a real environment; generating a surface mesh of a virtual object based on the captured image, where the virtual object is an associated object of the physical object mapped in the virtual environment; modeling the reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model; rendering the virtual object with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result; and outputting the target rendering result by calling a second interface, where the second interface includes a second parameter whose value is the target rendering result.
With the embodiments of the present application, a captured image obtained by photographing a physical object in a real environment is acquired; a surface mesh of a virtual object is generated based on the captured image, where the virtual object is an associated object of the physical object mapped in the virtual environment; the reflected light of the surface of the physical object is modeled based on the surface mesh to obtain a neural rendering model; and the virtual object is rendered with the neural rendering model based on the vertex features of the surface mesh to obtain a target rendering result, improving the rendering effect of rendering the virtual object. It is easy to note that the reflected light of the surface of the physical object can be modeled in combination with the surface mesh of the virtual object, yielding a neural rendering model that accounts for the reflected-light component. This effectively improves the realism of reflected-light rendering, improves the rendering effect in subsequent rendering, and thus solves the technical problem in the related art of poor rendering effects when rendering virtual objects.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is merely a logical function division, and there may be another division manner in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the related art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present application, and these improvements and refinements shall also fall within the protection scope of the present application.

Claims (14)

1. A method of rendering a virtual object, comprising:
acquiring a shooting image obtained by shooting a physical object in a real environment;
generating a surface mesh of a virtual object based on the shooting image, wherein the virtual object is an associated object of the physical object mapped in a virtual environment;
modeling reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model;
and rendering the virtual object by using the neural rendering model based on the vertex characteristics of the surface mesh to obtain a target rendering result.
2. The method of claim 1, wherein modeling the reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model comprises:
obtaining vertex characteristics and vertex normal vectors of the surface mesh, wherein the vertex characteristics are used for representing the characteristics of the vertices of the surface mesh, and the vertex normal vectors are used for representing normal vectors of the vertices of the surface mesh;
performing feature learning on the vertex characteristics to obtain first vertex rendering features;
performing reflection coding on the vertex characteristics and the vertex normal vector by using a reflection coding network to obtain reflection characteristics;
and modeling the reflected light based at least on the first vertex rendering features and the reflection characteristics to obtain the neural rendering model.
3. The method of claim 2, wherein the reflection coding network comprises: a roughness estimation module and an encoding module, and wherein performing reflection coding on the vertex characteristics and the vertex normal vector by using the reflection coding network to obtain the reflection characteristics comprises:
performing roughness estimation on the vertex characteristics by using the roughness estimation module to obtain vertex roughness;
determining a reflection direction of the surface mesh based at least on the vertex normal vector;
and encoding the vertex roughness and the reflection direction by using the encoding module to obtain the reflection characteristics.
4. The method of claim 3, wherein determining the reflection direction of the surface mesh based at least on the vertex normal vector comprises:
acquiring a viewpoint position corresponding to the shooting image;
and determining the reflection direction based on the viewpoint position and the vertex normal vector.
5. The method of claim 2, wherein modeling the reflected light based at least on the first vertex rendering features and the reflection characteristics to obtain the neural rendering model comprises:
acquiring a viewpoint position corresponding to the shooting image;
extracting the characteristics of the viewpoint position to obtain a viewpoint feature vector;
and modeling the reflected light based on the first vertex rendering features, the reflection characteristics, and the viewpoint feature vector to obtain the neural rendering model.
6. The method of claim 5, wherein modeling the reflected light based on the first vertex rendering features, the reflection characteristics, and the viewpoint feature vector to obtain the neural rendering model comprises:
inputting the first vertex rendering features, the reflection characteristics, and the viewpoint feature vector into an initial rendering model, and obtaining an initial rendering result output by the initial rendering model;
and adjusting model parameters of the initial rendering model based on the similarity between the initial rendering result and the shooting image to obtain the neural rendering model.
7. The method of claim 2, wherein performing feature learning on the vertex characteristics to obtain first vertex rendering features comprises:
performing feature learning on the vertex characteristics by using a feature learning network to obtain the first vertex rendering features.
8. The method of claim 1, wherein rendering the virtual object using the neural rendering model based on the vertex characteristics of the surface mesh to obtain a target rendering result comprises:
performing feature learning on the vertex characteristics of the surface mesh to obtain second vertex rendering features;
and rendering the virtual object based on the second vertex rendering features by using the neural rendering model to obtain the target rendering result.
9. The method of claim 8, wherein the method further comprises:
simplifying the surface mesh to obtain a simplified mesh;
mapping the simplified mesh to a two-dimensional image space to obtain a two-dimensional mapping result;
performing dense sampling on the two-dimensional mapping result to obtain two-dimensional sampling points;
mapping the two-dimensional sampling points to a virtual space to obtain three-dimensional sampling points;
and obtaining the vertex characteristics based on the characteristics of the vertices corresponding to the three-dimensional sampling points in the surface mesh.
10. A method of rendering a virtual object, comprising:
responding to an input instruction acting on an operation interface, and displaying a shooting image on the operation interface, wherein the shooting image is obtained by shooting a physical object in a real environment;
and responding to a rendering instruction acting on the operation interface, and displaying a target rendering result on the operation interface, wherein the target rendering result is obtained by rendering a virtual object by using a neural rendering model based on vertex characteristics of a surface mesh of the virtual object, the virtual object is an associated object of the physical object mapped in a virtual environment, the neural rendering model is obtained by modeling reflected light of the surface of the physical object based on the surface mesh, and the surface mesh is generated based on the shooting image.
11. A method of rendering a virtual object, comprising:
displaying a shooting image on a presentation picture of Virtual Reality (VR) equipment or Augmented Reality (AR) equipment, wherein the shooting image is obtained by shooting a physical object in a real environment;
generating a surface mesh of a virtual object based on the shooting image, wherein the virtual object is an associated object of the physical object mapped in a virtual environment;
modeling reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model;
rendering the virtual object by using the neural rendering model based on the vertex characteristics of the surface mesh to obtain a target rendering result;
and driving the VR equipment or the AR equipment to display the target rendering result.
12. A method of rendering a virtual object, comprising:
acquiring a shooting image by calling a first interface, wherein the first interface comprises a first parameter, the parameter value of the first parameter is the shooting image, and the shooting image is obtained by shooting a physical object in a real environment;
generating a surface mesh of a virtual object based on the shooting image, wherein the virtual object is an associated object of the physical object mapped in a virtual environment;
modeling reflected light of the surface of the physical object based on the surface mesh to obtain a neural rendering model;
rendering the virtual object by using the neural rendering model based on the vertex characteristics of the surface mesh to obtain a target rendering result;
And outputting the target rendering result by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the target rendering result.
13. An electronic device, comprising:
a memory storing an executable program;
a processor for executing the program, wherein the program when run performs the method of any of claims 1 to 12.
14. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored executable program, wherein the executable program when run controls a device in which the computer readable storage medium is located to perform the method of any one of claims 1 to 12.