CN110503711B - Method and device for rendering virtual object in augmented reality - Google Patents

Method and device for rendering virtual object in augmented reality

Info

Publication number
CN110503711B
CN110503711B
Authority
CN
China
Prior art keywords
virtual object
image data
virtual
data
illumination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910779793.4A
Other languages
Chinese (zh)
Other versions
CN110503711A (en)
Inventor
白雪莲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN201910779793.4A
Publication of CN110503711A
Application granted
Publication of CN110503711B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Abstract

The invention provides a method and a device for rendering a virtual object in augmented reality. The method comprises the following steps: S1: acquiring image data of a real scene; S2: identifying image data of a reference object from the acquired image data of the real scene according to the relevant information of the virtual object in the virtual scene; S3: migrating the illumination data of the image data of the reference object into the image data of the virtual object by using an artificial intelligence algorithm to obtain the target virtual object. By transferring the illumination of a reference object in the real scene onto the virtual object in the virtual scene with an artificial intelligence algorithm, the method and the device allow the virtual object to blend into the real scene more realistically.

Description

Method and device for rendering virtual object in augmented reality
Technical Field
The present invention relates to the field of augmented reality technologies, and in particular, to a method and an apparatus for rendering a virtual object in augmented reality.
Background
Augmented Reality (AR) is a technology that seamlessly integrates real-world and virtual-world information: by computing the position and orientation of the camera image in real time and overlaying a corresponding virtual model, it places the virtual model in the real world and lets the user interact with it. However, how the virtual model is rendered in augmented reality has always been key to the visual experience, and the illumination rendering of the virtual model has the greatest influence on it.
In the prior art, illumination rendering of a virtual model in augmented reality relies on a manually configured virtual light source; the rendered virtual model therefore looks harsh and out of place, cannot blend well into the real world, and does not meet users' expectations. Illumination rendering of the virtual model can also be addressed with traditional methods such as light probes and illumination maps, but these methods are generally constrained by the scene and the equipment, cannot be applied broadly, and thus have their own limitations.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention provides a method and an apparatus for rendering a virtual object in augmented reality, in which the illumination of a reference object in the real scene is transferred onto the virtual object in the virtual scene by an artificial intelligence algorithm, so that the virtual object blends into the real scene more realistically.
According to an aspect of the present invention, there is provided a method of rendering a virtual object in augmented reality, the method comprising the steps of: S1: acquiring image data of a real scene; S2: identifying image data of a reference object from the acquired image data of the real scene according to the relevant information of the virtual object in the virtual scene; S3: migrating the illumination data of the image data of the reference object into the image data of the virtual object by using an artificial intelligence algorithm to obtain the target virtual object.
Preferably, the related information of the virtual object includes a material parameter of the virtual object and/or position information of the virtual object in the virtual scene.
Preferably, the step S2 includes: identifying image data of objects of the same category similar in material to the virtual object from the acquired image data of the real scene according to the material parameters of the virtual object, or identifying image data of objects adjacent to the position of the virtual object from the acquired image data of the real scene according to the position information of the virtual object in the virtual scene.
Preferably, the step S3 includes: transferring the illumination data of the image data of the same-category objects similar in material to the virtual object into the image data of the virtual object, or transferring the illumination data of the image data of the object adjacent to the position of the virtual object into the image data of the virtual object.
Preferably, the step of transferring the illumination data of the image data of the same-category objects similar in material to the virtual object into the image data of the virtual object includes: preliminarily rendering a 3D model of the virtual object in the virtual scene; transferring the illumination data of the same-category objects onto the preliminarily rendered virtual object based on an artificial intelligence algorithm to obtain image data of the virtual object with illumination; and performing secondary rendering on the image data of the virtual object with illumination to obtain the target virtual object.
Preferably, the step of migrating the illumination data of the image data of the object adjacent to the position of the virtual object into the image data of the virtual object includes: preliminarily rendering a 3D model of the virtual object in the virtual scene; and transferring the illumination data and the shadow data of the object adjacent to the position of the virtual object onto the preliminarily rendered virtual object based on an artificial intelligence algorithm to obtain the target virtual object.
Preferably, the step of preliminarily rendering the virtual object in the virtual scene includes: rendering the virtual object using ambient light according to the pose data of the current user to obtain image data of the preliminarily rendered virtual object for the current perspective of the user.
Preferably, the step of performing secondary rendering on the image data of the virtual object with illumination to obtain the target virtual object includes: performing shadow recognition on the image data of the virtual object with illumination to obtain the illumination direction of the virtual object, and rendering a shadow image of the virtual object according to the obtained illumination direction of the virtual object and the plane position where the virtual object is placed.
According to another aspect of the present invention, there is provided an apparatus for rendering a virtual object in augmented reality, the apparatus including: a scene data module configured to acquire image data of a real scene; the object identification module is configured to identify image data of a reference object from the acquired image data of the real scene according to relevant information of a virtual object in the virtual scene; a data processing module configured to migrate the illumination data of the image data of the reference object into the image data of the virtual object using an artificial intelligence algorithm to obtain a target virtual object.
Preferably, the related information of the virtual object includes material parameters of the virtual object and/or position information of the virtual object in the virtual scene.
Preferably, the object identification module is configured to: identify image data of objects of the same category similar in material to the virtual object from the acquired image data of the real scene according to the material parameters of the virtual object, or identify image data of objects adjacent to the position of the virtual object from the acquired image data of the real scene according to the position information of the virtual object in the virtual scene.
Preferably, the data processing module is configured to: transfer the illumination data of the image data of the same-category objects similar in material to the virtual object into the image data of the virtual object, or transfer the illumination data of the image data of the object adjacent to the position of the virtual object into the image data of the virtual object.
Preferably, when performing illumination migration on the image data of the same-category objects similar in material to the virtual object, the data processing module is configured to: preliminarily render a 3D model of the virtual object in the virtual scene; transfer the illumination data of the same-category objects onto the preliminarily rendered virtual object based on an artificial intelligence algorithm to obtain image data of the virtual object with illumination; and perform secondary rendering on the image data of the virtual object with illumination to obtain the target virtual object.
Preferably, when performing illumination migration on the image data of the object adjacent to the virtual object, the data processing module is configured to: preliminarily render a 3D model of the virtual object in the virtual scene; and transfer the illumination data and the shadow data of the object adjacent to the position of the virtual object onto the preliminarily rendered virtual object based on an artificial intelligence algorithm to obtain the target virtual object.
Preferably, the data processing module is further configured to: render the virtual object using ambient light according to the pose data of the current user to obtain image data of the preliminarily rendered virtual object for the current perspective of the user.
Preferably, the secondary rendering is to perform shadow recognition on the image data of the virtual object with illumination to obtain the illumination direction of the virtual object, and to render a shadow image of the virtual object according to the illumination direction of the virtual object and the plane position where the virtual object is placed.
According to another aspect of the invention, there is provided a computer readable storage medium storing a computer program which, when executed by a processor, performs a method of rendering virtual objects in augmented reality as described above.
According to another aspect of the invention, there is provided a computer device comprising a processor and a memory storing a computer program which, when executed by the processor, performs the method of rendering virtual objects in augmented reality as described above.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
FIG. 1 shows a flow diagram of a method of rendering virtual objects in augmented reality, according to an embodiment of the invention;
FIG. 2 illustrates a block diagram of an apparatus for rendering virtual objects in augmented reality, according to an embodiment of the invention.
In the drawings, like reference numerals will be understood to refer to like elements, features and structures.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings, in which like reference numerals refer to like parts throughout.
FIG. 1 is a flow diagram illustrating a method of rendering virtual objects in augmented reality according to an embodiment of the present invention.
As shown in fig. 1, in step S1, image data of a real scene is acquired. According to an embodiment of the present invention, the image data of the real scene is captured by a shooting device such as a camera or a video camera, and the captured image data is stored, wherein the acquired image data includes picture data and/or video data.
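As an illustrative sketch only (the patent does not prescribe any particular capture API), grabbing one frame of picture data with OpenCV might look like the following; the device index and file name are hypothetical.

```python
import cv2

def capture_scene_frame(device_index=0):
    """Grab one frame of the real scene from a camera (picture data)."""
    cap = cv2.VideoCapture(device_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("failed to read a frame from the camera")
    return frame  # BGR image as a NumPy array

# Store the captured picture data for the later reference-object search (step S2).
frame = capture_scene_frame()
cv2.imwrite("real_scene.png", frame)
```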
Next, in step S2, image data of a reference object is identified from the acquired image data of the real scene according to the relevant information of the virtual object in the virtual scene. Specifically, the relevant information of the virtual object includes the material parameters of the virtual object and/or the position information of the virtual object in the virtual scene. Accordingly, image data of objects of the same category similar in material to the virtual object can be identified from the acquired image data of the real scene according to the material parameters of the virtual object, or image data of objects adjacent to the position of the virtual object can be identified from the acquired image data of the real scene according to the position information of the virtual object in the virtual scene. The material parameters of the virtual object include illumination-related parameters of the object such as highlight intensity, highlight area, self-luminance coefficient, reflectivity, refractive index, glossiness and roughness; "similar material" means that the material parameters are the same as or differ only slightly from those of the virtual object; and the position information of the virtual object in the virtual scene is the three-dimensional coordinate information of the virtual object. It should be understood that the above material parameters of the virtual object are only illustrative examples, and the kinds of material parameters that can be used in the present invention are not limited thereto.
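The notion of similar material (parameters that are the same or differ only slightly) can be made concrete with a simple per-parameter tolerance test, sketched below; the parameter names, their normalization to a common scale, and the tolerance value are assumptions for illustration, not values prescribed by the patent.

```python
import numpy as np

# Illustrative material descriptor following the parameters named above; all values
# are assumed to be normalized to [0, 1] (an assumption, not part of the patent).
MATERIAL_KEYS = ["highlight_intensity", "highlight_area", "self_luminance",
                 "reflectivity", "refractive_index", "glossiness", "roughness"]

def material_vector(params):
    return np.array([params[k] for k in MATERIAL_KEYS], dtype=float)

def is_similar_material(virtual_params, candidate_params, tolerance=0.15):
    """Treat two materials as similar when every parameter differs only slightly."""
    diff = np.abs(material_vector(virtual_params) - material_vector(candidate_params))
    return bool(np.all(diff <= tolerance))
```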
According to an embodiment of the present invention, assume that the image data of the real scene acquired in step S1 is a picture T; the image data of the reference object is then identified in the picture T according to the material parameters of the virtual object or its position information in the virtual scene. When the image data of the reference object is identified according to the material parameters of the virtual object in the virtual scene, if the material parameters of the virtual object are the highlight parameters of a PBR material system, objects of the same category (hereinafter referred to as reference object A) whose material parameters are the highlight parameters of the PBR material system, or the highlight parameters of a Vray material system that are similar to the PBR highlight parameters, are identified in the picture T, and the image data of these same-category objects is used as the image data of the reference object. Here, objects of the same category may be identified by feature point extraction or by an artificial intelligence algorithm. When the image data of the reference object is identified based on the position information of the virtual object in the virtual scene, it suffices to identify an object near the position where the virtual object is placed (hereinafter referred to as reference object B); the material parameters of the reference object may be identified after the reference object itself has been identified. Here, an object near the position where the virtual object is placed refers to an object in the image data of the real scene that is adjacent to the virtual object and lies on the same plane. It should be understood that the above identification methods are merely illustrative examples, and the identification methods that can be employed by the present invention are not limited thereto.
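The position-based case (reference object B) can be sketched as follows: among real objects already detected in the scene, keep those whose anchor points lie close to the placement point of the virtual object and on the same supporting plane. The upstream detector, the choice of the y axis as the vertical axis, and the distance thresholds are assumptions for illustration.

```python
import numpy as np

def find_adjacent_objects(virtual_position, detected_objects,
                          max_distance=0.5, plane_tolerance=0.02):
    """Return detected real objects adjacent to the virtual object and on the same plane.

    detected_objects: list of dicts with a 3D 'position' (x, y, z) in scene coordinates,
    produced by some upstream detector (assumed, not specified by the patent).
    """
    vx, vy, vz = virtual_position
    adjacent = []
    for obj in detected_objects:
        ox, oy, oz = obj["position"]
        same_plane = abs(oy - vy) <= plane_tolerance   # y taken as the up axis
        close_by = np.hypot(ox - vx, oz - vz) <= max_distance
        if same_plane and close_by:
            adjacent.append(obj)
    return adjacent
```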
In step S3, an artificial intelligence algorithm is used to migrate the illumination of the reference object's image data onto the virtual object to obtain the target virtual object. Specifically, an artificial intelligence algorithm may be used to migrate illumination from the image data of same-category objects similar in material to the virtual object, or from the image data of objects adjacent to the position of the virtual object, in order to obtain the target virtual object. Here, common artificial intelligence algorithms include Artificial Neural Network algorithms, Bayesian algorithms, Decision Tree algorithms, Linear Classifier algorithms, and the like. When the artificial intelligence algorithm migrates illumination from the image data of same-category objects similar in material to the virtual object, the 3D model of the virtual object in the virtual scene is first preliminarily rendered, the illumination data of the same-category objects is then migrated onto the preliminarily rendered virtual object based on an artificial intelligence algorithm model to obtain image data of the virtual object with illumination, and the image data of the virtual object with illumination is then secondarily rendered to obtain the target virtual object. When the artificial intelligence algorithm migrates illumination from the image data of an object adjacent to the position of the virtual object, after the 3D model of the virtual object in the virtual scene has been preliminarily rendered, the illumination data and the shadow data of the adjacent object can be migrated directly onto the preliminarily rendered virtual object based on the artificial intelligence algorithm model to obtain the target virtual object. Here, the preliminary rendering refers to rendering the virtual object with ambient light according to the pose data of the current user to obtain image data of the virtual object at the user's current viewing angle, where the pose data of the user includes motion data such as the positional relationship between the user and the virtual object, the three-dimensional coordinates of the user, and the user's pose.
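A minimal structural sketch of step S3 is given below, with the components described above passed in as placeholder callables; the function names and signatures are hypothetical and only mirror the text.

```python
def render_target_virtual_object(virtual_model, reference_image, user_pose,
                                 prelim_render, migrate_light,
                                 migrate_light_and_shadow, secondary_render,
                                 by_material=True):
    """Sketch of step S3: migrate illumination from the reference onto the virtual object.

    The callables stand for the components described in the text:
    prelim_render            -- ambient-light render of the 3D model for the user's viewpoint
    migrate_light            -- AI model transferring illumination only (same-category case)
    migrate_light_and_shadow -- AI model transferring illumination and shadow (adjacent case)
    secondary_render         -- shadow recognition plus shadow-image rendering
    """
    prelim = prelim_render(virtual_model, user_pose)
    if by_material:
        lit = migrate_light(reference_image, prelim)
        return secondary_render(lit)   # adds the shadow image of the virtual object
    # Adjacent reference: shadow data is migrated together with the illumination,
    # so no secondary rendering pass is needed.
    return migrate_light_and_shadow(reference_image, prelim)
```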
According to an embodiment of the present invention, when illumination is migrated from the image data of a same-category object similar in material to the virtual object, as in the above example, the image data of reference object A and the image data of the preliminarily rendered virtual object may be used as the inputs of a pre-trained convolutional neural network model; the output of the convolutional neural network model (i.e., the result of the illumination migration) is then the image data of the virtual object with illumination, and the virtual object with illumination is secondarily rendered to obtain the target virtual object. Here, the convolutional neural network implements a process of transferring the illumination of an illuminated object G onto a non-illuminated object F and outputting object F, where the illuminated object G and the non-illuminated object F may be similar in material. The pre-trained convolutional neural network model is obtained by a large number of training iterations on the convolutional neural network, and its output is the image data of object F with illumination. The secondary rendering refers to performing shadow recognition on the virtual object with illumination in the image data output by the convolutional neural network model to obtain the illumination direction and the shadow data of the virtual object, and rendering a shadow image of the virtual object according to the illumination direction, the shadow data and the plane position where the virtual object is placed. According to an embodiment of the present invention, when illumination is migrated from the image data of an object adjacent to the position of the virtual object, as in the above example, the image data of reference object B, the image data of the preliminarily rendered virtual object and the material parameters of the two objects are used as the inputs of another pre-trained convolutional neural network model; the output of this convolutional neural network model (i.e., the result of the illumination migration) is then the image data of the target virtual object. Performing illumination migration on the virtual object with this deep convolutional neural network model migrates the illumination data of reference object B together with the recognized shadow data onto the preliminarily rendered virtual object, so that the image data of the target virtual object can be obtained without a secondary rendering. Here, the convolutional neural network implements a process of migrating the illumination of an illuminated object G together with the recognized shadow data of object G onto a non-illuminated object F and outputting object F, where the materials of the illuminated object G and the non-illuminated object F may differ. This convolutional neural network model is likewise obtained by a large number of training iterations, and its output is the image data of object F with illumination and shadow, that is, the image data of the target virtual object.
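The patent does not specify a network architecture or framework; purely as an illustration, a toy encoder-decoder in PyTorch could take the reference image and the preliminarily rendered virtual object, concatenated channel-wise, and output the relit virtual object. The layer sizes, the framework, and the input convention below are assumptions.

```python
import torch
import torch.nn as nn

class IlluminationTransferNet(nn.Module):
    """Toy encoder-decoder for illumination migration (illustrative only).

    Input : reference image and preliminarily rendered virtual object, each 3-channel RGB,
            concatenated into a 6-channel tensor.
    Output: 3-channel image of the virtual object with migrated illumination.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, reference, prelim_rendered):
        x = torch.cat([reference, prelim_rendered], dim=1)  # N x 6 x H x W
        return self.decoder(self.encoder(x))

# Usage sketch with random tensors standing in for real image data.
net = IlluminationTransferNet()
reference = torch.rand(1, 3, 256, 256)
prelim = torch.rand(1, 3, 256, 256)
lit_virtual_object = net(reference, prelim)  # to be secondarily rendered (shadow pass)
```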
Fig. 2 is a block diagram illustrating an apparatus for rendering a virtual object in augmented reality according to an embodiment of the present invention.
As shown in fig. 2, an apparatus 200 for rendering virtual objects in augmented reality includes a scene data module 201, an object recognition module 202, and a data processing module 203. Specifically, the scene data module 201 is configured to acquire image data of a real scene, the object identification module 202 is configured to identify image data of a reference object from the acquired image data of the real scene according to related information of a virtual object in the virtual scene, and the data processing module 203 is configured to perform illumination migration on the image data of the reference object by using an artificial intelligence algorithm to obtain a target virtual object.
The scene data module 201 includes a shooting device, such as a camera or a video camera, for capturing image data of the real scene, and a storage device for storing the captured image data, wherein the captured image data includes picture data and/or video data.
The object identification module 202 identifies image data of a reference object from the image data of the real scene acquired from the scene data module 201 according to relevant information of a virtual object in the virtual scene, where the relevant information of the virtual object includes material parameters of the virtual object and/or position information of the virtual object in the virtual scene. Specifically, the object identification module 202 may identify, from the image data of the real scene acquired from the scene data module 201, the same type of object having a similar material to the virtual object as the reference object according to the material parameter of the virtual object, and may also identify, from the image data of the real scene acquired from the scene data module 201, the object adjacent to the virtual object as the reference object according to the position information of the virtual object in the virtual scene.
The data processing module 203 performs illumination migration on the image data of the reference object identified by the object identification module 202 based on an artificial intelligence algorithm to obtain the target virtual object. Specifically, when the data processing module 203 migrates illumination from the image data of same-category objects similar in material to the virtual object, the 3D model of the virtual object in the virtual scene is preliminarily rendered, the illumination data of the same-category objects is migrated onto the preliminarily rendered virtual object based on the artificial intelligence algorithm model, and the target virtual object is then obtained by secondarily rendering the virtual object after the illumination migration. The preliminary rendering refers to rendering the virtual object with ambient light according to the pose data of the current user to obtain image data of the virtual object at the user's current viewing angle, where the pose data of the user includes data such as the positional relationship between the user and the virtual object, the three-dimensional coordinates of the user and the user's pose. The secondary rendering refers to performing shadow recognition on the virtual object with illumination to obtain the illumination direction and the shadow data of the virtual object, and rendering a shadow image of the virtual object according to the illumination direction, the shadow data and the plane position where the virtual object is placed. In addition, when the data processing module 203 migrates illumination from the image data of an object adjacent to the virtual object, it may likewise preliminarily render the 3D model of the virtual object in the virtual scene and then migrate the illumination data and the shadow data of the adjacent object onto the preliminarily rendered virtual object based on the artificial intelligence algorithm model to obtain the target virtual object.
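Purely as a structural sketch, the three modules of apparatus 200 could be wired together as below; the class and method names mirror the description but are hypothetical, not an API defined by the patent.

```python
class SceneDataModule:
    """Scene data module 201: acquires image data of the real scene."""
    def __init__(self, capture_fn):
        self.capture_fn = capture_fn          # e.g. the capture helper sketched above

    def acquire(self):
        return self.capture_fn()              # picture and/or video data

class ObjectIdentificationModule:
    """Object identification module 202: finds the reference object."""
    def identify_reference(self, scene_image, virtual_info):
        raise NotImplementedError             # material- or position-based identification

class DataProcessingModule:
    """Data processing module 203: preliminary render, illumination migration, secondary render."""
    def migrate(self, reference, virtual_object, user_pose):
        raise NotImplementedError

class AugmentedRealityRenderer:
    """Apparatus 200: scene data -> object identification -> data processing."""
    def __init__(self, scene, recognizer, processor):
        self.scene, self.recognizer, self.processor = scene, recognizer, processor

    def render_virtual_object(self, virtual_object, virtual_info, user_pose):
        scene_image = self.scene.acquire()
        reference = self.recognizer.identify_reference(scene_image, virtual_info)
        return self.processor.migrate(reference, virtual_object, user_pose)
```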
According to the method and the apparatus for rendering a virtual object in augmented reality provided by the embodiments of the present invention, the illumination of a reference object in the real scene can be transferred onto the virtual object in the virtual scene, so that the virtual object and the real scene are realistically fused together and the user's requirements for the visual experience are satisfied.
While the present invention has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims and their equivalents.

Claims (16)

1. A method of rendering virtual objects in augmented reality, the method comprising the steps of:
S1: acquiring image data of a real scene;
S2: identifying image data of a reference object from the acquired image data of the real scene according to the relevant information of the virtual object in the virtual scene;
S3: migrating, by using an artificial intelligence algorithm, the illumination data of the image data of the reference object into the image data of the preliminarily rendered virtual object to obtain a target virtual object,
wherein the virtual object is rendered using ambient light according to the pose data of the current user to obtain the image data of the preliminarily rendered virtual object for the current perspective of the user.
2. The method according to claim 1, wherein the information related to the virtual object comprises material parameters of the virtual object and/or position information of the virtual object in the virtual scene.
3. The method of claim 2, wherein the step S2 comprises:
identifying image data of objects of the same category similar in material to the virtual object from the acquired image data of the real scene according to the material parameters of the virtual object, or identifying image data of objects adjacent to the position of the virtual object from the acquired image data of the real scene according to the position information of the virtual object in the virtual scene.
4. The method of claim 3, wherein the step S3 comprises:
transferring the illumination data of the image data of the same-category objects similar in material to the virtual object into the image data of the preliminarily rendered virtual object, or transferring the illumination data of the image data of the object adjacent to the virtual object into the image data of the preliminarily rendered virtual object.
5. The method of claim 4, wherein the step of migrating the illumination data of the image data of the same class of objects similar to the virtual object in material to the image data of the preliminarily rendered virtual object comprises:
transferring the illumination data of the objects of the same category to the preliminarily rendered virtual object based on an artificial intelligence algorithm to obtain image data of the virtual object with illumination;
and performing secondary rendering on the image data of the virtual object with illumination to obtain a target virtual object.
6. The method of claim 4, wherein the step of migrating the illumination data of the image data of the object adjacent to the position of the virtual object into the image data of the preliminarily rendered virtual object comprises:
transferring the illumination data and the shadow data of the object adjacent to the position of the virtual object onto the preliminarily rendered virtual object based on an artificial intelligence algorithm to obtain a target virtual object.
7. The method of claim 5, wherein the step of secondarily rendering the image data of the illuminated virtual object to derive a target virtual object comprises:
performing shadow recognition on the image data of the virtual object with illumination to obtain the illumination direction of the virtual object, and rendering a shadow image of the virtual object according to the obtained illumination direction of the virtual object and the plane position where the virtual object is placed.
8. An apparatus for rendering virtual objects in augmented reality, the apparatus comprising:
a scene data module configured to acquire image data of a real scene;
the object identification module is configured to identify image data of a reference object from the acquired image data of the real scene according to relevant information of a virtual object in the virtual scene;
the data processing module is configured to use an artificial intelligence algorithm to migrate the illumination data of the image data of the reference object into the image data of the preliminarily rendered virtual object to obtain a target virtual object, wherein the virtual object is rendered using ambient light according to the pose data of the current user to obtain the image data of the preliminarily rendered virtual object at the current view angle of the user.
9. The apparatus according to claim 8, wherein the information related to the virtual object comprises material parameters of the virtual object and/or position information of the virtual object in the virtual scene.
10. The apparatus of claim 9, wherein the object identification module is configured to:
identify image data of objects of the same category similar in material to the virtual object from the acquired image data of the real scene according to the material parameters of the virtual object, or identify image data of objects adjacent to the position of the virtual object from the acquired image data of the real scene according to the position information of the virtual object in the virtual scene.
11. The apparatus of claim 10, wherein the data processing module is configured to:
transfer the illumination data of the image data of the same-category objects similar in material to the virtual object into the image data of the preliminarily rendered virtual object, or transfer the illumination data of the image data of the object adjacent to the position of the virtual object into the image data of the preliminarily rendered virtual object.
12. The apparatus of claim 11, wherein the data processing module, when performing illumination migration on the image data of the objects of the same category similar in material to the virtual object, is configured to:
transferring the illumination data of the objects of the same category to the preliminarily rendered virtual object based on an artificial intelligence algorithm to obtain image data of the virtual object with illumination;
and performing secondary rendering on the image data of the virtual object with illumination to obtain a target virtual object.
13. The apparatus of claim 11, wherein the data processing module, when performing illumination migration on the image data of the object adjacent to the position of the virtual object, is configured to:
transfer the illumination data and the shadow data of the object adjacent to the position of the virtual object onto the preliminarily rendered virtual object based on an artificial intelligence algorithm to obtain a target virtual object.
14. The apparatus of claim 12, wherein the data processing module is configured to: perform shadow recognition on the image data of the virtual object with illumination to obtain the illumination direction of the virtual object, and render a shadow image of the virtual object according to the illumination direction of the virtual object and the plane position where the virtual object is placed.
15. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, causes the processor to carry out the method according to any one of claims 1-7.
16. A computer arrangement comprising a processor and a memory storing a computer program, wherein the computer program, when executed by the processor, causes the processor to carry out the method according to any one of claims 1-7.
CN201910779793.4A 2019-08-22 2019-08-22 Method and device for rendering virtual object in augmented reality Active CN110503711B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910779793.4A CN110503711B (en) 2019-08-22 2019-08-22 Method and device for rendering virtual object in augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910779793.4A CN110503711B (en) 2019-08-22 2019-08-22 Method and device for rendering virtual object in augmented reality

Publications (2)

Publication Number Publication Date
CN110503711A CN110503711A (en) 2019-11-26
CN110503711B true CN110503711B (en) 2023-02-21

Family

ID=68588797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910779793.4A Active CN110503711B (en) 2019-08-22 2019-08-22 Method and device for rendering virtual object in augmented reality

Country Status (1)

Country Link
CN (1) CN110503711B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091492B (en) * 2019-12-23 2020-09-04 韶鼎人工智能科技有限公司 Face image illumination migration method based on convolutional neural network
CN111199573B (en) * 2019-12-30 2023-07-07 成都索贝数码科技股份有限公司 Virtual-real interaction reflection method, device, medium and equipment based on augmented reality
CN111275802B (en) * 2020-01-19 2023-04-21 杭州群核信息技术有限公司 PBR material rendering method and system based on VRAY
CN111292408B (en) * 2020-01-21 2022-02-01 武汉大学 Shadow generation method based on attention mechanism
CN113487662A (en) * 2021-07-02 2021-10-08 广州博冠信息科技有限公司 Picture display method and device, electronic equipment and storage medium
CN114385289B (en) * 2021-12-23 2024-01-23 北京字跳网络技术有限公司 Rendering display method and device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102881011A (en) * 2012-08-31 2013-01-16 北京航空航天大学 Region-segmentation-based portrait illumination transfer method
CN108305328A (en) * 2018-02-08 2018-07-20 网易(杭州)网络有限公司 Dummy object rendering intent, system, medium and computing device
CN109118571A (en) * 2018-07-23 2019-01-01 太平洋未来科技(深圳)有限公司 Method, apparatus and electronic equipment based on light information rendering virtual objects

Also Published As

Publication number Publication date
CN110503711A (en) 2019-11-26

Similar Documents

Publication Publication Date Title
CN110503711B (en) Method and device for rendering virtual object in augmented reality
CN111328396B (en) Pose estimation and model retrieval for objects in images
US20180012411A1 (en) Augmented Reality Methods and Devices
US10977818B2 (en) Machine learning based model localization system
US20240062488A1 (en) Object centric scanning
JP6011102B2 (en) Object posture estimation method
CN110070621B (en) Electronic device, method for displaying augmented reality scene and computer readable medium
US9710912B2 (en) Method and apparatus for obtaining 3D face model using portable camera
CN101305401B (en) Method for processing stereo video for gaming
US20180321776A1 (en) Method for acting on augmented reality virtual objects
CN108010118B (en) Virtual object processing method, virtual object processing apparatus, medium, and computing device
JP2018091667A (en) Information processing device, method for controlling information processing device, and program
CN109033989B (en) Target identification method and device based on three-dimensional point cloud and storage medium
CN107004279A (en) Natural user interface camera calibrated
Rambach et al. Learning 6dof object poses from synthetic single channel images
TW201839721A (en) Computer-implemented 3d model analysis method, electronic device, and non-transitory computer readable storage medium
US20200057778A1 (en) Depth image pose search with a bootstrapped-created database
US20170064284A1 (en) Producing three-dimensional representation based on images of a person
JP6016242B2 (en) Viewpoint estimation apparatus and classifier learning method thereof
JP2019211981A (en) Information processor, information processor controlling method and program
CN109799905B (en) Hand tracking method and advertising machine
EP4224429A1 (en) Systems and methods for visually indicating stale content in environment model
JP2021111375A (en) Augmentation of video flux of real scene
JP5975484B2 (en) Image processing device
KR101828340B1 (en) Method and apparatus for object extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant