CN108010118B - Virtual object processing method, virtual object processing apparatus, medium, and computing device - Google Patents


Info

Publication number: CN108010118B
Authority: CN (China)
Prior art keywords: shadow, real, virtual object, scene, information
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201711219793.6A
Other languages: Chinese (zh)
Other versions: CN108010118A (en)
Inventor
朱斯衎
陈艳蕾
赵辰
丛林
邵文坚
郭于晨
上官福豪
刘
唐秦崴
Current Assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Hangzhou Yixian Advanced Technology Co., Ltd.
Original Assignee
Hangzhou Yixian Advanced Technology Co., Ltd.
Application filed by Hangzhou Yixian Advanced Technology Co., Ltd.
Priority to CN201711219793.6A
Publication of CN108010118A
Application granted
Publication of CN108010118B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • G06T 15/60: Shadow generation
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection

Abstract

An embodiment of the invention provides a virtual object processing method comprising the following steps: obtaining proposed shadow information of a virtual object for an augmented reality scene; analyzing the real scene to determine a real object in the real scene for bearing the shadow of the virtual object; determining parameter information of the surface of the real object that receives the shadow of the virtual object; and processing the proposed shadow information according to the parameter information of the surface to determine the shadow of the virtual object in the real scene. When the proposed shadow is projected on different real objects, the proposed shadow information is corrected according to the parameter information of the surface of each real object. Embodiments of the invention further provide a virtual object processing apparatus, a medium, and a computing device.

Description

Virtual object processing method, virtual object processing apparatus, medium, and computing device
Technical Field
The embodiment of the invention relates to the technical field of augmented reality, in particular to a virtual object processing method, a virtual object processing device, a medium and a computing device.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
Augmented reality technology combines a virtual world with a real scene by using the position and angle of a camera together with image analysis techniques. A camera generally provides basic functions such as video capture/transmission and still image capture; after the lens collects an image, the photosensitive component circuit and control component inside the camera process the image and convert it into a digital signal the computer can recognize, which is then input to the computer via a parallel port or USB connection and restored by software.
At present, technologies have appeared that enhance a real scene by fusing a virtual object from the virtual world into it; when fusing the virtual object into the real scene, the technical details of projecting the virtual object's shadow into the real scene must also be considered.
Disclosure of Invention
However, owing to technical limitations, when the shadow of a virtual object is projected into a real scene in the prior art, the projected shadow can generally be displayed only in planar form; as a result, the shadow lacks realism when displayed in the real scene, and the display effect is poor.
Therefore, an improved virtual object processing method is needed to make the shadow of the virtual object more consistent with the actual projection effect when the shadow is projected into the real scene.
In this context, embodiments of the present invention are intended to provide a virtual object processing method, and an apparatus, medium, and computing device thereof.
In a first aspect of embodiments of the present invention, a virtual object processing method is provided, including: obtaining proposed shadow information of a virtual object for an augmented reality scene; analyzing the real scene to determine a real object in the real scene for bearing the shadow of the virtual object; determining parameter information of the surface of the real object that receives the shadow of the virtual object; and processing the proposed shadow information according to the parameter information of the surface to determine the shadow of the virtual object in the real scene.
In an embodiment of the present invention, processing the proposed shadow information according to the parameter information of the surface to determine the shadow of the virtual object in the real scene includes: determining the geometry of the surface according to the parameter information of the surface; and deforming the first proposed shadow corresponding to the proposed shadow information according to the geometry of the surface to obtain a second proposed shadow, so as to determine the shadow of the virtual object in the real scene.
In another embodiment of the present invention, processing the proposed shadow information according to the parameter information of the surface to determine the shadow of the virtual object in the real scene further comprises: determining the material of the surface according to the parameter information of the surface; and performing color setting on the second proposed shadow according to the material of the surface to determine the shadow of the virtual object in the real scene; or performing color setting on the first proposed shadow according to the material of the surface to obtain a third proposed shadow, and combining the second proposed shadow and the third proposed shadow to determine the shadow of the virtual object in the real scene.
In yet another embodiment of the present invention, the obtaining proposed shadow information for a virtual object of an augmented reality scene comprises: determining main light source information in the real scene; pre-calculating the projection direction and the display area of the shadow of the virtual object in the real scene according to the main light source information; and determining the proposed shadow information according to the pre-calculation result of the shadow of the virtual object in the real scene.
In a further embodiment of the present invention, determining the main light source information in the real scene includes: acquiring a dynamic scene image obtained by shooting the real scene with a camera; and matching the dynamic scene image using a computer vision algorithm to determine the main light source information in the real scene.
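The patent does not disclose the specific matching algorithm. As a rough, illustrative stand-in (not the patented method), one could take the centroid of the brightest pixels in a grayscale camera frame as the image position of the main light source; every name below is invented for the sketch:

```python
# Naive main-light estimation: centroid of the brightest pixels in a grayscale
# frame (nested lists of values in [0, 1]). A real system would match the frame
# against illumination models with a computer vision algorithm.
def estimate_main_light(gray, threshold=0.9):
    peak = max(max(row) for row in gray)
    cutoff = peak * threshold
    xs, ys, n = 0.0, 0.0, 0
    for y, row in enumerate(gray):
        for x, value in enumerate(row):
            if value >= cutoff:          # pixel bright enough to count as "light"
                xs += x
                ys += y
                n += 1
    return (xs / n, ys / n) if n else None
```

From the estimated image position, the projection direction of the shadow could then be pre-computed as the method describes.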
In a further embodiment of the present invention, the analyzing the real scene to determine a real object in the real scene for bearing a shadow of the virtual object includes: analyzing the real scene and determining a local scene around the virtual object; determining a first real object contained in the local scene; and using the first real object as the real object for bearing the shadow of the virtual object.
In yet another embodiment of the present invention, before the first real object is used as the real object for bearing the shadow of the virtual object, the method further includes: judging whether the first real object is in the casting direction of the shadow of the virtual object; if it is, the first real object is used as the real object for bearing the shadow of the virtual object.
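The casting-direction check can be sketched as a simple angular test: the shadow is cast along the ray from the light through the virtual object, so a candidate object qualifies if it lies roughly along that ray. The 45-degree threshold and all names below are illustrative assumptions, not taken from the patent:

```python
import math

def in_casting_direction(virtual_pos, light_pos, candidate_pos, max_angle_deg=45.0):
    # Direction the shadow is cast: from the light through the virtual object.
    cast = tuple(v - l for v, l in zip(virtual_pos, light_pos))
    # Direction from the virtual object toward the candidate bearer.
    to_candidate = tuple(c - v for c, v in zip(candidate_pos, virtual_pos))
    dot = sum(a * b for a, b in zip(cast, to_candidate))
    n1 = math.sqrt(sum(a * a for a in cast))
    n2 = math.sqrt(sum(b * b for b in to_candidate))
    if n1 == 0 or n2 == 0:
        return False
    cos_angle = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for float safety
    return math.degrees(math.acos(cos_angle)) <= max_angle_deg
```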
In a second aspect of the embodiments of the present invention, there is provided a virtual object processing apparatus including an acquisition module, an analysis module, a determination module, and a processing module. The acquisition module is used for acquiring proposed shadow information of a virtual object for an augmented reality scene; the analysis module is used for analyzing the real scene to determine a real object in the real scene for bearing the shadow of the virtual object; the determination module is used for determining parameter information of the surface of the real object that receives the shadow of the virtual object; and the processing module is used for processing the proposed shadow information according to the parameter information of the surface so as to determine the shadow of the virtual object in the real scene.
In one embodiment of the present invention, the processing module includes a first determining unit and a deforming unit. The first determining unit is used for determining the geometry of the surface according to the parameter information of the surface; and the deforming unit is used for deforming the first proposed shadow corresponding to the proposed shadow information according to the geometry of the surface to obtain a second proposed shadow, so as to determine the shadow of the virtual object in the real scene.
In another embodiment of the present invention, the processing module further includes a second determining unit and either a first setting unit or a second setting unit. The second determining unit is used for determining the material of the surface according to the parameter information of the surface; the first setting unit is used for performing color setting on the second proposed shadow according to the material of the surface so as to determine the shadow of the virtual object in the real scene; or the second setting unit is used for performing color setting on the first proposed shadow according to the material of the surface to obtain a third proposed shadow, and combining the second proposed shadow and the third proposed shadow to determine the shadow of the virtual object in the real scene.
In another embodiment of the present invention, the above-mentioned obtaining module includes a third determining unit, a calculating unit and a fourth determining unit. The third determining unit is used for determining the main light source information in the real scene; the computing unit is used for pre-computing the projection direction and the display area of the shadow of the virtual object in the real scene according to the main light source information; and the fourth determining unit is used for determining the proposed shadow information according to the pre-calculation result of the shadow of the virtual object in the real scene.
In still another embodiment of the present invention, the third determining unit includes an acquisition subunit and a matching subunit. The acquisition subunit is used for acquiring a dynamic scene image obtained by shooting the real scene with a camera; and the matching subunit is used for matching the dynamic scene image using a computer vision algorithm so as to determine the main light source information in the real scene.
In yet another embodiment of the present invention, the above analysis module includes an analysis unit, a fifth determination unit, and a sixth determination unit. The analysis unit is used for analyzing the real scene and determining a local scene around the virtual object; the fifth determining unit is used for determining a first real object contained in the local scene; and a sixth determining unit configured to determine the first real object as the real object carrying the shadow of the virtual object.
In a further embodiment of the present invention, the apparatus further includes a determining module, configured to determine whether the first real object is in a casting direction of the shadow of the virtual object before the first real object is used as the real object for bearing the shadow of the virtual object, wherein if the first real object is in the casting direction of the shadow of the virtual object, the first real object is used as the real object for bearing the shadow of the virtual object.
In a third aspect of embodiments of the present invention, there is provided a medium storing computer-executable instructions that, when executed by a processing unit, are configured to implement a virtual object processing method as described above.
In a fourth aspect of embodiments of the present invention, there is provided a computing device comprising a processing unit and a storage unit. The storage unit stores computer-executable instructions which, when executed by the processing unit, are operable to implement the virtual object processing method as described above.
According to the virtual object processing method and apparatus of embodiments of the invention, when the proposed shadow is projected on different real objects, its display form is corrected according to the surface parameter information of the real object; for example, when the surface of the real object is curved, the display form is corrected so that the proposed shadow better fits the real scene when displayed on the curved surface. The method gives the shadow of the virtual object a stronger sense of reality when displayed in the real scene, significantly improving the display effect and bringing a better experience to users.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1A is a schematic diagram of an application scenario showing shadows in the related art;
FIG. 1B schematically illustrates an application scenario diagram according to an embodiment of the present invention;
FIG. 2 schematically shows a flow diagram of a virtual object processing method according to an embodiment of the invention;
FIG. 3 schematically illustrates a flow chart for processing proposed shadow information based on parametric information of a surface, in accordance with an embodiment of the present invention;
FIG. 4 schematically illustrates a flow diagram for processing proposed shadow information based on parametric information of a surface, in accordance with another embodiment of the present invention;
FIG. 5 schematically illustrates a flow diagram for processing proposed shadow information based on parametric information of a surface, in accordance with another embodiment of the present invention;
FIG. 6 schematically illustrates a flow diagram for obtaining proposed shadow information for a virtual object of an augmented reality scene according to an embodiment of the invention;
FIG. 7 schematically illustrates a flow chart for determining primary light source information in a real scene according to an embodiment of the invention;
FIG. 8 schematically illustrates a flow diagram for analyzing a real scene according to an embodiment of the invention;
FIG. 9 schematically shows a block diagram of a virtual object processing apparatus according to an embodiment of the present invention;
FIG. 10 schematically illustrates a program product for storing instructions for implementing virtual object processing according to an embodiment of the invention; and
fig. 11 schematically shows a block diagram of a computing device for implementing a virtual object processing method according to an embodiment of the present invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to the embodiment of the invention, a method, a medium, a device and a computing device for processing a virtual object are provided.
In this context, it is to be understood that mapping refers, in computer graphics, to wrapping a bitmap stored in memory onto the surface of a 3D-rendered object; texture mapping provides rich detail to the object and simulates a complex appearance in a simple manner. Rendering refers to the process of generating an image from a virtual object with a computer program. The photometric sphere method estimates the illumination in a real scene by placing a uniformly rough white sphere in the scene. Computer vision uses a camera and a computer, in place of human eyes, to identify, track, and measure targets, and further processes the images into forms more suitable for human observation or for transmission to instruments for detection.
Moreover, any number of elements in the drawings are by way of example and not by way of limitation, and any nomenclature is used solely for differentiation and not by way of limitation.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Summary of The Invention
The inventors found that shadows are currently cast in Augmented Reality (AR) in the following ways:
1. Baking shadows into the scene in advance. This is simple to implement, but the baked shadow is difficult to keep consistent with the real illumination in the scene, and the shadow can only be projected on a plane, so realism is lost when the shadow meets a curved bearing object.
2. Collecting scene light with a photometric sphere or image stitching to acquire the scene's illumination information. This improves realism compared with method 1, but requires preprocessing, such as photometry with an auxiliary tool like a photometric sphere, which is inconvenient; and the shadow can still only be cast on a plane.
Based on the above analysis, the inventors propose processing the shadow of the virtual object by analyzing the parameter information of the real object, so that the shadow better matches the real display effect when displayed on the real object. According to the technical concept of the present disclosure, when the proposed shadow is projected on different real objects, its representation is modified according to the surface parameter information of those objects; for example, when the surface of the real object is curved, the representation of the proposed shadow is modified so that it better fits the real scene when displayed on the curved surface. The method gives the shadow of the virtual object a stronger sense of reality when displayed in the real scene, significantly improving the display effect and bringing a better experience to users.
Having described the general principles of the invention, various non-limiting embodiments of the invention are described in detail below.
Application scene overview
First, referring to fig. 1A and 1B, an application scenario of a virtual object processing method and a virtual object processing apparatus according to the related art and the embodiment of the present invention will be described in detail.
Fig. 1A schematically illustrates a schematic diagram of an application scene showing shadows in the related art, as shown in fig. 1A, in the application scene, a real object 101, a virtual object 102, a tree 103, and a light source 104 are included. Under the illumination of the light source 104, the real object 101, the virtual object 102 and the tree 103 will generate dark parts, i.e. shadows, and in the related art, the shadows of the virtual object 102 penetrate through the real object 101 and can only be projected on a plane, which is different from the projection mode in real life.
Fig. 1B schematically shows an application scene according to an embodiment of the present invention. As shown in fig. 1B, when the virtual object 102 is close to the real object 101, part of the shadow of the virtual object 102 (for example, the shadow of its upper half in fig. 1B) is projected on the surface of the real object 101, while the shadow of its lower half is projected on the plane, which is consistent with how shadows are cast in real life.
It should be noted that the surface of the real object 101 in the application scene may be smooth and flat, or may be an irregular curved surface, and the application does not limit the attributes such as the shape and material of the real object 101, and may display a suitable shadow according to the real objects of different shapes and/or materials.
Exemplary method
A method of virtual object processing according to an exemplary embodiment of the present invention is described below with reference to fig. 2 in conjunction with the application scenario of fig. 1B. It should be noted that the above application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present invention, and the embodiments of the present invention are not limited in this respect. Rather, embodiments of the present invention may be applied to any scenario where applicable.
Fig. 2 schematically shows a flow chart of a virtual object processing method according to an embodiment of the invention.
As shown in fig. 2, the virtual object processing method according to an embodiment of the present invention includes operations S210 to S240.
In operation S210: proposed shadow information for a virtual object of an augmented reality scene is obtained.
In operation S220: the real scene is analyzed to determine real objects in the real scene that carry shadows of the virtual objects.
In operation S230: parameter information of a surface of the real object for receiving the shadow of the virtual object is determined.
In operation S240, the proposed shadow information is processed according to the parameter information of the surface to determine a shadow of the virtual object in the real scene.
According to an embodiment of the present disclosure, the virtual object may be an object in a space generated by computer simulation. The simulated virtual object may be an object that does not exist in the real scene, or one whose prototype can be found in the real scene. When the virtual object is added to a real scene with a light source, light is blocked by the opaque object and a dark region is produced; that is, a shadow is displayed. The proposed shadow information may include information such as the outline of the shadow and the size of its area, and the proposed shadow information corresponding to different virtual objects may differ.
According to an embodiment of the present disclosure, the real object is an object existing in the real scene, for example a stone, grass, or a metal object. When light is blocked by the virtual object, the real scene is analyzed to obtain the real object that bears the shadow. The surface parameter information of the real object characterizes its surface properties and may include, for example, roughness, metallicity, hardness, and glossiness. The proposed shadow information is processed based on the surface parameters of the real object so that the result matches the display effect in a real scene. For example, if the real object is a stone and the virtual object is a dinosaur, information such as the stone's color, hardness, and shape is obtained through analysis, and the dinosaur's proposed shadow information is processed based on the stone's parameter information, so that when the dinosaur's shadow is projected on the stone it better matches the actual projection effect.
In the related art, when the shadow of a virtual object is projected into a real scene it can only be cast on a plane, so realism is lost when the shadow meets bearing objects with other attributes, such as curved surfaces. According to embodiments of the present disclosure, however, when the proposed shadow is projected on different real objects, its representation is modified according to the surface parameter information of those objects; for example, when the surface of the real object is curved, the representation of the proposed shadow is modified so that it better fits the real scene when displayed on the curved surface. The method gives the shadow of the virtual object a stronger sense of reality when displayed in the real scene, significantly improving the display effect and bringing a better experience to users.
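Operations S210 to S240 can be loosely sketched as a two-stage pipeline. Everything below is an invented illustration: a ground-plane projection stands in for the proposed-shadow pre-computation, and a height function plus a material tag stand in for the surface parameter information:

```python
def propose_shadow(virtual_object, light):
    # S210 (sketch): project each sample point of the virtual object onto the
    # ground plane (y = 0) along the main light direction `light` = (lx, ly, lz).
    lx, ly, lz = light
    outline = []
    for (x, y, z) in virtual_object:
        t = y / ly if ly != 0 else 0.0    # parameter along the light ray to reach y = 0
        outline.append((x - t * lx, 0.0, z - t * lz))
    return outline

def process_shadow(outline, surface_params):
    # S240 (sketch): correct the proposed shadow with the receiving surface's
    # parameters: lift each sample to the surface height and pick a darkness
    # value from the (assumed) material tag.
    height = surface_params.get("height", lambda x, z: 0.0)
    darkness = 0.8 if surface_params.get("material") == "metal" else 0.5
    corrected = [(x, height(x, z), z) for (x, _, z) in outline]
    return corrected, darkness
```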
The method shown in fig. 2 is further described with reference to fig. 3-8 in conjunction with specific embodiments.
FIG. 3 schematically shows a flow chart for processing proposed shadow information based on parametric information of a surface, according to an embodiment of the invention.
As shown in fig. 3, according to an embodiment of the present disclosure, the processing of the proposed shadow information according to the parameter information of the surface to determine the shadow of the virtual object in the real scene includes operations S241 to S242. Wherein:
in operation S241, the geometry of the surface is determined according to the parameter information of the surface.
In operation S242, the first proposed shadow corresponding to the proposed shadow information is deformed according to the geometric shape of the surface to obtain a second proposed shadow, so as to determine a shadow of the virtual object in the real scene.
According to embodiments of the present disclosure, different real objects may have different geometries, both regular and irregular; for example, a regular geometry may be a cuboid or a sphere, while an irregular geometry may be an arbitrary polyhedron. It should be noted that the surface of the real object may also be uneven.
According to an embodiment of the present disclosure, when a shadow is cast on an elliptical surface, for example, the shadow is attached to that surface following its elliptical shape, so that the shadow has a different display effect than when cast on a plane.
According to the embodiment of the disclosure, because different objects have different geometries, the first proposed shadow is deformed according to the geometry of the object, so that the projected shadow better conforms to a real projection effect; this solves the problem in the related art that a shadow projected into an augmented reality scene can only be cast on a plane and not on a curved surface.
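The deformation step can be illustrated by lifting each sample of the planar first proposed shadow onto a curved surface. The elliptical bump below and its parameters (a, b, h) are invented for the example; the patent only requires that the surface geometry be used:

```python
import math

def deform_onto_ellipsoid(shadow_points, a=2.0, b=1.0, h=1.0):
    # Lift each planar shadow sample (x, z) onto an elliptical bump of
    # half-axes a, b and height h centered at the origin; points outside
    # the bump stay on the ground plane (y = 0).
    deformed = []
    for (x, z) in shadow_points:
        s = 1.0 - (x / a) ** 2 - (z / b) ** 2
        y = h * math.sqrt(s) if s > 0 else 0.0
        deformed.append((x, y, z))
    return deformed
```

The deformed point list is the second proposed shadow in the patent's terminology.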
FIG. 4 schematically shows a flow chart for processing proposed shadow information based on parametric information of a surface according to another embodiment of the invention. In this embodiment, operations S243 through S244 may be included in addition to operations S241 through S242 described above with reference to fig. 3. The description of operations S241 to S242 is omitted here for the sake of brevity of description.
As shown in fig. 4, according to the embodiment of the present disclosure, the processing the proposed shadow information according to the parameter information of the surface to determine the shadow of the virtual object in the real scene further includes operations S243 to S244. Wherein:
in operation S243, a material of the surface is determined according to the parameter information of the surface.
In operation S244, the second proposed shadow is color-set according to the material of the surface to determine the shadow of the virtual object in the real scene.
According to an embodiment of the present disclosure, the material information may include various properties, such as roughness, metallicity, hardness, and color, and different real objects may be made of different materials. For example, when the real object is a stone, its material may be silicate or quartz, and its color may be lighter or darker. Or, when the real object is a copper sheet, its material may be oxidized copper, and its color may be slightly bright or dark.
According to the embodiment of the disclosure, after the first proposed shadow corresponding to the proposed shadow information is deformed according to the geometry of the surface to obtain the second proposed shadow, color setting is performed on the second proposed shadow based on the material of the surface, further correcting the shadow. For example, when a shadow is attached to the surface of an oval stone, the color of the cast shadow is changed according to the stone's material information, so that the shadow is more realistic.
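A minimal sketch of the material-dependent color setting, assuming an invented per-material shadow-opacity table (the values and names are illustrative, not from the patent):

```python
# Hypothetical shadow opacities per material: higher means a darker shadow.
SHADOW_OPACITY = {"stone": 0.55, "copper": 0.70, "grass": 0.45}

def shade_color(surface_rgb, material):
    # Darken the surface color under the shadow in proportion to the
    # material's assumed opacity; unknown materials fall back to 0.5.
    k = 1.0 - SHADOW_OPACITY.get(material, 0.5)
    return tuple(round(c * k) for c in surface_rgb)
```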
The method deforms the first proposed shadow corresponding to the proposed shadow information according to the geometry of the surface, sets a suitable color according to the material information, and renders the result. After the information of the real object is obtained, the real object is modeled, the virtual model of the real object is rendered, and a depth map is attached to the surface of the virtual model. The texture coordinate of the virtual model is t = P·V·M·p, where M is the coordinate transformation matrix from the local space of the virtual model to world space, V is the coordinate transformation matrix from world space to the local space of the light, P is the projection matrix of the light source, and p is the position, in the model's local space, of any point on the surface of the virtual model. The depth value d is sampled from the depth map at t, and D is the distance from p to the center of the light. Comparing D with d: if D > d, the point is in shadow.
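The depth comparison t = P·V·M·p described above can be sketched as follows. The matrix layout (4x4 row-major nested lists), the `depth_at` lookup, and all helper names are illustrative assumptions:

```python
def mat_vec(m, v):
    # Multiply a 4x4 matrix (nested lists) by a length-4 vector.
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def in_shadow(p_local, M, V, P, depth_at, light_pos_world):
    # Transform the surface point into world space, then light space, then project.
    p_world = mat_vec(M, list(p_local) + [1.0])
    p_light = mat_vec(V, p_world)
    t = mat_vec(P, p_light)
    u, v = t[0] / t[3], t[1] / t[3]      # perspective divide -> shadow-map coords
    d = depth_at(u, v)                   # depth of the first occluder along this ray
    # D: distance from the point (world space) to the center of the light.
    D = sum((a - b) ** 2 for a, b in zip(p_world[:3], light_pos_world)) ** 0.5
    return D > d                         # farther than the occluder -> in shadow
```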
FIG. 5 schematically shows a flow chart for processing proposed shadow information based on parametric information of a surface according to another embodiment of the invention.
As shown in fig. 5, according to an embodiment of the present disclosure, the processing the proposed shadow information according to the parameter information of the surface to determine the shadow of the virtual object in the real scene further includes operations S245 to S246. Wherein:
in operation S245, a material of the surface is determined according to the parameter information of the surface.
In operation S246, the first proposed shadow is color-set according to the material of the surface to obtain a third proposed shadow, and the second proposed shadow and the third proposed shadow are combined to determine the shadow of the virtual object in the real scene.
According to an embodiment of the present disclosure, the process of determining the material of the surface may include the following steps: first, denoising, segmentation, and edge detection are performed on the real scene graph; then, straight lines in the image are detected to find sets of lines that may form a known geometry; the position, size, rotation, and other information of the actual object are calculated from the line sets; and finally, the closest material is matched using an image matching algorithm according to the texture information of the actual object.
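The final matching step can be illustrated with a toy stand-in. The intensity-histogram comparison below replaces a real texture-matching algorithm, and `material_db` and the bin count are invented for the example:

```python
import numpy as np

def closest_material(patch, material_db, bins=16):
    """Match a surface patch to the closest known material by texture statistics.

    patch: 2-D grayscale array cropped around the detected object.
    material_db: dict mapping a material name to a reference grayscale patch.
    Real systems would use richer texture descriptors (e.g. learned features).
    """
    def hist(img):
        h, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
        return h
    target = hist(patch)
    # pick the material whose intensity distribution is nearest (L1 distance)
    return min(material_db, key=lambda m: np.abs(hist(material_db[m]) - target).sum())
```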
According to an embodiment of the present disclosure, the first proposed shadow is directly color-set based on the material of the surface. For example, when the material of the real object surface is copper, the color of the first proposed shadow is set to a darker gray, obtaining the third proposed shadow. When the shadow is cast onto the surface of an oval real object, the shape of the shadow is changed according to the geometric shape information of the real object, deforming the first proposed shadow to obtain the second proposed shadow. Finally, the second proposed shadow and the third proposed shadow are considered together, so that the finally cast shadow is more realistic.
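Combining the two corrected shadows might look like the sketch below. Representing the second proposed shadow (geometry-corrected) as a coverage mask and the third proposed shadow (material-corrected) as a flat color is an illustrative simplification, not the patent's data model:

```python
import numpy as np

def combine_shadows(second_shadow_mask, third_shadow_color):
    """Combine the geometry-corrected mask with the material-corrected color.

    second_shadow_mask: float array in [0, 1], shape (H, W) -- deformed coverage.
    third_shadow_color: RGB triple chosen from the surface material.
    Returns an (H, W, 3) image of the final shadow.
    """
    color = np.asarray(third_shadow_color, dtype=float)
    return second_shadow_mask[..., None] * color    # per-pixel tint by coverage
```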
Through the embodiment of the disclosure, because the materials of different objects differ, the color of the shadow is adjusted according to the material information, so that the cast shadow matches the material of the object bearing it, improving the display effect of the cast shadow.
FIG. 6 schematically illustrates a flow chart for obtaining proposed shadow information for a virtual object of an augmented reality scene according to an embodiment of the invention.
As shown in fig. 6, according to an embodiment of the present disclosure, the acquiring proposed shadow information for a virtual object of an augmented reality scene includes operations S211 to S213.
In operation S211, main light source information in a real scene is determined.
In operation S212, the casting orientation and the display area of the shadow of the virtual object in the real scene are pre-calculated according to the main light source information.
In operation S213, the proposed shadow information is determined according to the pre-calculation result of the shadow of the virtual object in the real scene.
According to the embodiment of the present disclosure, the main light source information in the real scene may include light intensity information of the main light source, position information of the main light source, and other information, which may be obtained by analyzing picture information when a picture of the real scene is acquired, or may be light source information manually input. Specifically, the position information of the main light source can be obtained by matching the real scene image through a computer vision algorithm. According to an embodiment of the present disclosure, a shadow mapping method may be used to obtain the approximate orientation and area of the cast shadow: starting from the light source, a set of rays R is emitted, and for each ray r in R, the virtual object surface that first intersects r is found. The depths of all these first intersections form a depth map, after which any intersecting surface point farther from the light than the recorded depth is in shadow.
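The pre-calculation of the casting orientation and display area can be sketched for the simple case of a point light and a flat ground plane. This is an assumption-laden toy model (point-light geometry, ground plane at a fixed height), not the general shadow-mapping pass:

```python
import numpy as np

def project_shadow(light_pos, object_points, ground_z=0.0):
    """Pre-calculate where a point light casts an object's shadow on the
    plane z = ground_z, giving the footprint of the proposed shadow.

    Each object point q is extended along the ray light -> q until it hits
    the plane; points whose ray never hits the plane are skipped.
    """
    L = np.asarray(light_pos, float)
    hits = []
    for q in np.asarray(object_points, float):
        d = q - L                                   # ray direction from the light
        if abs(d[2]) < 1e-9:
            continue                                # ray parallel to the ground
        s = (ground_z - L[2]) / d[2]                # ray-plane intersection parameter
        if s > 0:
            hits.append(L + s * d)
    return np.array(hits)
```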
According to the embodiment of the disclosure, the shadow of the virtual object in the real scene can be preliminarily determined according to the main light source information, so that the shadow of the virtual object to be projected basically conforms to the real scene, and a method for determining the shadow is provided.
Fig. 7 schematically shows a flow chart for determining primary light source information in a real scene according to an embodiment of the invention.
As shown in fig. 7, the determining of the main light source information in the real scene according to the embodiment of the present disclosure includes operations S2111 to S2112.
In operation S2111, a dynamic scene graph obtained by capturing a real scene through a camera is acquired.
In operation S2112, the dynamic scene graph is matched using a computer vision algorithm to determine the main light source information in the real scene.
According to an embodiment of the present disclosure, a large number of sets of real scene pictures may be stored in the database, and these real scene pictures all carry the primary light source information at the time of acquisition. And finding the picture which is closest to the dynamic scene picture obtained by shooting the real scene through the camera in the picture set of the real scene by using an image matching algorithm, such as an image matching algorithm based on the feature points. The main light source information in the real scene can be determined according to the information carried by the picture.
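The database lookup can be sketched as a nearest-neighbor match over scene descriptors. The flat feature vectors below stand in for real feature-point matching, and the `database` layout is invented for the example:

```python
import numpy as np

def estimate_main_light(scene_descriptor, database):
    """Look up main-light-source info by matching the captured scene against
    a database of reference scene pictures.

    database: list of (descriptor, light_info) pairs; each descriptor is a
    feature vector extracted from a stored real-scene picture.
    Returns the light_info carried by the closest reference picture.
    """
    scene = np.asarray(scene_descriptor, float)
    best = min(database,
               key=lambda entry: np.linalg.norm(np.asarray(entry[0], float) - scene))
    return best[1]
```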
According to the embodiment of the disclosure, the shot dynamic scene graph is matched through a computer vision algorithm, and the method for accurately determining the main light source information in the real scene is provided.
Fig. 8 schematically shows a flow chart for analyzing a real scene according to an embodiment of the invention.
As shown in fig. 8, according to an embodiment of the present disclosure, analyzing the real scene to determine a real object for carrying a shadow of the virtual object in the real scene may include operations S221 to S223.
In operation S221, the real scene is analyzed, and a local scene around the virtual object is determined.
In operation S222, a first real object included in the local scene is determined.
In operation S223, the first real object is treated as a real object for bearing a shadow of the virtual object.
According to the embodiment of the present disclosure, the local scene area around the virtual object is related to the number of objects existing in the real scene, for example, when the number of objects existing in the real scene is large, that is, when there are many objects that can be used for bearing shadows, one of the real objects in the predetermined area around the virtual object may be used as the first real object. When the number of objects existing in the real scene is small, any one real object in the real scene may be used as the first real object.
According to the embodiment of the disclosure, in the real scene, the real object in the local scene around the virtual object can be used for bearing the shadow of the virtual object, and the real object in the local scene is analyzed, so that the calculation amount of the system in shadow casting is reduced, and the casting efficiency is improved.
According to an embodiment of the present disclosure, before the first real object is taken as the real object for carrying the shadow of the virtual object, the virtual object processing method further includes determining whether the first real object is in a casting direction of the shadow of the virtual object, wherein if the first real object is in the casting direction of the shadow of the virtual object, the first real object is taken as the real object for carrying the shadow of the virtual object. If the first real object is not in the casting direction of the shadow of the virtual object, the first real object needs to be re-determined, and the re-determined first real object is taken as the real object for carrying the shadow of the virtual object.
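One way to sketch the casting-direction check is an angular test against the ray from the light through the virtual object. The point-light model and the `cos_tol` threshold are illustrative assumptions:

```python
import numpy as np

def in_casting_direction(light_pos, virtual_pos, object_pos, cos_tol=0.7):
    """Check whether a candidate real object lies in the shadow's casting
    direction: the ray from the light through the virtual object.

    Accepts the object if its direction from the virtual object is within
    the angular tolerance of that ray.
    """
    L, v, o = (np.asarray(x, float) for x in (light_pos, virtual_pos, object_pos))
    cast = v - L                                    # direction light -> virtual object
    to_obj = o - v                                  # direction virtual object -> candidate
    denom = np.linalg.norm(cast) * np.linalg.norm(to_obj)
    if denom == 0:
        return False                                # degenerate configuration
    return float(cast @ to_obj / denom) >= cos_tol  # cosine of angle between rays
```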
By the embodiment of the disclosure, whether the real object is in the casting direction of the shadow can be judged, so that the situation in which the determined real object cannot bear the shadow of the virtual object is avoided, the shadow is prevented from being wrongly cast on a real object in a non-casting direction, and the realism of the casting is improved.
Exemplary devices
Having described the medium of an exemplary embodiment of the present invention, a virtual object processing apparatus of an exemplary embodiment of the present invention will next be described with reference to fig. 9.
Fig. 9 schematically shows a block diagram of a virtual object processing apparatus according to an embodiment of the present invention.
As shown in fig. 9, the virtual object processing apparatus 900 includes an obtaining module 910, an analyzing module 920, a determining module 930, and a processing module 940.
The obtaining module 910 is configured to obtain proposed shadow information for a virtual object of an augmented reality scene.
The analysis module 920 is configured to analyze the real scene to determine a real object in the real scene for carrying a shadow of the virtual object.
The determining module 930 is for determining parameter information of a surface of the real object for receiving the shadow of the virtual object.
The processing module 940 is configured to process the proposed shadow information according to the parameter information of the surface to determine a shadow of the virtual object in the real scene.
In the related art, when the shadow of a virtual object is projected into a real scene, it can only be projected onto a plane; if it meets a bearing object with different attribute information, such as a curved surface, realism is lost. According to the embodiment of the disclosure, when the shadow is to be cast on different real objects, the display form of the proposed shadow is corrected according to the surface parameter information of the real object; for example, when the surface of the real object is curved, the display form of the proposed shadow is corrected so that it better fits the real scene when displayed on the curved surface. The method of the invention gives the shadow of the virtual object a stronger sense of reality when displayed in the real scene, significantly improving the display effect of the shadow and bringing a better experience to users.
According to an embodiment of the present disclosure, the processing module 940 includes a first determining unit and a deforming unit. The first determining unit is configured to determine the geometric shape of the surface according to the parameter information of the surface; and the deforming unit is configured to deform the first proposed shadow corresponding to the proposed shadow information according to the geometric shape of the surface to obtain a second proposed shadow, so as to determine the shadow of the virtual object in the real scene.
According to the embodiment of the invention, because different objects have different geometric shapes, the first proposed shadow is deformed according to the geometric shape of the object, so that the cast shadow better matches the real projection effect, solving the problem that, when a shadow is cast in an augmented reality scene, it can only be cast on a plane and not on a curved surface.
According to an embodiment of the present disclosure, the processing module 940 further includes a second determining unit, a first setting unit, or a second setting unit. The second determining unit is configured to determine the material of the surface according to the parameter information of the surface; the first setting unit is configured to perform color setting on the second proposed shadow according to the material of the surface, so as to determine the shadow of the virtual object in the real scene; or the second setting unit is configured to perform color setting on the first proposed shadow according to the material of the surface to obtain a third proposed shadow, and combine the second proposed shadow and the third proposed shadow to determine the shadow of the virtual object in the real scene.
Through the embodiment of the disclosure, because the materials of different objects differ, the color of the shadow is adjusted according to the material information, so that the cast shadow matches the material of the object bearing it, improving the display effect of the cast shadow.
According to an embodiment of the present disclosure, the obtaining module 910 includes a third determining unit, a calculating unit, and a fourth determining unit. The third determining unit is used for determining main light source information in a real scene; the computing unit is used for pre-computing the projection direction and the display area of the shadow of the virtual object in the real scene according to the main light source information; and the fourth determining unit is used for determining the proposed shadow information according to the pre-calculation result of the shadow of the virtual object in the real scene.
According to the embodiment of the disclosure, the shadow of the virtual object in the real scene can be preliminarily determined according to the main light source information, so that the shadow of the virtual object to be projected basically conforms to the real scene, and a method for accurately determining the shadow is provided.
According to an embodiment of the present disclosure, the third determining unit includes an obtaining subunit and a matching subunit. The acquisition subunit is used for acquiring a dynamic scene graph obtained by shooting a real scene through a camera; and the matching subunit is used for matching the dynamic scene graph by using a computer vision algorithm so as to determine the main light source information in the real scene.
According to the embodiment of the disclosure, the shot dynamic scene graph is matched through a computer vision algorithm, and the method for accurately determining the main light source information in the real scene is provided.
According to an embodiment of the present disclosure, the analysis module 920 includes an analysis unit, a fifth determination unit, and a sixth determination unit. The analysis unit is used for analyzing the real scene and determining a local scene around the virtual object; the fifth determining unit is used for determining a first real object contained in the local scene; and a sixth determining unit for taking the first real object as a real object for carrying a shadow of the virtual object.
According to the embodiment of the disclosure, in the real scene, the real object in the local scene around the virtual object can be used for bearing the shadow of the virtual object, and the real object in the local scene is analyzed, so that the calculation amount of the system in shadow casting is reduced, and the casting efficiency is improved.
According to an embodiment of the present disclosure, the virtual object processing apparatus 900 further includes a determining module for determining whether the first real object is in a casting direction of the shadow of the virtual object before the first real object is used as the real object for bearing the shadow of the virtual object, wherein if the first real object is in the casting direction of the shadow of the virtual object, the first real object is used as the real object for bearing the shadow of the virtual object.
By the embodiment of the disclosure, whether the real object is in the casting direction of the shadow can be judged, so that the situation in which the determined real object cannot bear the shadow of the virtual object is avoided, the shadow is prevented from being wrongly cast on a real object in a non-casting direction, and the realism of the casting is improved.
The virtual object processing apparatus 900 can be used to implement the methods shown with reference to fig. 2 to 8.
Exemplary Medium
Having described the method of the exemplary embodiment of the present invention, a medium of the exemplary embodiment of the present invention is next described with reference to fig. 10; the medium stores computer-executable instructions that, when executed by a processing unit, implement the virtual object processing method of fig. 2 to 8.
In some possible embodiments, aspects of the present invention may also be implemented in the form of a program product including program code for causing a computing device to perform the steps in the virtual object processing method according to various exemplary embodiments of the present invention described in the above section "exemplary method" of this specification, when the program product is run on the computing device, for example, the computing device may perform operation S210 as shown in fig. 2: obtaining proposed shadow information of a virtual object for an augmented reality scene; operation S220: analyzing the real scene to determine a real object in the real scene for bearing a shadow of the virtual object; operation S230: determining parameter information of a surface of the real object for receiving a shadow of the virtual object; and operation S240 of processing the proposed shadow information according to the parameter information of the surface to determine a shadow of the virtual object in the real scene.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As shown in fig. 10, a program product 100 for virtual object processing according to an embodiment of the present invention is depicted, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a computing device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Exemplary computing device
Having described the methods, media, and apparatus of the exemplary embodiments of this invention, reference is next made to fig. 11, which illustrates a computing device of an exemplary embodiment of this invention that includes a processing unit and a storage unit, the storage unit storing computer-executable instructions that, when executed by the processing unit, implement the virtual object processing methods of fig. 2-8.
The embodiment of the invention also provides the computing device. As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
In some possible embodiments, a computing device according to the present invention may include at least one processing unit, and at least one memory unit. Wherein the storage unit stores program code which, when executed by the processing unit, causes the processing unit to perform the steps in the information presentation methods according to various exemplary embodiments of the present invention described in the above section "exemplary methods" of this specification. For example, the processing unit may perform operation S210 as shown in fig. 2: obtaining proposed shadow information of a virtual object for an augmented reality scene; operation S220: analyzing the real scene to determine a real object in the real scene for bearing a shadow of the virtual object; operation S230: determining parameter information of a surface of the real object for receiving a shadow of the virtual object; and operation S240 of processing the proposed shadow information according to the parameter information of the surface to determine a shadow of the virtual object in the real scene.
A computing device 110 for virtual object processing according to this embodiment of the invention is described below with reference to fig. 11. The computing device 110 shown in FIG. 11 is only one example and should not impose any limitations on the functionality or scope of use of embodiments of the present invention.
As shown in fig. 11, computing device 110 is embodied in the form of a general purpose computing device. Components of computing device 110 may include, but are not limited to: the at least one processing unit 1101, the at least one memory unit 1102, and a bus 1103 connecting different system components (including the memory unit 1102 and the processing unit 1101).
Bus 1103 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
The storage unit 1102 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)11021 and/or cache memory 11022, and may further include Read Only Memory (ROM) 11023.
Storage unit 1102 may also include a program/utility 110211 having a set (at least one) of program modules 11024, such program modules 11024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Computing device 110 may also communicate with one or more external devices 1104 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with computing device 110, and/or with any devices (e.g., router, modem, etc.) that enable computing device 110 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 11011. Also, the computing device 110 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 1106. As shown, the network adapter 1106 communicates with other modules of the computing device 110 over the bus 1103. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the computing device 110, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
It should be noted that although in the above detailed description several units/modules or sub-units/modules of the apparatus are mentioned, such a division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more of the units/modules described above may be embodied in one unit/module according to embodiments of the invention. Conversely, the features and functions of one unit/module described above may be further divided into embodiments by a plurality of units/modules.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, nor is the division of aspects, which is for convenience only as the features in such aspects may not be combined to benefit. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (12)

1. A virtual object processing method, comprising:
obtaining proposed shadow information of a virtual object for an augmented reality scene;
analyzing the real scene to determine a real object in the real scene for carrying a shadow of the virtual object;
determining parameter information of a surface of the real object for receiving a shadow of the virtual object; and
processing the proposed shadow information according to the parameter information of the surface to determine the shadow of the virtual object in the real scene;
wherein processing the proposed shadow information according to the parametric information of the surface to determine the shadow of the virtual object in the real scene comprises:
determining the geometric shape of the surface according to the parameter information of the surface; and
deforming a first proposed shadow corresponding to the proposed shadow information according to the geometric shape of the surface to obtain a second proposed shadow so as to determine the shadow of the virtual object in the real scene; and
determining the material of the surface according to the parameter information of the surface;
performing color setting on the second proposed shadow according to the material of the surface to determine the shadow of the virtual object in the real scene; or
performing color setting on the first proposed shadow according to the material of the surface to obtain a third proposed shadow, and combining the second proposed shadow and the third proposed shadow to determine the shadow of the virtual object in the real scene.
2. The method of claim 1, wherein obtaining proposed shadow information for a virtual object of an augmented reality scene comprises:
determining primary light source information in the real scene;
pre-calculating the projection orientation and the display area of the shadow of the virtual object in the real scene according to the main light source information; and
determining the proposed shadow information according to a pre-calculation result of the shadow of the virtual object in the real scene.
3. The method of claim 2, wherein determining primary light source information in the real scene comprises:
acquiring a dynamic scene graph obtained by shooting the real scene through a camera; and
matching the dynamic scene graph by using a computer vision algorithm to determine the main light source information in the real scene.
4. The method of claim 1, wherein analyzing the real scene to determine real objects in the real scene that carry shadows of the virtual objects comprises:
analyzing the real scene and determining a local scene around the virtual object;
determining a first real object contained in the local scene; and
using the first real object as the real object for carrying the shadow of the virtual object.
5. The method of claim 4, wherein prior to treating the first real object as the real object for carrying shadows of the virtual object, the method further comprises:
determining whether the first real object is in a casting direction of a shadow of the virtual object,
wherein if the first real object is in a casting direction of the shadow of the virtual object, the first real object is taken as the real object for carrying the shadow of the virtual object.
6. A virtual object processing apparatus, comprising:
an obtaining module for obtaining proposed shadow information of a virtual object for an augmented reality scene;
an analysis module for analyzing the real scene to determine a real object in the real scene for carrying a shadow of the virtual object;
a determining module for determining parameter information of a surface of the real object for receiving the shadow of the virtual object; and
a processing module for processing the proposed shadow information according to the parameter information of the surface to determine the shadow of the virtual object in the real scene;
wherein the processing module comprises:
a first determining unit, configured to determine a geometric shape of the surface according to parameter information of the surface; and
a deformation unit, configured to deform a first proposed shadow corresponding to the proposed shadow information according to a geometric shape of the surface to obtain a second proposed shadow, so as to determine a shadow of the virtual object in the real scene;
the second determining unit is used for determining the material of the surface according to the parameter information of the surface;
a first setting unit, configured to perform color setting on the second proposed shadow according to a material of the surface to determine a shadow of the virtual object in the real scene; or
a second setting unit, configured to perform color setting on the first proposed shadow according to the material of the surface to obtain a third proposed shadow, so as to combine the second proposed shadow and the third proposed shadow to determine the shadow of the virtual object in the real scene.
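The processing module of claim 6 does two things: it deforms the flat proposed shadow to follow the receiving surface's geometry, and it sets the shadow's color from the surface material. A toy sketch of both steps, with the surface given as a height function and the material reduced to an RGB color (all names and the attenuation factor are illustrative assumptions):

```python
def deform_shadow(footprint, surface_height):
    """Deform a flat proposed shadow footprint onto the receiving surface
    by lifting every (x, y) sample to the surface's local height,
    yielding the 'second proposed shadow' of the claim."""
    return [(x, y, surface_height(x, y)) for (x, y) in footprint]

def shadow_color(surface_rgb, shadow_strength=0.6):
    """Derive a shadow color from the surface material by attenuating the
    surface's own color; bright materials thus show a more visible
    shadow than already-dark ones."""
    return tuple(round(c * (1.0 - shadow_strength)) for c in surface_rgb)
```

For example, an inclined plane `z = 0.5x` lifts each shadow sample proportionally to its x coordinate, and a mid-gray material yields a half-intensity shadow tint.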
7. The apparatus of claim 6, wherein the acquisition module comprises:
a third determining unit, configured to determine main light source information in the real scene;
a computing unit, configured to pre-compute the projection direction and the display area of the shadow of the virtual object in the real scene according to the main light source information; and
a fourth determining unit, configured to determine the proposed shadow information according to the pre-computation result of the shadow of the virtual object in the real scene.
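Claim 7's pre-computation of the projection direction and display area can be pictured as casting each vertex of the virtual object along the light's travel direction onto a ground plane and bounding the result. The sketch below assumes a ground plane at z = 0 and that `light_dir` is the direction light travels; the patent does not specify these conventions:

```python
def project_shadow_footprint(vertices, light_dir):
    """Pre-compute where a virtual object's shadow falls on the ground
    plane z = 0 by casting a ray from each vertex along light_dir
    (the direction light travels; its z component must be negative)."""
    lx, ly, lz = light_dir
    if lz >= 0:
        raise ValueError("light must travel downward to reach the ground plane")
    footprint = []
    for (x, y, z) in vertices:
        t = -z / lz                      # ray parameter where z reaches 0
        footprint.append((x + t * lx, y + t * ly))
    return footprint

def footprint_bounds(footprint):
    """Axis-aligned display area enclosing the projected shadow."""
    xs = [p[0] for p in footprint]
    ys = [p[1] for p in footprint]
    return (min(xs), min(ys), max(xs), max(ys))
```

A light travelling down and to the right, for instance, pushes the footprint of higher vertices further along +x, stretching the display area in that direction.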
8. The apparatus of claim 7, wherein the third determining unit comprises:
an acquiring subunit, configured to acquire a dynamic scene image captured of the real scene by a camera; and
a matching subunit, configured to match the dynamic scene image using a computer vision algorithm to determine the main light source information in the real scene.
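Claim 8 leaves the computer vision algorithm unspecified. One crude stand-in is to locate the dominant light source in a camera frame as the centroid of the brightest pixels; the function name, the `frac` cutoff, and the nested-list grayscale representation are all illustrative assumptions:

```python
def estimate_light_centroid(gray, frac=0.9):
    """Estimate the main light source position in a grayscale frame as the
    centroid of pixels whose brightness is within frac of the peak.
    gray is a list of rows of integer intensities."""
    peak = max(max(row) for row in gray)
    thresh = peak * frac
    sx = sy = n = 0
    for y, row in enumerate(gray):
        for x, v in enumerate(row):
            if v >= thresh:
                sx += x
                sy += y
                n += 1
    return (sx / n, sy / n)
```

Run over successive frames of the dynamic scene image, such an estimate can track a moving highlight; a real implementation would smooth the result and reject specular outliers.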
9. The apparatus of claim 6, wherein the analysis module comprises:
an analysis unit, configured to analyze the real scene and determine a local scene around the virtual object;
a fifth determining unit, configured to determine a first real object included in the local scene; and
a sixth determining unit, configured to use the first real object as the real object for carrying the shadow of the virtual object.
10. The apparatus of claim 9, wherein the apparatus further comprises:
a judging module, configured to determine, before the first real object is used as the real object for carrying the shadow of the virtual object, whether the first real object is in the casting direction of the shadow of the virtual object,
wherein if the first real object is in the casting direction of the shadow of the virtual object, the first real object is used as the real object for carrying the shadow of the virtual object.
11. A medium storing computer-executable instructions for implementing the virtual object processing method of any one of claims 1 to 5 when executed by a processing unit.
12. A computing device, comprising:
a processing unit; and
a storage unit storing computer-executable instructions for implementing the virtual object processing method of any one of claims 1 to 5 when executed by the processing unit.
CN201711219793.6A 2017-11-28 2017-11-28 Virtual object processing method, virtual object processing apparatus, medium, and computing device Active CN108010118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711219793.6A CN108010118B (en) 2017-11-28 2017-11-28 Virtual object processing method, virtual object processing apparatus, medium, and computing device


Publications (2)

Publication Number Publication Date
CN108010118A CN108010118A (en) 2018-05-08
CN108010118B true CN108010118B (en) 2021-11-30

Family

ID=62054531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711219793.6A Active CN108010118B (en) 2017-11-28 2017-11-28 Virtual object processing method, virtual object processing apparatus, medium, and computing device

Country Status (1)

Country Link
CN (1) CN108010118B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI669682B (en) * 2018-05-25 2019-08-21 光寶電子(廣州)有限公司 Image processing system and image processing method
WO2020019132A1 (en) * 2018-07-23 2020-01-30 太平洋未来科技(深圳)有限公司 Method and apparatus for rendering virtual object on the basis of light information, and electronic device
WO2020019133A1 (en) * 2018-07-23 2020-01-30 太平洋未来科技(深圳)有限公司 Method and device for determining shadow effect and electronic device
CN111340931A (en) * 2020-02-17 2020-06-26 广州虎牙科技有限公司 Scene processing method and device, user side and storage medium
CN111462295B (en) * 2020-03-27 2023-09-05 咪咕文化科技有限公司 Shadow processing method, device and storage medium in augmented reality shooting
CN111476877B (en) * 2020-04-16 2024-01-26 网易(杭州)网络有限公司 Shadow rendering method and device, electronic equipment and storage medium
CN111833283B (en) * 2020-06-23 2024-02-23 维沃移动通信有限公司 Data processing method and device and electronic equipment
CN111815750A (en) * 2020-06-30 2020-10-23 深圳市商汤科技有限公司 Method and device for polishing image, electronic equipment and storage medium
CN113379884B (en) * 2021-07-05 2023-11-17 北京百度网讯科技有限公司 Map rendering method, map rendering device, electronic device, storage medium and vehicle
CN116402980A (en) * 2021-12-28 2023-07-07 北京字跳网络技术有限公司 Virtual fluff generation method, device, equipment, medium and product

Citations (7)

Publication number Priority date Publication date Assignee Title
WO2009040093A1 (en) * 2007-09-25 2009-04-02 Metaio Gmbh Method and device for illustrating a virtual object in a real environment
CN101663692A (en) * 2007-03-01 2010-03-03 弗罗斯特普斯私人有限公司 Method of creation of a virtual three dimensional image to enable its reproduction on planar substrates
RU2433487C2 (en) * 2009-08-04 2011-11-10 Леонид Михайлович Файнштейн Method of projecting image on surfaces of real objects
WO2016019014A1 (en) * 2014-07-29 2016-02-04 LiveLocation, Inc. 3d-mapped video projection based on on-set camera positioning
EP3166078A1 (en) * 2015-11-06 2017-05-10 Samsung Electronics Co., Ltd. 3d graphic rendering method and apparatus
CN107025683A (en) * 2017-03-30 2017-08-08 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN107330964A (en) * 2017-07-24 2017-11-07 广东工业大学 A kind of display methods and system of complex three-dimensional object

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
JP4948218B2 (en) * 2007-03-22 2012-06-06 キヤノン株式会社 Image processing apparatus and control method thereof
US9652892B2 (en) * 2013-10-29 2017-05-16 Microsoft Technology Licensing, Llc Mixed reality spotlight
CN105844714A (en) * 2016-04-12 2016-08-10 广州凡拓数字创意科技股份有限公司 Augmented reality based scenario display method and system
CN107393017A (en) * 2017-08-11 2017-11-24 北京铂石空间科技有限公司 Image processing method, device, electronic equipment and storage medium

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
CN101663692A (en) * 2007-03-01 2010-03-03 弗罗斯特普斯私人有限公司 Method of creation of a virtual three dimensional image to enable its reproduction on planar substrates
WO2009040093A1 (en) * 2007-09-25 2009-04-02 Metaio Gmbh Method and device for illustrating a virtual object in a real environment
RU2433487C2 (en) * 2009-08-04 2011-11-10 Леонид Михайлович Файнштейн Method of projecting image on surfaces of real objects
WO2016019014A1 (en) * 2014-07-29 2016-02-04 LiveLocation, Inc. 3d-mapped video projection based on on-set camera positioning
EP3166078A1 (en) * 2015-11-06 2017-05-10 Samsung Electronics Co., Ltd. 3d graphic rendering method and apparatus
CN106683199A (en) * 2015-11-06 2017-05-17 三星电子株式会社 3D graphic rendering method and apparatus
CN107025683A (en) * 2017-03-30 2017-08-08 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN107330964A (en) * 2017-07-24 2017-11-07 广东工业大学 A kind of display methods and system of complex three-dimensional object

Non-Patent Citations (2)

Title
Shadow Rendering for Mesostructure Surface Based on Height Gradient Map; Ma Zhiqiang et al.; 2010 3rd International Conference on Computer Science and Information Technology; 2010-12-31; pp. 110-114 *
Research on Key Technologies of Mobile Augmented Reality Systems; Lin Liang et al.; Journal of Image and Graphics; 2009-03-31; Vol. 14, No. 3; pp. 560-564 *

Also Published As

Publication number Publication date
CN108010118A (en) 2018-05-08

Similar Documents

Publication Publication Date Title
CN108010118B (en) Virtual object processing method, virtual object processing apparatus, medium, and computing device
US11132543B2 (en) Unconstrained appearance-based gaze estimation
US9082213B2 (en) Image processing apparatus for combining real object and virtual object and processing method therefor
KR101885090B1 (en) Image processing apparatus, apparatus and method for lighting processing
EP3561777B1 (en) Method and apparatus for processing a 3d scene
JP2013050947A (en) Method for object pose estimation, apparatus for object pose estimation, method for object estimation pose refinement and computer readable medium
CN114820906B (en) Image rendering method and device, electronic equipment and storage medium
CN110503711B (en) Method and device for rendering virtual object in augmented reality
US20170064284A1 (en) Producing three-dimensional representation based on images of a person
US20220005257A1 (en) Adaptive ray tracing suitable for shadow rendering
US11823321B2 (en) Denoising techniques suitable for recurrent blurs
CN112562056A (en) Control method, device, medium and equipment for virtual light in virtual studio
CN114638950A (en) Method and equipment for drawing virtual object shadow
CN110084873B (en) Method and apparatus for rendering three-dimensional model
CN112734896A (en) Environment shielding rendering method and device, storage medium and electronic equipment
Alhakamy et al. Real-time illumination and visual coherence for photorealistic augmented/mixed reality
EP4027302A1 (en) Light importance caching using spatial hashing in real-time ray tracing applications
CN111696163A (en) Synthetic infrared image generation for gaze estimation machine learning
US10212406B2 (en) Image generation of a three-dimensional scene using multiple focal lengths
US20230351555A1 (en) Using intrinsic functions for shadow denoising in ray tracing applications
CN110310336B (en) Touch projection system and image processing method
CN109166176B (en) Three-dimensional face image generation method and device
US11836844B2 (en) Motion vector optimization for multiple refractive and reflective interfaces
CN112819929B (en) Water surface rendering method and device, electronic equipment and storage medium
US20210049806A1 (en) Ray-tracing for auto exposure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190618

Address after: Room 102, Block 6, Zone C, Qianjiang Century Park, Xiaoshan District, Hangzhou, Zhejiang Province, 311200

Applicant after: Hangzhou Yixian Advanced Technology Co., Ltd.

Address before: Floors 4 and 7, Building No. 599, Network Business Road, Changhe Street, Binjiang District, Hangzhou, Zhejiang Province, 310052

Applicant before: NetEase (Hangzhou) Network Co., Ltd.

GR01 Patent grant