CN114663632A - Method and equipment for displaying virtual object by illumination based on spatial position


Info

Publication number
CN114663632A
CN114663632A
Authority
CN
China
Prior art keywords
virtual object
point cloud
cloud data
illumination
color information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210206906.3A
Other languages
Chinese (zh)
Inventor
孟亚州
黄萌瑶
郝冬宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202210206906.3A priority Critical patent/CN114663632A/en
Publication of CN114663632A publication Critical patent/CN114663632A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to the technical field of AR and provides a method and equipment for displaying a virtual object by illumination based on a spatial position. When a virtual object is placed on a real picture, a three-dimensional point cloud data set corresponding to the real picture is constructed, and the two-dimensional coordinates of the target position where the virtual object is to be placed are converted into three-dimensional coordinates. A target point cloud data set contained in a cuboid centered on the three-dimensional coordinates, with preset length, width and height as its sides, is obtained from the three-dimensional point cloud data set, and an illumination information graph is determined from the target point cloud data set. This graph accurately reflects the illumination at the target position in the real environment, so rendering the virtual object placed at the target position according to it makes the surface illumination of the virtual object consistent with the real environment. The virtual object is thus realistically superimposed on the real picture, which improves the realism of virtual-real fusion and thereby the AR experience.

Description

Method and equipment for displaying virtual object by illumination based on spatial position
Technical Field
The present disclosure relates to the field of Augmented Reality (AR) technologies, and in particular, to a method and an apparatus for displaying a virtual object by illumination based on a spatial location.
Background
The AR technology is a new technology developed on the basis of virtual reality. It augments a user's perception of the real world with information provided by a computer system, superimposing computer-generated virtual objects, virtual scenes, system prompt information, non-geometric information about real objects, and the like onto the real scene, thereby "augmenting" the real world.
In an AR experience, users perceive illumination subtly, so illumination consistency is generally used as an important index of virtual-real fusion. Illumination consistency means giving the virtual object the same illumination effect as the real scene, i.e., the illumination of the virtual object matches the illumination in the real scene and the virtual and real objects show consistent light and shade, which enhances the realism of the virtual object.
At present, most related technologies use a deep learning model to estimate illumination information from an input RGB image. Because an RGB image is two-dimensional, such methods cannot accurately estimate the illumination information in 3D space.
Disclosure of Invention
The embodiment of the application provides a method and equipment for displaying a virtual object by illumination based on a spatial position, which are used for improving the illumination consistency of the virtual object and a real environment.
In a first aspect, an embodiment of the present application provides a method for displaying a virtual object by illumination based on a spatial position, which is applied to an AR scene, and includes:
responding to the operation of placing a virtual object on a real picture, and constructing a three-dimensional point cloud data set corresponding to the real picture by adopting a simultaneous localization and mapping (SLAM) technique;
determining three-dimensional coordinates of a target position where the virtual object is placed in a three-dimensional space;
acquiring, from the three-dimensional point cloud data set, a target point cloud data set contained in a cuboid which takes the three-dimensional coordinates as its center and preset length, width and height as its sides;
determining an illumination information graph corresponding to the target position according to the target point cloud data set;
and rendering the virtual object placed at the target position according to the illumination information graph, and displaying the virtual object on the real picture in an overlapping manner.
In a second aspect, an embodiment of the present application provides a display device, which supports an Augmented Reality (AR) function, and includes a processor, a memory, a camera, and a display screen, where the display screen, the camera, the memory, and the processor are connected by a bus;
the camera is used for collecting real pictures;
the memory stores a computer program, and the processor performs the following operations according to the computer program:
responding to the operation of placing a virtual object on a real picture displayed by the display screen, and constructing a three-dimensional point cloud data set corresponding to the real picture by adopting a simultaneous localization and mapping (SLAM) technique;
determining three-dimensional coordinates of a target position where the virtual object is placed in a three-dimensional space;
acquiring, from the three-dimensional point cloud data set, a target point cloud data set contained in a cuboid which takes the three-dimensional coordinates as its center and preset length, width and height as its sides;
determining an illumination information graph corresponding to the target position according to the target point cloud data set;
and rendering the virtual object placed at the target position according to the illumination information graph, and displaying the virtual object on the real picture in an overlapping manner through the display screen.
In a third aspect, embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the above method for displaying a virtual object by illumination based on a spatial position.
In the above embodiments of the present application, when a virtual object is placed on a real picture, the three-dimensional coordinates of the target position where the virtual object is to be placed are determined in three-dimensional space, and a target point cloud data set contained in a cuboid centered on those coordinates, with preset length, width and height as its sides, is obtained from the three-dimensional point cloud data set corresponding to the real picture. Because the real environment is described by point clouds, each point cloud datum includes the three-dimensional coordinates, color information and intensity information of the corresponding point on a real object, so the illumination information graph generated from the target point cloud data set accurately reflects the illumination of the real environment at the target position. Rendering the virtual object placed at the target position according to this illumination information graph makes the surface illumination of the virtual object consistent with the real environment, so that the virtual object is realistically superimposed on the real picture, which improves the realism of virtual-real fusion and further improves the AR experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 schematically illustrates the lighting effect at different positions in a real environment provided by an embodiment of the present application;
Fig. 2 is a flowchart of a method for displaying a virtual object by illumination based on a spatial position provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of an interface for placing a virtual object provided by an embodiment of the present application;
Fig. 4 is a flowchart of a method for determining an illumination information graph from three-dimensional point cloud data provided by an embodiment of the present application;
Fig. 5 is a flowchart of a method for rendering a virtual object provided by an embodiment of the present application;
Fig. 6 is a diagram of an AR effect provided by an embodiment of the present application;
Fig. 7 is a block diagram of a display device provided by an embodiment of the present application.
Detailed Description
To make the objects, embodiments and advantages of the present application clearer, the exemplary embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is to be understood that the described exemplary embodiments are only a part of the embodiments of the present application, not all of them.
All other embodiments that a person skilled in the art can derive from the exemplary embodiments described herein without inventive effort fall within the scope of the appended claims. In addition, while the disclosure herein is presented in terms of one or more exemplary examples, it should be appreciated that each aspect of the disclosure may also be implemented separately as a complete embodiment.
In an AR scene, extracting the ambient illumination is especially important for placing a simulated virtual object realistically on a real picture. The virtual object should have an illumination effect consistent with the real environment, which requires the AR device to sense the ambient illumination: in a darker environment, the surface of the virtual object should appear darker; in a brighter environment, it should appear brighter; and when the virtual object has a specular reflective material, its surface should reflect the picture of the real scene.
In a real environment, the illumination of an object differs with its position. As shown in Fig. 1, position A is above the desktop, where the lighting is brighter, while position B is below the desktop, where the lighting is darker. To give virtual objects in an AR scene a more realistic fusion effect, the virtual objects need different illumination information at different positions.
For example, when the virtual object is placed at position A, its surface should be brighter, and if it is made of a specular reflective material it should show the reflections seen at position A; when placed at position B, its surface should be darker and should show the reflections seen at position B.
With the method and equipment for displaying a virtual object by illumination based on a spatial position provided in the embodiments of the present application, the illumination effect of the virtual object changes with its placement position. Specifically, when a user places a virtual object, the illumination information at the placement position is determined from the point cloud data around that position in three-dimensional space. This information gives the virtual object the correct brightness variation and, when its surface is made of a specular reflective material, the correct reflection effect. The virtual object is then realistically superimposed on the real picture based on the determined illumination information, which improves the realism of virtual-real fusion and the user's AR experience.
Referring to Fig. 2, a flowchart of the method for displaying a virtual object by illumination based on a spatial position provided in an embodiment of the present application, the method is executed by a display device supporting the AR function, including but not limited to a smart phone, tablet, laptop, smart TV, desktop computer, vehicle-mounted device, or wearable device. The process mainly includes the following steps:
S201: responding to the operation of placing the virtual object on the real picture, and constructing a three-dimensional point cloud data set corresponding to the real picture by adopting the SLAM technique.
In an optional implementation, taking a tablet as the display device, as shown in Fig. 3, the user opens an AR application, which starts the camera to capture and display a color picture of the real environment. During display, the device detects whether the user touches or clicks the display screen; if so, the touched or clicked position is taken as the target position for placing a virtual object in the real picture, and a prompt pops up asking whether the user wants to place the virtual object. When the user selects the "yes" option, the AR application uses SLAM (Simultaneous Localization and Mapping) to construct a three-dimensional point cloud data set corresponding to the real picture, where each point cloud datum in the set includes the three-dimensional coordinates, color information and intensity information of the corresponding point on a real object in three-dimensional space.
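A minimal sketch of the data this step produces is shown below; the names and the parallel-array layout are illustrative assumptions, not taken from the patent:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class CloudPoint:
    """One entry of the SLAM-built point cloud data set (illustrative)."""
    xyz: np.ndarray      # (3,) world coordinates, e.g. in meters
    rgb: np.ndarray      # (3,) color in [0, 1]
    intensity: float     # scalar brightness of the point

# For bulk processing it is convenient to keep the whole set as
# parallel arrays instead of a list of objects:
points_xyz = np.empty((0, 3))   # N x 3 coordinates
points_rgb = np.empty((0, 3))   # N x 3 colors
points_int = np.empty((0,))     # N intensities
```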
S202: determining the three-dimensional coordinates, in three-dimensional space, of the target position where the virtual object is placed.
In the embodiment of the application, the real picture captured by the camera of the display device is shown on a two-dimensional display screen, and the user selects the target position for the virtual object on the real picture by touching or clicking the screen; the display device determines the two-dimensional coordinates of the target position on the display screen from the touch or click. In addition, while the display device shows the real picture, the AR application establishes a virtual screen corresponding to the display screen in three-dimensional space. After the two-dimensional coordinates are determined, a ray is emitted from them along the direction perpendicular to the display screen, and the three-dimensional coordinates of the intersection of this ray with the virtual screen are taken as the three-dimensional coordinates of the target position.
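A rough sketch of this 2D-to-3D conversion follows, assuming the virtual screen is modeled as a plane with a known point and normal in world space; the function names, the pixel-to-world mapping `px_to_world` and the plane representation are all illustrative assumptions:

```python
import numpy as np

def touch_to_target(touch_px, px_to_world, screen_normal,
                    plane_point, plane_normal):
    """Map a 2D touch point to the 3D target position (sketch).

    px_to_world converts the touch pixel to a world-space point on the
    display; the ray is then cast perpendicular to the display and
    intersected with the virtual screen plane.
    """
    start = px_to_world(touch_px)                       # point on the display
    d = screen_normal / np.linalg.norm(screen_normal)   # ray direction
    denom = d @ plane_normal
    if abs(denom) < 1e-9:
        return None                                     # ray parallel to plane
    t = ((plane_point - start) @ plane_normal) / denom  # ray/plane solve
    return start + t * d                                # 3D target position
```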
S203: acquiring, from the three-dimensional point cloud data set, a target point cloud data set contained in a cuboid which takes the three-dimensional coordinates as its center and the preset length, width and height as its sides.
Generally, the three-dimensional point cloud data set covers the whole real picture, while the virtual object occupies only a local region of it, so a cuboid can be constructed centered on the three-dimensional coordinates with the preset length, width and height as its sides. The preset length, width and height can be set from empirical values or according to the virtual scene; the smaller the cuboid, the less computation illumination estimation requires. Optionally, the preset length, width and height are greater than those of the virtual object. The point cloud data contained in this cuboid are then obtained from the point cloud data set of the real picture, yielding the point cloud of the real objects around the virtual object and improving the accuracy of the illumination estimation.
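Under the assumption that the cuboid is axis-aligned in world space (the patent does not fix its orientation), the crop is a simple mask over the parallel arrays sketched earlier; `crop_cuboid` is a hypothetical name:

```python
import numpy as np

def crop_cuboid(points_xyz, points_rgb, points_int, center, size_lwh):
    """Keep only the points inside the cuboid centered on the target
    position; size_lwh is the preset (length, width, height)."""
    half = np.asarray(size_lwh, dtype=float) / 2.0
    inside = np.all(np.abs(points_xyz - np.asarray(center)) <= half, axis=1)
    return points_xyz[inside], points_rgb[inside], points_int[inside]
```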
S204: determining the illumination information graph corresponding to the target position according to the target point cloud data set.
In S204, each target point cloud datum in the target point cloud data set includes the three-dimensional coordinates, color information and intensity information of a corresponding point on a real object around the position where the virtual object is to be placed, so the ambient illumination of the target position can be estimated accurately, which improves the illumination consistency between the virtual object placed at the target position and the real environment. The specific estimation process of the illumination information is shown in Fig. 4:
S2041: determining at least one surface contained in the target point cloud data set to obtain a surface set.
In an alternative embodiment, a Random Sample Consensus (RANSAC) algorithm is used to fit the target point cloud data set, and the surface set is generated from at least one fitted surface. A fitted surface may be a plane or a curved surface, and different types of surface have different fitting equations.
It should be noted that the surface set may also be obtained by an implicit function method or a triangulation method, which the present application does not limit.
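As a rough illustration of the RANSAC route mentioned above (plane fitting only; the threshold, iteration count and function name are illustrative assumptions, and a production system would more likely rely on an existing library such as Open3D):

```python
import numpy as np

def ransac_plane(points, iters=200, threshold=0.01, seed=0):
    """Fit one plane to a point cloud by RANSAC; returns (point, normal)
    of the best plane and a boolean inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        # Sample three points and form the candidate plane they span.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / norm
        # Count the points within `threshold` of the candidate plane.
        inliers = np.abs((points - p0) @ n) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p0, n)
    return best_model, best_inliers
```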
S2042: uniformly dividing a plurality of directional rays in three-dimensional space by taking the three-dimensional coordinates of the target position as the origin.
In S2042, a reference line is selected with the three-dimensional coordinates of the target position as the origin, and the 360° three-dimensional space is uniformly divided into a plurality of directional rays at a fixed angular step. The more directional rays, the richer the illumination information and the more realistic the illumination effect on the surface of the virtual object.
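One way to realize this subdivision, assuming a simple latitude/longitude grid (the patent does not prescribe the exact scheme), is:

```python
import numpy as np

def directional_rays(step_deg=10.0):
    """Unit direction vectors covering the sphere around the origin at a
    fixed angular step; a smaller step yields more rays and richer
    illumination information."""
    dirs = []
    for theta in np.deg2rad(np.arange(0.0, 180.0, step_deg)):    # polar
        for phi in np.deg2rad(np.arange(0.0, 360.0, step_deg)):  # azimuth
            dirs.append([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])
    return np.asarray(dirs)
```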
S2043: determining, for each directional ray, whether it intersects the target point cloud data closest to the origin in the cuboid; if so, executing S2044, otherwise executing S2045.
In the embodiment of the application, the origin is the target position where the virtual object is placed, and each target point cloud datum in the cuboid corresponds to a point on a real object, including the three-dimensional coordinates, color information and intensity information of that point. From the three-dimensional coordinates of the target position and of the target point cloud data in the cuboid, the target point cloud data closest to the origin can be determined; these data accurately reflect the illumination at the target position. It is then determined whether the directional ray intersects the target point cloud data closest to the origin, and the illumination information corresponding to the target position is determined from the intersection result.
S2044: taking the color information of the point cloud data closest to the origin as the color information of the subsphere corresponding to the central angle of the directional ray.
In the embodiment of the application, the 360° three-dimensional space is divided by a plurality of directional rays, and the central angles corresponding to the directional rays are of equal size, each being 360°/N, where N is the number of directional rays. When a directional ray intersects the target point cloud data closest to the origin, the illumination of the target position can be estimated from those data, so their color information is taken as the color information of the subsphere corresponding to the central angle of that directional ray.
S2045: determining whether the directional ray intersects a surface in the surface set; if so, executing S2046, otherwise executing S2047.
When the directional ray does not intersect the target point cloud data closest to the origin, it can be determined whether the ray intersects one of the surfaces in the surface set, so that the illumination of the target position can be estimated from the target point cloud data on that surface.
S2046: interpolating the color information of the target point cloud data within a preset range on the surface with the color information of the intersection point cloud data, and taking the interpolated color as the color information of the subsphere corresponding to the central angle of the directional ray.
When the directional ray intersects one of the surfaces in the surface set, an alternative embodiment is to interpolate the color information of the target point cloud data adjacent to the intersection point on that surface with the color information of the intersection point cloud data, and to take the interpolated color as the color information of the subsphere corresponding to the central angle of the directional ray.
S2047: interpolating the color information of the subspheres adjacent to the remaining subspheres without color values to obtain the color information of the remaining subspheres.
In the embodiment of the present application, when a directional ray intersects neither the target point cloud data closest to the origin in the cuboid nor any surface in the surface set, the subsphere corresponding to the central angle of that ray is not assigned color information; such subspheres are called remaining subspheres. For each remaining subsphere, the color information of its adjacent subspheres is interpolated, and the interpolated color is taken as its color information.
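The per-ray decision flow of S2043–S2046 can be sketched as follows; the angular tolerance that counts as "intersecting" a cloud point, the 5 cm neighborhood used for the surface interpolation, and the plain averaging are illustrative assumptions left open by the patent:

```python
import numpy as np

def ray_color(origin, direction, pts_xyz, pts_rgb, surfaces, cos_tol=0.999):
    """Color for one directional ray, or None for a remaining subsphere.

    `surfaces` is a list of (plane_point, normal, inlier_mask) tuples as
    produced by the RANSAC sketch above.
    """
    # S2043: does the ray pass through the cloud point nearest the origin?
    offsets = pts_xyz - origin
    dists = np.linalg.norm(offsets, axis=1)
    nearest = int(np.argmin(dists))
    if dists[nearest] > 1e-9:
        cos_angle = (offsets[nearest] / dists[nearest]) @ direction
        if cos_angle > cos_tol:
            return pts_rgb[nearest]          # S2044: use its color directly
    # S2045/S2046: otherwise intersect the fitted surfaces and blend the
    # colors of the cloud points near the hit point.
    for plane_point, normal, inlier_mask in surfaces:
        denom = direction @ normal
        if abs(denom) < 1e-9:
            continue                         # ray parallel to this plane
        t = ((plane_point - origin) @ normal) / denom
        if t <= 0:
            continue                         # plane is behind the origin
        hit = origin + t * direction
        surf_xyz, surf_rgb = pts_xyz[inlier_mask], pts_rgb[inlier_mask]
        near = np.linalg.norm(surf_xyz - hit, axis=1) < 0.05
        if near.any():
            return surf_rgb[near].mean(axis=0)   # simple interpolation
    return None                              # S2047: fill in afterwards
```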
S2048: obtaining, according to the color information of each subsphere determined by the plurality of directional rays, the panoramic image corresponding to the sphere which takes the three-dimensional coordinates as its center and the preset length as its radius, and taking the panoramic image as the illumination information graph corresponding to the target position.
In the embodiment of the application, the subspheres together form a complete sphere which takes the three-dimensional coordinates as its center and the preset length as its radius, and the subsphere corresponding to each directional ray has been assigned color information, so a panoramic image corresponding to the sphere is obtained. This panoramic image is used as the illumination information graph corresponding to the target position, so that the illumination of the target position is estimated in every direction and the illumination effect of a virtual object placed there is improved.
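Representing the panoramic image as an equirectangular map is one natural choice (an assumption; the patent only specifies a spherical illumination image). Each ray's color is written to the pixel its direction maps to:

```python
import numpy as np

def panorama_from_rays(ray_dirs, ray_colors, height=64, width=128):
    """Scatter per-ray colors into an equirectangular panorama."""
    pano = np.zeros((height, width, 3))
    for d, c in zip(ray_dirs, ray_colors):
        if c is None:
            continue  # remaining subspheres are interpolated afterwards
        theta = np.arccos(np.clip(d[2], -1.0, 1.0))    # polar angle
        phi = np.arctan2(d[1], d[0]) % (2 * np.pi)     # azimuth
        row = min(int(theta / np.pi * height), height - 1)
        col = min(int(phi / (2 * np.pi) * width), width - 1)
        pano[row, col] = c
    return pano
```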
S205: rendering the virtual object placed at the target position according to the illumination information graph, and displaying the virtual object superimposed on the real picture.
The rendering process of the virtual object is shown in fig. 5:
S2051: taking the illumination information graph as a sky box surrounding the virtual object, and setting the sky box to be invisible.
In the embodiment of the application, the Unity engine on the display device is configured in advance, and the sky box is set to be invisible by checking the corresponding option, so as to avoid blocking the real picture shown on the display device. The sky box comprises six faces: up, down, left, right, front and rear. Mapping the illumination information graph onto the sky box surrounding the virtual object yields the illumination textures of the virtual object in every direction at the target position, so that the shadows and reflections on the surface of the virtual object can be determined accurately, improving the realism of the illumination.
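An engine such as Unity typically performs this panorama-to-sky-box mapping itself; the sketch below only illustrates how one sky box face could sample the panorama (the face basis vectors, resolution and function names are illustrative assumptions):

```python
import numpy as np

def sample_pano(pano, d):
    """Look up the panorama color along unit direction d."""
    h, w, _ = pano.shape
    theta = np.arccos(np.clip(d[2], -1.0, 1.0))
    phi = np.arctan2(d[1], d[0]) % (2 * np.pi)
    return pano[min(int(theta / np.pi * h), h - 1),
                min(int(phi / (2 * np.pi) * w), w - 1)]

def skybox_face(pano, forward, right, up, size=32):
    """Render one of the six sky box faces from the illumination graph."""
    face = np.zeros((size, size, 3))
    for i in range(size):
        for j in range(size):
            u = 2.0 * (j + 0.5) / size - 1.0   # pixel -> [-1, 1]
            v = 2.0 * (i + 0.5) / size - 1.0
            d = forward + u * right + v * up   # direction through the pixel
            face[i, j] = sample_pano(pano, d / np.linalg.norm(d))
    return face
```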
S2052: and rendering the virtual object placed at the target position according to the illumination texture of each direction of the sky box.
Generally, virtual objects have irregular shapes and different virtual objects have different surfaces, while the sky box is a regular hexahedron that can surround any of them. Rendering the virtual object with the sky box therefore applies illumination textures in the up-down, left-right and front-back directions to virtual objects of any shape, improving compatibility across virtual objects.
When the virtual object is rendered and displayed with the method provided in the embodiments of the application, the illumination information graph at the spatial position where the virtual object is placed can be estimated accurately. Setting this graph as the sky box in the Unity engine changes the surface brightness of the virtual object, and when the virtual object is made of a specular reflective material, the picture in the illumination information graph is reflected on it, achieving the effect of reflecting the real environment. This improves the illumination consistency between the virtual object and the real environment, makes virtual-real fusion more realistic, and helps improve the AR experience.
Referring to Fig. 6, an AR effect graph obtained with the method of the embodiment of the present application is shown, in which the teapot is the added virtual object; the surface brightness and shadow of the teapot are consistent with the illumination intensity of the real environment, improving the realism of the picture.
Based on the same technical concept, the embodiment of the present application provides a display device, which supports an AR function, can implement the method steps of displaying a virtual object based on illumination of a spatial location in the above embodiments, and can achieve the same technical effect.
Referring to fig. 7, the display device includes a processor 701, a memory 702, a camera 703 and a display screen 704, wherein the display screen 704, the camera 703 and the memory 702 are connected to the processor 701 through a bus 705;
the camera 703 is used for acquiring a real picture;
the memory 702 stores a computer program, and the processor 701 performs the following operations according to the computer program stored in the memory 702:
in response to the operation of placing a virtual object on the real picture displayed by the display screen 704, constructing a three-dimensional point cloud data set corresponding to the real picture by adopting a simultaneous localization and mapping (SLAM) technique;
determining three-dimensional coordinates of a target position where the virtual object is placed in a three-dimensional space;
acquiring, from the three-dimensional point cloud data set, a target point cloud data set contained in a cuboid which takes the three-dimensional coordinates as its center and preset length, width and height as its sides;
determining an illumination information graph corresponding to the target position according to the target point cloud data set;
rendering the virtual object placed at the target position according to the illumination information map, and displaying the virtual object on the real picture in an overlapping manner through the display screen 704.
Optionally, the processor 701 determines, according to the target point cloud data set, an illumination information map corresponding to the target position, and specifically performs the following operations:
determining at least one surface contained in the target point cloud data set to obtain a surface set, wherein the surface in the surface set is a plane or a curved surface;
uniformly dividing a plurality of directional rays in the three-dimensional space by taking the three-dimensional coordinates as an origin;
for each directional ray, if the directional ray intersects with the target point cloud data which is closest to the origin in the cuboid, using the color information of the closest target point cloud data as the color information of a subsphere corresponding to the spherical center angle of the directional ray; or if the direction ray intersects with one surface in the surface set, interpolating color information of target point cloud data in a preset range on the surface and color information of the intersected point cloud data, and taking the color information after interpolation as color information of a subsphere corresponding to the central angle of the direction ray;
and obtaining a panoramic image corresponding to a spherical surface with the three-dimensional coordinate as a sphere center and a preset length as a radius according to the color information of each subsphere determined by the rays in the multiple directions, and taking the panoramic image as an illumination information image corresponding to the target position.
Optionally, when there are remaining subspheres on the sphere to which no color information has been assigned, the processor 701 further performs the following operation:
interpolating the color information of the subspheres adjacent to the remaining subspheres on the sphere to obtain the color information of the remaining subspheres.
Optionally, the processor 701 renders the virtual object placed at the target position according to the illumination information map, and specifically performs the following operations:
the illumination information graph is used as a sky box surrounding the virtual object, and the sky box is set to be invisible so as to avoid blocking the real picture, the sky box comprising illumination textures in the up, down, left, right, front and rear directions;
and rendering the virtual object placed at the target position according to the illumination texture of each direction of the sky box.
Optionally, the processor 701 determines a three-dimensional coordinate of a target position where the virtual object is placed in a three-dimensional space, and the specific operation is:
determining two-dimensional coordinates of a target position where the virtual object is placed on the display screen;
and taking the two-dimensional coordinates as a starting point, emitting a ray along the direction perpendicular to the display screen, and determining the three-dimensional coordinates of the intersection of the ray with the virtual screen corresponding to the display screen in three-dimensional space as the three-dimensional coordinates of the target position.
The processor referred to in Fig. 7 in this embodiment may be a Central Processing Unit (CPU), a general-purpose processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination of computing devices, e.g., one or more microprocessors, or a DSP and a microprocessor. The memory may be integrated in the processor or provided separately from the processor.
It should be noted that Fig. 7 is only an example and shows only the hardware necessary for the AR-capable display device to perform the method steps of displaying a virtual object by illumination based on a spatial position provided in the embodiments of the present application; the display device also includes common human-computer interaction hardware not shown, such as a speaker, microphone, mouse and keyboard.
Embodiments of the present application further provide a computer-readable storage medium for storing instructions, which when executed, may implement the method of the foregoing embodiments.
The embodiments of the present application also provide a computer program product for storing a computer program, where the computer program is used to execute the method of the foregoing embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for displaying a virtual object by illumination based on a spatial position, applied to an augmented reality (AR) scene, comprising the following steps:
responding to the operation of placing a virtual object on a real picture, and constructing a three-dimensional point cloud data set corresponding to the real picture by adopting a simultaneous localization and mapping (SLAM) technique;
determining three-dimensional coordinates of a target position where the virtual object is placed in a three-dimensional space;
acquiring, from the three-dimensional point cloud data set, a target point cloud data set contained in a cuboid which takes the three-dimensional coordinates as its center and preset length, width and height as its sides;
determining an illumination information graph corresponding to the target position according to the target point cloud data set;
and rendering the virtual object placed at the target position according to the illumination information graph, and displaying the virtual object on the real picture in an overlapping manner.
2. The method of claim 1, wherein determining the illumination information map corresponding to the target location from the target point cloud dataset comprises:
determining at least one surface contained in the target point cloud data set to obtain a surface set, wherein the surface in the surface set is a plane or a curved surface;
uniformly dividing a plurality of directional rays in the three-dimensional space by taking the three-dimensional coordinates as an origin;
for each directional ray, if the directional ray intersects with the target point cloud data which is closest to the origin in the cuboid, using the color information of the closest target point cloud data as the color information of a subsphere corresponding to the spherical center angle of the directional ray; or if the direction ray intersects with one surface in the surface set, interpolating color information of target point cloud data in a preset range on the surface and color information of the intersected point cloud data, and taking the color information after interpolation as color information of a subsphere corresponding to the central angle of the direction ray;
and obtaining a panoramic image corresponding to a spherical surface with the three-dimensional coordinate as a sphere center and a preset length as a radius according to the color information of each subsphere determined by the rays in the multiple directions, and taking the panoramic image as an illumination information image corresponding to the target position.
3. The method of claim 2, wherein when there are remaining subspheres on the sphere to which no color information has been assigned, the method further comprises:
interpolating the color information of the subspheres adjacent to the remaining subspheres on the sphere to obtain the color information of the remaining subspheres.
4. The method of claim 1, wherein rendering the virtual object placed at the target location according to the lighting information map comprises:
the illumination information graph is used as a sky box surrounding the virtual object, and the sky box is set to be invisible so as to avoid blocking the real picture, the sky box comprising illumination textures in the up, down, left, right, front and rear directions;
and rendering the virtual object placed at the target position according to the illumination texture of each direction of the sky box.
5. The method of claim 1, wherein determining three-dimensional coordinates in three-dimensional space of a target location for placement of the virtual object comprises:
determining two-dimensional coordinates of a target position where the virtual object is placed on the display screen;
and taking the two-dimensional coordinates as a starting point, emitting a ray along the direction perpendicular to the display screen, and determining the three-dimensional coordinates of the intersection of the ray with the virtual screen corresponding to the display screen in three-dimensional space as the three-dimensional coordinates of the target position.
6. A display device, characterized by supporting an augmented reality (AR) function and comprising a processor, a memory, a camera and a display screen, wherein the display screen, the camera and the memory are connected to the processor through a bus;
the camera is used for collecting real pictures;
the memory stores a computer program, and the processor performs the following operations according to the computer program:
responding to the operation of placing a virtual object on a real picture displayed by the display screen, and constructing a three-dimensional point cloud data set corresponding to the real picture by adopting a simultaneous localization and mapping (SLAM) technique;
determining three-dimensional coordinates of a target position where the virtual object is placed in a three-dimensional space;
acquiring, from the three-dimensional point cloud data set, a target point cloud data set contained in a cuboid which takes the three-dimensional coordinates as its center and preset length, width and height as its sides;
determining an illumination information graph corresponding to the target position according to the target point cloud data set;
and rendering the virtual object placed at the target position according to the illumination information graph, and displaying the virtual object on the real picture in an overlapping manner through the display screen.
7. The display device of claim 6, wherein the processor determines, from the target point cloud dataset, an illumination information map corresponding to the target location by:
determining at least one surface contained in the target point cloud data set to obtain a surface set, wherein the surface in the surface set is a plane or a curved surface;
uniformly dividing a plurality of directional rays in the three-dimensional space by taking the three-dimensional coordinates as an origin;
for each directional ray, if the directional ray intersects with the target point cloud data which is closest to the origin in the cuboid, using the color information of the closest target point cloud data as the color information of a subsphere corresponding to the spherical center angle of the directional ray; or if the direction ray intersects with one surface in the surface set, interpolating color information of target point cloud data in a preset range on the surface and color information of the intersected point cloud data, and taking the color information after interpolation as color information of a subsphere corresponding to the central angle of the direction ray;
and obtaining a panoramic image corresponding to a spherical surface with the three-dimensional coordinate as a sphere center and a preset length as a radius according to the color information of each subsphere determined by the rays in the multiple directions, and taking the panoramic image as an illumination information image corresponding to the target position.
8. The display device of claim 7, wherein when there are remaining subspheres on the sphere to which no color information has been assigned, the processor further performs the following operation:
interpolating the color information of the subspheres adjacent to the remaining subspheres on the sphere to obtain the color information of the remaining subspheres.
9. The display device of claim 6, wherein the processor renders the virtual object placed at the target location according to the lighting information map by:
the illumination information graph is used as a sky box surrounding the virtual object, and the sky box is set to be invisible so as to avoid blocking the real picture, the sky box comprising illumination textures in the up, down, left, right, front and rear directions;
and rendering the virtual object placed at the target position according to the illumination texture of each direction of the sky box.
10. The display device of claim 6, wherein the processor determines three-dimensional coordinates in three-dimensional space of a target location for placement of the virtual object by:
determining two-dimensional coordinates of a target position where the virtual object is placed on the display screen;
and taking the two-dimensional coordinates as a starting point, emitting a ray along the direction perpendicular to the display screen, and determining the three-dimensional coordinates of the intersection of the ray with the virtual screen corresponding to the display screen in three-dimensional space as the three-dimensional coordinates of the target position.
CN202210206906.3A 2022-03-04 2022-03-04 Method and equipment for displaying virtual object by illumination based on spatial position Pending CN114663632A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210206906.3A CN114663632A (en) 2022-03-04 2022-03-04 Method and equipment for displaying virtual object by illumination based on spatial position

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210206906.3A CN114663632A (en) 2022-03-04 2022-03-04 Method and equipment for displaying virtual object by illumination based on spatial position

Publications (1)

Publication Number Publication Date
CN114663632A 2022-06-24

Family

ID=82028202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210206906.3A Pending CN114663632A (en) 2022-03-04 2022-03-04 Method and equipment for displaying virtual object by illumination based on spatial position

Country Status (1)

Country Link
CN (1) CN114663632A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330805A (en) * 2022-10-17 2022-11-11 江苏贯森新材料科技有限公司 Laser radar-based method for detecting abrasion of high-voltage cable protective layer at metal bracket
CN115631291A (en) * 2022-11-18 2023-01-20 如你所视(北京)科技有限公司 Real-time re-illumination method and apparatus, device, and medium for augmented reality



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination