CN113012299A - Display method and device, equipment and storage medium - Google Patents

Display method and device, equipment and storage medium Download PDF

Info

Publication number
CN113012299A
Authority
CN
China
Prior art keywords
image data
effect
virtual object
image
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110198389.5A
Other languages
Chinese (zh)
Inventor
侯欣如
栾青
郑少林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202110198389.5A priority Critical patent/CN113012299A/en
Publication of CN113012299A publication Critical patent/CN113012299A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/60 Shadow generation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/296 Synchronisation thereof; Control thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/21 Collision detection, intersection

Abstract

The embodiments of the present application disclose a display method and apparatus, a device, and a storage medium; the method includes the following steps: performing three-dimensional reconstruction on the current real scene according to acquired first image data of the current real scene to obtain a three-dimensional scene model matched with the current real scene; acquiring second image data of the current real scene and rendering effect parameters corresponding to the second image data; and, in the case that a specific virtual object is rendered on the second image data using the rendering effect parameters, generating, according to the three-dimensional scene model, an Augmented Reality (AR) effect in which the second image data and the specific virtual object are superimposed, and displaying the AR effect on a display device.

Description

Display method and device, equipment and storage medium
Technical Field
Embodiments of the present application relate to Augmented Reality (AR) technology, and in particular, but not exclusively, to a display method and apparatus, a device, and a storage medium.
Background
AR technology is a technology that skillfully fuses a specific virtual object with a real scene. It makes wide use of technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing: a computer-generated specific virtual object, such as text, an image, a three-dimensional model, music, or video, is simulated and then applied to the real scene, and the two kinds of information complement each other, thereby 'augmenting' the real scene. The realism of the AR effect directly influences how real the experience feels to the user.
Disclosure of Invention
In view of this, the display method and apparatus, device, and storage medium provided in the embodiments of the present application can generate an AR effect in which a real scene image (i.e., the second image data) and a specific virtual object are superimposed, according to a three-dimensional scene model reconstructed in real time, so that rendering processing such as collision, occlusion, and shadow can be applied to the specific virtual object, making the AR effect more realistic and improving the user's AR experience. The display method and apparatus, device, and storage medium are implemented as follows:
the display method provided by the embodiment of the application comprises the following steps: according to the acquired first image data of the current real scene, three-dimensional reconstruction is carried out on the current real scene to obtain a three-dimensional scene model matched with the current real scene; acquiring second image data of a current real scene and rendering effect parameters corresponding to the second image data; and under the condition that the specific virtual object is rendered on the second image data by using the rendering effect parameter, generating an AR effect of the second image data and the specific virtual object which are superposed according to the three-dimensional scene model, and displaying the AR effect on a display device. Therefore, the generated AR effect is more real based on the real-time three-dimensional model reconstruction.
In some embodiments, the method further comprises: acquiring an image set of the current real scene, wherein the image set is obtained by a binocular camera of the display device shooting multiple times within a first time period before the current moment; and selecting, from the set of images, a target image pair as the first image data.
In this way, depth information of objects at longer distances can be calculated from the image data acquired by the binocular camera, so that the reconstructed three-dimensional scene model includes three-dimensional models of such distant objects, and the display method is applicable to a wider range of scenes.
In some embodiments, the selecting a target image pair from the set of images as the first image data comprises: selecting, from the image set, a first target image and a second target image whose shooting time difference meets the synchronization precision as the target image pair, and using the target image pair as the first image data; the first target image is captured by a first camera of the binocular camera in the current real scene; the second target image is captured by a second camera of the binocular camera in the current real scene.
In this way, two images whose shooting time difference meets the synchronization precision are selected as the first image data, so that the depth information of the sampling points determined from the image data is more accurate, the reconstructed three-dimensional scene model is closer to the real situation, and the realism of the final AR effect is improved.
In some embodiments, the method further comprises: acquiring a historical target image pair set, wherein the historical target image pair set comprises a plurality of historical target image pairs obtained by shooting in a second time period before the current time and shooting time difference of each historical target image pair; and adjusting hardware parameters of the binocular camera according to the shooting time difference of each historical target image pair so as to reduce the shooting time difference of the binocular camera at the next moment.
In this way, the synchronization precision of the binocular camera hardware is continuously improved, the accuracy of subsequent three-dimensional reconstruction is improved, and the AR effect is more realistic.
In some embodiments, the adjusting the hardware parameters of the binocular camera according to the shooting time difference of each historical target image pair to reduce the shooting time difference of the binocular camera at the next moment includes: determining the mean value of the shooting time difference according to the shooting time difference of each historical target image pair; and adjusting hardware parameters of the binocular camera according to the average value so that the shooting time difference of the binocular camera at the next moment is smaller than the average value.
In some embodiments, the generating, according to the three-dimensional scene model, an AR effect of the second image data superimposed with the specific virtual object includes: performing collision detection on the specific virtual object according to the three-dimensional scene model to obtain a collision detection result; and generating, according to the collision detection result, an AR effect in which the second image data and the specific virtual object are superimposed; this makes the AR effect more realistic.
In some embodiments, the generating, according to the three-dimensional scene model, an AR effect of the second image data superimposed with the specific virtual object includes: determining shadow information in the three-dimensional scene model; performing shadow effect rendering on the specific virtual object according to the shadow information in the three-dimensional scene model to obtain a virtual object with a shadow effect; and generating an AR effect in which the second image data and the virtual object with the shadow effect are superimposed; this makes the AR effect more realistic.
In some embodiments, the generating, according to the three-dimensional scene model, an AR effect of the second image data superimposed with the specific virtual object includes: determining an occlusion relationship between the specific virtual object and an object in the three-dimensional scene model; and generating, according to the occlusion relationship, an AR effect in which the second image data and the specific virtual object are superimposed; this makes the AR effect more realistic.
In some embodiments, the generating, according to the three-dimensional scene model, an AR effect of the second image data superimposed with the specific virtual object includes: determining, from the three-dimensional scene model, a target object whose distance to the binocular camera is smaller than a specific threshold; and superimposing the specific virtual object on the target object contained in the second image data to obtain an AR effect.
In this way, a specific virtual object is superimposed on a target object (i.e., a real object) that is close to the user, so that the user does not feel a sense of clutter when viewing the AR effect.
In some embodiments, the superimposing the specific virtual object on the target object included in the second image data to obtain an AR effect includes: determining a target image area to be blurred in the second image data, wherein the target image area includes a partial area outside the area where the target object is located; blurring the target image area in the second image data to obtain a background-blurred image; and superimposing the specific virtual object on the target object of the background-blurred image to obtain the AR effect.
In this way, the target object and the specific virtual object are displayed clearly while the other areas are blurred, which helps the user quickly find the target.
In some embodiments, the generating, according to the three-dimensional scene model, an AR effect of the second image data superimposed with the specific virtual object includes: performing image segmentation on the second image data to obtain a sub-image set; and superimposing, according to the three-dimensional scene model, the specific virtual object on one or more sub-images in the sub-image set to obtain the AR effect.
In this way, the virtual information is superimposed based on image segmentation, so that the superimposition is more accurate and the AR effect is better.
The display device provided by the embodiment of the present application includes: a reconstruction module, configured to perform three-dimensional reconstruction on the current real scene according to acquired first image data of the current real scene to obtain a three-dimensional scene model matched with the current real scene; an acquisition module, configured to acquire second image data of the current real scene and rendering effect parameters corresponding to the second image data; a generating module, configured to generate, according to the three-dimensional scene model, an AR effect in which the second image data and a specific virtual object are superimposed in the case that the specific virtual object is rendered on the second image data using the rendering effect parameters; and a display module, configured to display the AR effect on a display device.
The electronic device provided by the embodiment of the application comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor executes the program to realize the steps in the display method provided by the embodiment of the application.
The computer-readable storage medium provided by the embodiment of the present application stores thereon a computer program, and the computer program, when executed by a processor, implements the steps in the display method described in the embodiment of the present application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
Fig. 1A is a schematic structural diagram of an AR implementation architecture according to an embodiment of the present disclosure;
fig. 1B is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 2A is a schematic flow chart illustrating an implementation of a display method according to an embodiment of the present disclosure;
fig. 2B is a schematic diagram of an implementation process for generating an AR effect according to an embodiment of the present disclosure;
fig. 2C is a schematic view of another implementation flow for generating an AR effect according to the embodiment of the present application;
fig. 2D is a schematic diagram of another implementation flow for generating an AR effect according to the embodiment of the present application;
fig. 3 is a schematic flow chart illustrating an implementation of another display method according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating AR effect display provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a display device according to an embodiment of the present disclosure;
fig. 6 is a hardware entity diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, specific technical solutions of the present application will be described in further detail below with reference to the accompanying drawings in the embodiments of the present application. The following examples are intended to illustrate the present application but are not intended to limit the scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
It should be noted that the terms "first/second/third" referred to in the embodiments of the present application merely distinguish similar or different objects and do not represent a specific ordering of the objects; it should be understood that "first/second/third" may be interchanged, where permitted, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
The AR implementation architecture described in this application is for more clearly illustrating the technical solutions of the embodiments of the present application, and does not constitute a limitation on the technical solutions provided by the embodiments of the present application. As can be known to those skilled in the art, as the AR technology evolves, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
An AR implementation architecture is provided in an embodiment of the present application, and fig. 1A is a schematic structural diagram of the AR implementation architecture provided in the embodiment of the present application. As shown in fig. 1A, the architecture 10 includes a terminal 101 and a server 102. The terminal 101 and the server 102 may be connected via a network 103. The terminal may be a mobile terminal (e.g., a smartphone), a transparent display device, a slidable display device (e.g., a display screen slidable on a track), or a head-mounted device (e.g., AR glasses), etc. The server may be of various types, for example, a stand-alone server or a server cluster composed of a plurality of servers.
For example, in some embodiments, the terminal 101 may capture a current real scene through a camera module, to obtain first image data and second image data; then, the terminal 101 transmits the two image data to the server 102 through the network 103; the server 102 carries out three-dimensional reconstruction on the current real scene according to the first image data to obtain a three-dimensional scene model matched with the current real scene; the server 102 acquires rendering effect parameters corresponding to the second image data; the server 102 generates an AR effect in which the second image data and the specific virtual object are superimposed according to the three-dimensional scene model when the specific virtual object is rendered on the second image data by using the rendering effect parameter; the server 102 transmits the obtained AR effect to the terminal 101 through the network 103 to cause the terminal 101 to display the received AR effect on the display device.
For another example, in some embodiments, the AR implementation architecture includes a terminal, that is, the process for implementing AR display is implemented by the terminal. For example, the terminal is a head-mounted device, fig. 1B shows a structure of the terminal, and as shown in fig. 1B, the terminal 11 may include: a camera module 111, a processor 112 and a display device 113; the camera module 111 is configured to perform image acquisition on a current real scene, and transmit acquired first image data and second image data to the processor 112; the processor 112 performs three-dimensional reconstruction on the real scene according to the first image data to obtain a three-dimensional scene model matched with the current real scene; the processor 112 obtains a rendering effect parameter corresponding to the second image data; the processor 112 generates an AR effect of the second image data and the specific virtual object superimposed on each other according to the three-dimensional scene model in a case where the specific virtual object is rendered on the second image data using the rendering effect parameter; the processor 112 then transmits the AR effect to the display device 113; the display device 113 displays the AR effect.
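To make the data flow of fig. 1A and fig. 1B easier to follow, the following Python-style sketch restates the terminal-side pipeline in code form. It is purely illustrative and not part of the original disclosure: every class and method name (capture_stereo_pair, reconstruct_scene, render_ar_frame, and so on) is an assumption introduced only for this example.

```python
# Illustrative sketch of the terminal-side pipeline of fig. 1B.
# All names below are hypothetical and only mirror the textual description.

def run_ar_pipeline(camera_module, processor, display_device, virtual_object):
    # The camera module captures the current real scene.
    first_image_data = camera_module.capture_stereo_pair()   # used for 3D reconstruction
    second_image_data = camera_module.capture_rgb_frame()    # used for display

    # The processor reconstructs a 3D scene model matched with the current real scene.
    scene_model = processor.reconstruct_scene(first_image_data)

    # The processor renders the specific virtual object onto the RGB frame using
    # the rendering effect parameters, constrained by the scene model.
    effect_params = processor.get_rendering_effect_params(second_image_data)
    ar_frame = processor.render_ar_frame(
        second_image_data, virtual_object, effect_params, scene_model)

    # The display device shows the superimposed AR effect.
    display_device.show(ar_frame)
```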
The display method provided by the embodiment of the application is applied to electronic equipment. The electronic device may be a server or a terminal, and the embodiment of the present application is not limited thereto. The server may be an independent server or a server cluster composed of a plurality of servers. The terminal may also be diverse, for example, the terminal may be a mobile terminal (e.g., a smartphone), a transparent display device, a slidable display device (e.g., a display screen slidable on a track), or a head-mounted device (e.g., AR glasses), etc.
An embodiment of the present application provides a display method, and fig. 2A is a schematic diagram illustrating an implementation flow of the display method according to the embodiment of the present application, and as shown in fig. 2A, the method may include the following steps 21 to 24:
Step 21, performing three-dimensional reconstruction on the current real scene according to the acquired first image data of the current real scene to obtain a three-dimensional scene model matched with the current real scene.
The first image data may be obtained by photographing a current real scene by a binocular camera of the display apparatus, such that the first image data includes two images, for example, a first target image and a second target image as described in the following embodiments. The first image data may also be obtained by scanning the current real scene through a depth camera, so that the first image data includes depth information of sampling points of the current real scene. The three-dimensional reconstruction technology is used for reconstructing a three-dimensional virtual model of the surface of an object in a current real scene in an electronic device and constructing a complete three-dimensional model of the object. The three-dimensional reconstruction algorithms corresponding to different first image data are different, and are not described herein again.
The depth camera may be, for example, a Time-of-Flight (TOF) camera, a structured-light camera, a laser scanner, or the like.
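For the binocular case, one conventional way to obtain the depth information needed for reconstruction is block-matching stereo on a rectified image pair. The sketch below uses OpenCV for illustration only; the focal length, baseline, and matcher parameters are placeholder assumptions, not values from the application.

```python
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
    """Estimate per-pixel depth (metres) from a rectified 8-bit grayscale stereo pair."""
    # Classical block matching; numDisparities and blockSize are illustrative only.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    depth = np.zeros_like(disparity)
    valid = disparity > 0
    # Pinhole stereo geometry: depth = focal_length * baseline / disparity.
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth  # a depth map that a reconstruction step could fuse into a scene model
```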
Step 22, obtaining second image data of the current real scene and rendering effect parameters corresponding to the second image data.
The second image data may be obtained from the first image data, or may be obtained by shooting through another module independent of the camera module for collecting the first image data, which is not limited in this application. The second image data may be a color image, a grayscale image, or an image of another format. For example, the second image data is a Red Green Blue (RGB) image.
In some embodiments, the rendering effect parameter may be a sticker of the virtual object and/or a texture of the virtual object, or the like.
Step 23, in a case that a specific virtual object is rendered on the second image data by using the rendering effect parameter, generating an AR effect in which the second image data and the specific virtual object are superimposed according to the three-dimensional scene model;
Step 24, displaying the AR effect on a display device.
The display device may or may not be an electronic device that performs the display method described above. For example, when the electronic device executing the display method is a terminal, the display device is a display module on the terminal. For another example, when the electronic device executing the display method is a server, the display device is a terminal-side device, and the server sends the generated AR effect to the display device through a network, so as to display the received AR effect on the display device.
In the embodiment of the application, according to the acquired first image data of the current real scene, three-dimensional reconstruction is carried out on the current real scene to obtain a three-dimensional scene model matched with the current real scene; therefore, the generated AR effect is more real based on real-time three-dimensional model reconstruction. For example, given a three-dimensional model of the current real scene (i.e., a three-dimensional scene model), rendering processing of collision, occlusion, shadow, etc. of a particular virtual object can be implemented, thereby making the AR effect more realistic.
For an application scene capable of displaying a collision effect, in some embodiments, as shown in fig. 2B, the electronic device may implement the following steps 2311 to 2312 to generate an AR effect in which the second image data is superimposed on the specific virtual object according to the three-dimensional scene model in step 23:
and 2311, performing collision detection on the specific virtual object according to the three-dimensional scene model to obtain a collision detection result.
In some embodiments, the electronic device may determine a relative positional relationship between the virtual object and a collision plane in the three-dimensional scene model; and performing virtual-real collision detection according to the relative position relation, so as to perform virtual-real collision response according to a collision detection result, and modify the AR effect at the previous moment to present a collision effect.
Step 2312, generating an AR effect of the second image data superimposed on the specific virtual object according to the collision detection result.
In the embodiment of the present application, since the three-dimensional scene model of the current real scene is known, collision detection can be performed on the specific virtual object to be superimposed, so that an AR effect in which the second image data and the specific virtual object are superimposed can be generated according to the collision detection result, i.e., an AR image with a collision effect is generated compared with the previous AR image; in this way, the user's visual experience is more realistic. For example, if no three-dimensional model of the current real scene has been reconstructed and a virtual ball is thrown into the scene, the AR display shows no collision effect, i.e., the ball does not bounce. If the three-dimensional model of the current real scene is known, the ball can rebound when it collides with the ground or other objects; that is, compared with the AR image at the previous moment, the generated AR effect presents the rebound, which enhances the user's visual sense of realism.
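A minimal sketch of this kind of virtual-real collision handling is given below, assuming the three-dimensional scene model exposes the height of a ground (collision) plane; the physics constants and names are illustrative, not part of the application.

```python
def step_virtual_ball(position, velocity, scene_ground_height,
                      dt=1 / 30, gravity=-9.8, restitution=0.6):
    """Advance a virtual ball by one frame and bounce it off the reconstructed ground."""
    # Simple gravity integration (y is the vertical axis).
    velocity = (velocity[0], velocity[1] + gravity * dt, velocity[2])
    position = tuple(p + v * dt for p, v in zip(position, velocity))

    # Collision detection against the scene model's ground plane.
    if position[1] < scene_ground_height:
        # Collision response: clamp to the surface and reflect the vertical velocity.
        position = (position[0], scene_ground_height, position[2])
        velocity = (velocity[0], -velocity[1] * restitution, velocity[2])
    return position, velocity
```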
For an application scene capable of displaying a shadow effect, in some embodiments, as shown in fig. 2C, the electronic device may implement the generating, in step 23, of an AR effect in which the second image data and the specific virtual object are superimposed according to the three-dimensional scene model through the following steps 2321 to 2323:
step 2321, determining shadow information in the three-dimensional scene model.
The three-dimensional scene model can carry the ambient light information of the current real scene, and the electronic equipment can determine the shadow information in the three-dimensional scene model according to the ambient light information. The ambient light information may include at least one of: light source position, light source color temperature, light source darkness and environmental texture information.
Step 2322, according to the shadow information in the three-dimensional scene model, performing shadow effect rendering on the specific virtual object to obtain a virtual object with a shadow effect;
step 2323, an AR effect of the second image data superimposed with the virtual object with the shadow effect is generated.
Understandably, if the shadow information in the three-dimensional scene model is known, a virtual object with a shadow effect can be obtained, so that the shadow of the virtual object appears in the generated AR effect; thus, the user's visual perception is more realistic.
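One common way to realise such a shadow effect, used here purely as an illustrative assumption, is to project the virtual object's vertices onto the reconstructed ground plane along the estimated light direction (planar projected shadows).

```python
import numpy as np

def project_shadow(vertices, light_dir, ground_height=0.0):
    """Project virtual-object vertices onto a horizontal ground plane along light_dir.

    vertices: (N, 3) array of object vertices in world coordinates (y is up).
    light_dir: 3-vector from the light towards the scene; assumed to have a
    non-zero vertical component.
    Returns the (N, 3) shadow footprint lying on the ground plane.
    """
    vertices = np.asarray(vertices, dtype=np.float32)
    light_dir = np.asarray(light_dir, dtype=np.float32)
    # For each vertex solve v + t * light_dir such that its y-coordinate equals ground_height.
    t = (ground_height - vertices[:, 1]) / light_dir[1]
    shadow = vertices + t[:, None] * light_dir
    shadow[:, 1] = ground_height  # numerically pin the shadow onto the plane
    return shadow
```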
For an application scene capable of displaying an occlusion effect, in some embodiments, as shown in fig. 2D, the electronic device may implement the generating, in step 23, of an AR effect in which the second image data and the specific virtual object are superimposed according to the three-dimensional scene model through the following steps 2331 to 2332:
step 2331, determine the occlusion relationship between the particular virtual object and the object in the three-dimensional scene model.
For example, according to the position of the superposition of a specific virtual object, the occlusion relationship between the virtual object and the objects in the three-dimensional scene model is determined, i.e. it is determined which objects should occlude the virtual object and which objects should be occluded by the virtual object.
Step 2332, generating an AR effect of the second image data superimposed on the specific virtual object according to the occlusion relationship.
For example, if a virtual tree is to be placed at a location 3 meters away in the user's field of view, then objects behind the tree should be occluded and invisible to the user, while objects in front of the tree should occlude the tree, so that natural laws are obeyed. In the embodiment of the present application, since the three-dimensional model of the current real scene is known, the occlusion relationship between the specific virtual object and the objects in the three-dimensional scene model can be determined; a more realistic AR effect can therefore be generated according to the occlusion relationship, further enhancing the user's AR experience.
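A per-pixel depth test is one straightforward way to realise such an occlusion relationship. The sketch below assumes a depth map rendered from the three-dimensional scene model and an RGBA rendering (with its own depth map) of the virtual object, both aligned with the second image data; these inputs are assumptions for illustration.

```python
import numpy as np

def composite_with_occlusion(real_rgb, virtual_rgba, virtual_depth, scene_depth):
    """Overlay the virtual object only where it is nearer than the reconstructed scene."""
    # Visible where the virtual object has coverage and lies in front of real geometry.
    alpha = virtual_rgba[..., 3:4].astype(np.float32) / 255.0
    in_front = (virtual_depth < scene_depth)[..., None]
    mask = alpha * in_front

    out = real_rgb.astype(np.float32) * (1.0 - mask) \
        + virtual_rgba[..., :3].astype(np.float32) * mask
    return out.astype(np.uint8)
```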
Given the three-dimensional scene model, the electronic device may also implement the display of other AR effects to meet more scene requirements. For example, given a three-dimensional scene model, the electronic device can determine which objects are closer to the user and which objects are farther from the user in the real scene where the user is located, so as to perform some processing to better meet the requirements of some application scenes, thereby improving the user experience.
In view of this, an embodiment of the present application provides a display method, and fig. 3 is a schematic flow chart illustrating an implementation of the display method according to the embodiment of the present application, as shown in fig. 3, the method may include the following steps 301 to 306:
step 301, acquiring an image set of a current real scene, where the image set is obtained by shooting a binocular camera of a display device for multiple times in a first time period before a current time.
Compared with depth information obtained by TOF scanning, three-dimensional reconstruction based on image data captured by the binocular camera can obtain depth information for objects at longer distances, so that the display method is applicable to a wider range of scenes. This is because, limited by its hardware, TOF can only be used in close-range scenes, whereas the binocular camera can capture all of the scene content in its field of view, and objects at longer distances in the real scene can be modeled from that content; the AR display scheme provided by the embodiments of the present application is therefore more general and more widely applicable. Moreover, the cost of a TOF camera module is far higher than that of a binocular camera, so an AR display scheme based on a binocular camera, and the corresponding products, are easier to popularize.
For example, the image set includes 40 images, and the 40 images are captured by the binocular camera within 1 second before the current time. For example, if the current time is 10:55:03, the 40 images are captured in the time period from 10:55:02 to 10:55:03; 20 different first images are captured by a first camera of the binocular camera, and the remaining 20 different second images are captured by a second camera of the binocular camera.
Step 302, selecting a target image pair from the image set as first image data.
It will be appreciated that, ideally, the timestamps of the two images captured by the binocular camera at a given moment should be identical, in which case the depth information calculated from the two images is correct and matches the current actual situation. In practice, however, because of the binocular camera's hardware and other factors, the two images captured at a given moment cannot be perfectly synchronized, and their capture times differ by a certain amount. This causes the following problem: the depth information calculated from the two images has a certain error, which reduces the accuracy of the three-dimensional scene model reconstructed from the depth information of the pixel points and noticeably affects the realism of the final AR effect. In view of this, in the embodiment of the present application, two images captured by the binocular camera with as small a time difference as possible are found, and the three-dimensional scene model is reconstructed from them, which improves the accuracy of the three-dimensional scene model and thus the realism of the final AR effect.
In some embodiments, the electronic device may implement step 302 by: selecting, from the image set, a first target image and a second target image whose shooting time difference meets the synchronization precision as the target image pair, and using the target image pair as the first image data; the first target image is captured by a first camera of the binocular camera in the current real scene, and the second target image is captured by a second camera of the binocular camera in the current real scene.
In some embodiments, the electronic device may take, as the target image pair, the first target image and the second target image in the image set whose shooting time difference is the smallest or smaller than a threshold. It can be understood that the smaller the time difference between the two images used to construct the three-dimensional scene model, the closer the calculated depth information of the sampling points is to the real situation, and thus the closer the reconstructed three-dimensional model is to the real situation. The accuracy of the three-dimensional scene model is the basis of a good AR effect.
For example, if the first camera of the binocular camera captures an image at 9:01 and the second camera captures an image at 9:03, then in a scene where the binocular camera moves rapidly it is of little use to reconstruct the current real scene from these two images, because the scene content in the two images may be very different. It is therefore necessary to ensure that the shooting times of the two cameras of the binocular camera are synchronized; otherwise, the reconstructed three-dimensional scene model has a large error and does not match the current real scene, the subsequently displayed AR effect is incorrect, and the user's visual perception is that the AR content looks very unreal.
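A minimal sketch of this selection step is shown below, assuming each captured frame carries a timestamp in seconds; the synchronization threshold is an illustrative value, not one specified by the application.

```python
def select_target_image_pair(first_cam_frames, second_cam_frames, sync_threshold_s=0.005):
    """Pick the frame pair (one per camera) whose capture times are closest.

    Each frame is assumed to be a (timestamp_s, image) tuple.
    Returns the best pair, or None if no pair meets the synchronization precision.
    """
    best_pair, best_diff = None, float("inf")
    for t1, img1 in first_cam_frames:
        for t2, img2 in second_cam_frames:
            diff = abs(t1 - t2)
            if diff < best_diff:
                best_pair, best_diff = (img1, img2), diff
    return best_pair if best_diff <= sync_threshold_s else None
```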
Of course, the electronic device can also perform AR display and adjust the hardware parameters of the binocular camera at the same time, so that the synchronization precision of the shooting of the binocular camera is improved, and the shooting time difference of the binocular camera at the next moment is reduced. For example, in some embodiments, the electronic device may obtain a set of historical target image pairs including a plurality of historical target image pairs captured within a second time period prior to a current time and a capture time difference for each of the historical target image pairs; and adjusting hardware parameters of the binocular camera according to the shooting time difference of each historical target image pair so as to reduce the shooting time difference of the binocular camera at the next moment and even meet the synchronization precision.
It should be noted that the historical target image pair set may or may not include the target image pair.
It can be understood that adjusting the synchronization accuracy of the binocular camera hardware itself is fundamental. For example, if in a plurality of selected historical target image pairs the shooting time difference between the two images is always 10 seconds, the hardware parameters of the binocular camera can be adjusted based on this difference so that the shooting time difference of the binocular camera becomes smaller than 10 seconds. By repeatedly adjusting the hardware parameters of the binocular camera in this way, the synchronization precision of the binocular camera hardware is continuously improved, the accuracy of subsequent three-dimensional reconstruction is improved, and the AR effect is more realistic.
For example, in some embodiments, the electronic device may determine a mean value of the photographing time differences according to the photographing time differences of each of the pairs of historical target images; and adjusting hardware parameters of the binocular camera according to the average value so that the shooting time difference of the binocular camera at the next moment is smaller than the average value.
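Because the application does not name the specific hardware parameters involved, the sketch below only illustrates the control idea: compute the mean of the historical shooting time differences and feed it back to a hypothetical camera interface as a trigger-offset correction.

```python
def adjust_binocular_sync(historical_time_diffs_s, camera):
    """Nudge the binocular camera so its next shooting time difference falls below the mean.

    historical_time_diffs_s: shooting time differences of past historical target image pairs.
    camera: assumed to expose a set_trigger_offset(seconds) method -- purely hypothetical.
    """
    if not historical_time_diffs_s:
        return
    mean_diff = sum(historical_time_diffs_s) / len(historical_time_diffs_s)
    # Ask the lagging camera to fire earlier by the observed average lag, so the
    # next pair's shooting time difference should come out smaller than the mean.
    camera.set_trigger_offset(-mean_diff)
```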
Step 303, performing three-dimensional reconstruction on the current real scene according to the first image data to obtain a three-dimensional scene model matched with the current real scene;
step 304, obtaining second image data of the current real scene and rendering effect parameters corresponding to the second image data.
In some embodiments, the electronic device may implement step 304 by: controlling an RGB camera of the display device to capture the current real scene to obtain the second image data, where the RGB camera does not belong to the binocular camera; for example, the binocular camera may be a pair of grayscale cameras. In other embodiments, the electronic device may also implement step 304 by: in the case that the binocular camera includes at least one RGB camera, selecting one frame of RGB image from the target image pair as the second image data.
It can be seen that there is at least one RGB camera on the display device. That is, if the binocular camera used to acquire depth information consists of two RGB cameras, no separate RGB camera is needed. Therefore, at most 3 cameras are required (e.g., dual grayscale cameras plus an RGB camera), and at least 2 cameras are required (e.g., dual RGB cameras). At least one of these 2 or 3 cameras is an RGB camera, which is used to present a color picture of the current real scene.
Step 305, in the case of rendering a specific virtual object on the second image data using the rendering effect parameters, determining, from the three-dimensional scene model, a target object whose distance to the binocular camera is smaller than a specific threshold.
The target object may be a real object or a space between real objects.
Step 306, superimposing the specific virtual object on the target object included in the second image data to obtain an AR effect.
For example, suppose a user is standing on a street lined with buildings, and a virtual poster is to be superimposed on these buildings. If such virtual posters were displayed simultaneously on the floors of every building in the field of view, the result would look cluttered. In view of this, in the embodiment of the present application, the poster only needs to be rendered on the floors of the buildings close to the user, so that the display does not look cluttered. A specific virtual object is superimposed on a target object (i.e., a real object) that is close to the user, so that the user does not feel a sense of clutter when viewing the AR effect.
The specific virtual object may vary; different scene and application requirements may call for different specific virtual objects. For example, the specific virtual object may be a poster, a school of fish, a flying dinosaur, a group of penguins, and so forth.
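A sketch of steps 305 and 306 is given below under the assumption that the three-dimensional scene model can be queried as a list of objects, each with a distance to the binocular camera and an image region in the second image data; the distance threshold and the overlay helper are illustrative assumptions.

```python
def overlay_on_nearby_objects(scene_objects, second_image, virtual_object,
                              overlay_fn, distance_threshold_m=10.0):
    """Superimpose the specific virtual object only on objects close to the binocular camera.

    scene_objects: assumed list of (distance_to_camera_m, image_region) entries
    derived from the three-dimensional scene model.
    overlay_fn: hypothetical helper that draws virtual_object into an image region.
    """
    ar_frame = second_image.copy()
    for distance, region in scene_objects:
        if distance < distance_threshold_m:        # step 305: keep only nearby target objects
            ar_frame = overlay_fn(ar_frame, region, virtual_object)  # step 306: superimpose
    return ar_frame
```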
In some embodiments, the electronic device may implement step 306 by: determining a target image area to be blurred in the second image data, wherein the target image area includes a partial area outside the area where the target object is located; blurring the target image area in the second image data to obtain a background-blurred image; and superimposing the specific virtual object on the target object of the background-blurred image to obtain the AR effect. In this way, the target object and its virtual object (for example, prompt information) are displayed clearly, while the partial region of the second image data outside the area where the target object is located is blurred, which helps the user find the target quickly.
For example, when a user is in a busy, complicated place and needs to find a restaurant, the AR display can show a restaurant clearly and blur the other content in the field of view. In this way, the user can be helped to find the restaurant quickly.
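A minimal sketch of the blurring step with OpenCV is shown below, assuming a binary mask marking the area where the target object (and its virtual overlay) is located; the kernel size is an illustrative choice.

```python
import cv2
import numpy as np

def blur_background(second_image, target_mask, ksize=(31, 31)):
    """Blur everything outside the target-object area of the second image data.

    target_mask: uint8 mask, 255 inside the target object's area, 0 elsewhere.
    """
    blurred = cv2.GaussianBlur(second_image, ksize, 0)
    keep_sharp = (target_mask > 0)[..., None]
    # Keep the target area sharp and use the blurred image everywhere else.
    return np.where(keep_sharp, second_image, blurred)
```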
As mentioned above, given the three-dimensional scene model, the electronic device can also implement the display of other AR effects to meet more scene requirements. For example, in the above embodiment, the electronic device superimposes the specific virtual object on the real objects that are closer to the user, so that the user's visual perception is not cluttered. That is, the generating, in step 23, of an AR effect in which the second image data and the specific virtual object are superimposed according to the three-dimensional scene model is realized through the above steps 305 and 306.
For another example, in some embodiments, the electronic device may also implement the generating, in step 23, of an AR effect in which the second image data and the specific virtual object are superimposed according to the three-dimensional scene model as follows: the electronic device performs image segmentation on the second image data to obtain a sub-image set, and superimposes, according to the three-dimensional scene model, the specific virtual object on one or more sub-images in the sub-image set to obtain the AR effect. In this way, the superimposition is more accurate and the AR effect is better.
For example, if all the buildings in a given district are to be changed into castles, the buildings need to be segmented first in order to make the AR display effect better. In this way, virtual objects can be superimposed more accurately: the buildings are changed into castles without affecting other parts such as the ground.
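The segmentation-guided overlay could be organised as in the sketch below, where the segment and render_virtual_onto helpers are hypothetical placeholders for whichever segmentation model and renderer are actually used; the label names are likewise assumptions.

```python
def overlay_on_segments(second_image, virtual_object, segment, render_virtual_onto,
                        wanted_labels=("building",)):
    """Superimpose the virtual object only on selected sub-images of the frame.

    segment: hypothetical function returning a list of (label, mask) pairs for the frame.
    render_virtual_onto: hypothetical function compositing the object inside a mask.
    """
    ar_frame = second_image.copy()
    for label, mask in segment(second_image):
        if label in wanted_labels:
            ar_frame = render_virtual_onto(ar_frame, mask, virtual_object)
    return ar_frame
```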
An exemplary application of the embodiments of the present application in a practical application scenario will be described below.
An RGB picture (an example of the second image data) of the current scene is acquired in real time through a binocular camera; the system processes the RGB pictures collected in real time to model the current real scene in real time, and in practical applications a mature parallax (disparity) modeling algorithm can be used for this modeling; the AR effect is then presented on the reconstructed three-dimensional scene model. The binocular camera comprises dual grayscale cameras and an RGB camera. The reconstructed three-dimensional scene model can be set to be completely or partially transparent, so that when a user views the scene through a device such as AR glasses, a mobile phone, a tablet, or a large transparent screen, the user is given the feeling of seeing an AR effect in the real scene.
In some embodiments, each virtual object (i.e., a particular virtual object) may be rendered independently as a virtual model, or multiple virtual objects may be rendered as a whole.
In some embodiments, parts in the RGB picture may be blurred or overlaid with other processing effects. When the method is implemented, the system can set parameters in advance, and determine the area needing to be processed in the forms of image segmentation and the like.
In some embodiments, based on the timestamps of the images captured by the binocular camera, images whose timestamps are the same (or whose time difference is below a threshold) are selected, so as to achieve binocular camera synchronization. The hardware configuration parameters of the cameras can also be adjusted using the timestamps at which the binocular camera collects images; by continuously adjusting these hardware configuration parameters, the capture times of the images shot by the binocular camera are brought as close together as possible, and the acceptable time error can be customized.
The following application scenario examples (but not limited to the following scenarios):
scene one: as shown in fig. 4, when the user holds the device to watch the sand table, the built three-dimensional scene model may be set to be completely transparent, so that when the user watches through the device, the user watches the real RGB picture shot by the camera of the device, and can see the AR effect. That is, the user's feeling is equivalent to the feeling of seeing the AR effect on the real scene.
Scene two: when a user views an AR special effect (such as a virtual dinosaur or a penguin) through a transparent screen, the reconstructed scene model may be set to be completely transparent. Thus, when viewing through the transparent screen, the user sees the real RGB picture 401 captured by the device's camera together with the virtual information 402 (only part of which is labeled in the figure), and can see the AR effect. The user's experience is equivalent to seeing the AR effect in the real scene, for example seeing penguins walking around in the office.
Based on the foregoing embodiments, the display device provided in the embodiments of the present application includes the modules it comprises and the units comprised in each module, which may be implemented by a processor in an electronic device; of course, they may also be implemented by specific logic circuits. In implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Graphics Processing Unit (GPU), or the like.
Fig. 5 is a schematic structural diagram of a display apparatus according to an embodiment of the present application, and as shown in fig. 5, the apparatus 500 includes a reconstruction module 501, an acquisition module 502, a generation module 503, and a display module 504, where:
the reconstruction module 501 is configured to perform three-dimensional reconstruction on the current real scene according to the acquired first image data of the current real scene, so as to obtain a three-dimensional scene model matched with the current real scene;
an obtaining module 502, configured to obtain second image data of the current real scene and a rendering effect parameter corresponding to the second image data;
a generating module 503, configured to, when a specific virtual object is rendered on the second image data by using the rendering effect parameter, generate an AR effect in which the second image data and the specific virtual object are superimposed according to the three-dimensional scene model;
a display module 504, configured to display the AR effect on a display device.
In some embodiments, the obtaining module 502 is further configured to: acquire an image set of the current real scene, wherein the image set is obtained by a binocular camera of the display device shooting multiple times within a first time period before the current moment; and select, from the set of images, a target image pair as the first image data.
In some embodiments, the obtaining module 502 is configured to: select, from the image set, a first target image and a second target image whose shooting time difference meets the synchronization precision as the target image pair, and use the target image pair as the first image data; the first target image is captured by a first camera of the binocular camera in the current real scene, and the second target image is captured by a second camera of the binocular camera in the current real scene.
In some embodiments, the display device 500 further comprises an adjustment module; the obtaining module 502 is further configured to: acquiring a historical target image pair set, wherein the historical target image pair set comprises a plurality of historical target image pairs obtained by shooting in a second time period before the current time and shooting time difference of each historical target image pair; the adjusting module is used for adjusting hardware parameters of the binocular camera according to the shooting time difference of each historical target image pair so as to reduce the shooting time difference of the binocular camera at the next moment.
In some embodiments, the adjustment module is to: determining the mean value of the shooting time difference according to the shooting time difference of each historical target image pair; and adjusting hardware parameters of the binocular camera according to the average value so that the shooting time difference of the binocular camera at the next moment is smaller than the average value.
In some embodiments, the generating module 503 is configured to: perform collision detection on the specific virtual object according to the three-dimensional scene model to obtain a collision detection result; and generate, according to the collision detection result, an AR effect in which the second image data and the specific virtual object are superimposed.
In some embodiments, the generating module 503 is configured to: determine shadow information in the three-dimensional scene model; perform shadow effect rendering on the specific virtual object according to the shadow information in the three-dimensional scene model to obtain a virtual object with a shadow effect; and generate an AR effect in which the second image data and the virtual object with the shadow effect are superimposed.
In some embodiments, the generating module 503 is configured to: determine an occlusion relationship between the specific virtual object and an object in the three-dimensional scene model; and generate, according to the occlusion relationship, an AR effect in which the second image data and the specific virtual object are superimposed.
In some embodiments, the generating module 503 is configured to: determine, from the three-dimensional scene model, a target object whose distance to the binocular camera is smaller than a specific threshold; and superimpose the specific virtual object on the target object contained in the second image data to obtain an AR effect.
In some embodiments, the generating module 503 is configured to: determine a target image area to be blurred in the second image data, wherein the target image area includes a partial area outside the area where the target object is located; blur the target image area in the second image data to obtain a background-blurred image; and superimpose the specific virtual object on the target object of the background-blurred image to obtain the AR effect.
In some embodiments, the generating module 503 is configured to: perform image segmentation on the second image data to obtain a sub-image set; and superimpose, according to the three-dimensional scene model, the specific virtual object on one or more sub-images in the sub-image set to obtain the AR effect.
The above description of the apparatus embodiments, similar to the above description of the method embodiments, has similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the display method is implemented in the form of a software functional module and sold or used as a standalone product, the display method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a terminal or a server to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, as shown in fig. 6, an electronic device 600 is provided in the embodiment of the present application. The electronic device 600 may be a terminal or a server, and may include a memory 601 and a processor 602, the memory 601 storing a computer program operable on the processor 602, and the processor 602 implementing the steps in the display method provided in the above embodiments when executing the program.
The Memory 601 is configured to store instructions and applications executable by the processor 602, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 602 and modules in the electronic device 600, and may be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM).
Correspondingly, the computer-readable storage medium provided by the embodiment of the present application has a computer program stored thereon, and the computer program, when executed by a processor, implements the steps in the display method provided by the above-mentioned embodiment.
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "some embodiments" or "other embodiments" means that a particular feature, structure or characteristic described in connection with the embodiments is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" or "in some embodiments" or "in other embodiments" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in actual implementation, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be completed by hardware related to program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, if the integrated units described above in the present application are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a terminal or a server to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description covers only the embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art can easily conceive of within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A display method, the method comprising:
performing three-dimensional reconstruction on the current real scene according to the acquired first image data of the current real scene, to obtain a three-dimensional scene model matched with the current real scene;
acquiring second image data of the current real scene and rendering effect parameters corresponding to the second image data;
and under the condition that a specific virtual object is rendered on the second image data using the rendering effect parameters, generating, according to the three-dimensional scene model, an Augmented Reality (AR) effect in which the second image data and the specific virtual object are superimposed, and displaying the AR effect on a display device.
2. The method of claim 1, further comprising:
acquiring an image set of the current real scene, wherein the image set is obtained by a binocular camera of the display device shooting multiple times within a first time period before the current moment;
selecting a target image pair from the image set as the first image data.
3. The method of claim 2, wherein the selecting a target image pair from the image set as the first image data comprises:
selecting, from the image set, a first target image and a second target image whose shooting time difference meets a synchronization precision as the target image pair, and using the target image pair as the first image data;
wherein the first target image is obtained by a first camera of the binocular camera shooting the current real scene, and the second target image is obtained by a second camera of the binocular camera shooting the current real scene.
4. The method of claim 3, further comprising:
acquiring a historical target image pair set, wherein the historical target image pair set comprises a plurality of historical target image pairs obtained by shooting within a second time period before the current moment and the shooting time difference of each historical target image pair;
and adjusting hardware parameters of the binocular camera according to the shooting time difference of each historical target image pair so as to reduce the shooting time difference of the binocular camera at the next moment.
5. The method according to claim 4, wherein the adjusting hardware parameters of the binocular camera according to the shooting time difference of each historical target image pair to reduce the shooting time difference of the binocular camera at the next moment comprises:
determining the mean value of the shooting time difference according to the shooting time difference of each historical target image pair;
and adjusting hardware parameters of the binocular camera according to the average value so that the shooting time difference of the binocular camera at the next moment is smaller than the average value.
6. The method according to any of claims 1 to 5, wherein said generating an AR effect of said second image data superimposed with said specific virtual object according to said three-dimensional scene model comprises:
determining shadow information in the three-dimensional scene model;
according to the shadow information in the three-dimensional scene model, performing shadow effect rendering on the specific virtual object to obtain a virtual object with a shadow effect;
generating an AR effect of the second image data superimposed with the virtual object having the shadow effect.
7. The method according to any of claims 2 to 5, wherein said generating an AR effect of said second image data superimposed with said specific virtual object according to said three-dimensional scene model comprises:
determining, from the three-dimensional scene model, a target object whose distance to the binocular camera is smaller than a specific threshold;
and superimposing the specific virtual object on the target object contained in the second image data to obtain an AR effect.
8. The method according to claim 7, wherein said superimposing the specific virtual object on the target object included in the second image data to obtain an AR effect comprises:
determining a target image area to be blurred in the second image data, wherein the target image area comprises a partial area outside the area where the target object is located;
blurring the target image area in the second image data to obtain a blurred background image;
and superimposing the specific virtual object on the target object of the blurred background image to obtain the AR effect.
9. The method according to any of claims 1 to 5, wherein said generating an AR effect of said second image data superimposed with said specific virtual object according to said three-dimensional scene model comprises:
performing image segmentation on the second image data to obtain a sub-image set;
and superimposing, according to the three-dimensional scene model, the specific virtual object on one or more sub-images in the sub-image set to obtain the AR effect.
10. A display device, comprising:
a reconstruction module, configured to perform three-dimensional reconstruction on the current real scene according to the acquired first image data of the current real scene, to obtain a three-dimensional scene model matched with the current real scene;
an acquisition module, configured to acquire second image data of the current real scene and rendering effect parameters corresponding to the second image data;
a generating module, configured to generate, according to the three-dimensional scene model, an Augmented Reality (AR) effect in which the second image data and a specific virtual object are superimposed, under the condition that the specific virtual object is rendered on the second image data using the rendering effect parameters;
and a display module, configured to display the AR effect on a display device.
11. An electronic device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor implements the steps of the display method of any one of claims 1 to 9 when executing the program.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the display method of any one of claims 1 to 9.
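As an illustration of the image-pair selection and timing adjustment recited in claims 2 to 5, the sketch below picks a first and second target image whose shooting time difference meets a synchronization precision and derives the mean of historical shooting time differences used to adjust the binocular camera; the Frame structure and the set_sync_target hook are assumptions made for the sketch, not part of the claimed method.

```python
from dataclasses import dataclass
from statistics import mean
from typing import List, Optional, Tuple

@dataclass
class Frame:
    timestamp: float  # capture time in seconds
    image: object     # e.g. a numpy array from one camera of the binocular rig

def select_target_pair(left: List[Frame], right: List[Frame],
                       sync_precision: float = 0.005) -> Optional[Tuple[Frame, Frame]]:
    """Select first/second target images whose shooting time difference
    meets the synchronization precision (claim 3)."""
    best = None
    for lf in left:
        for rf in right:
            dt = abs(lf.timestamp - rf.timestamp)
            if dt <= sync_precision and (best is None or dt < best[0]):
                best = (dt, lf, rf)
    return (best[1], best[2]) if best else None

def adjust_from_history(history_diffs: List[float], camera) -> None:
    """Adjust camera hardware parameters so that the next shooting time
    difference stays below the mean of the historical differences (claims 4-5)."""
    avg = mean(history_diffs)
    # 'set_sync_target' is a hypothetical driver hook; real binocular rigs
    # expose different controls (trigger offsets, exposure synchronization, etc.).
    camera.set_sync_target(max_time_difference=avg)
```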
CN202110198389.5A 2021-02-22 2021-02-22 Display method and device, equipment and storage medium Pending CN113012299A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110198389.5A CN113012299A (en) 2021-02-22 2021-02-22 Display method and device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113012299A true CN113012299A (en) 2021-06-22

Family

ID=76406348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110198389.5A Pending CN113012299A (en) 2021-02-22 2021-02-22 Display method and device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113012299A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6084979A (en) * 1996-06-20 2000-07-04 Carnegie Mellon University Method for creating virtual reality
CN109840947A (en) * 2017-11-28 2019-06-04 广州腾讯科技有限公司 Implementation method, device, equipment and the storage medium of augmented reality scene
US20200273240A1 (en) * 2019-02-27 2020-08-27 Verizon Patent And Licensing Inc. Directional occlusion methods and systems for shading a virtual object rendered in a three-dimensional scene
CN111260769A (en) * 2020-01-09 2020-06-09 北京中科深智科技有限公司 Real-time rendering method and device based on dynamic illumination change
CN111815780A (en) * 2020-06-30 2020-10-23 北京市商汤科技开发有限公司 Display method, display device, equipment and computer readable storage medium
CN111815782A (en) * 2020-06-30 2020-10-23 北京市商汤科技开发有限公司 Display method, device and equipment of AR scene content and computer storage medium
CN111862866A (en) * 2020-07-09 2020-10-30 北京市商汤科技开发有限公司 Image display method, device, equipment and computer readable storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113327316A (en) * 2021-06-30 2021-08-31 联想(北京)有限公司 Image processing method, device, equipment and storage medium
WO2023142400A1 (en) * 2022-01-27 2023-08-03 腾讯科技(深圳)有限公司 Data processing method and apparatus, and computer device, readable storage medium and computer program product
CN114615487A (en) * 2022-02-22 2022-06-10 聚好看科技股份有限公司 Three-dimensional model display method and equipment
CN114615487B (en) * 2022-02-22 2023-04-25 聚好看科技股份有限公司 Three-dimensional model display method and device
CN115396656A (en) * 2022-08-29 2022-11-25 歌尔科技有限公司 AR SDK-based augmented reality method, system, device and medium
US11978170B2 (en) 2023-06-15 2024-05-07 Tencent Technology (Shenzhen) Company Limited Data processing method, computer device and readable storage medium

Similar Documents

Publication Publication Date Title
CN113012299A (en) Display method and device, equipment and storage medium
CN109615703B (en) Augmented reality image display method, device and equipment
US9049428B2 (en) Image generation system, image generation method, and information storage medium
CN105704479B (en) The method and system and display equipment of the measurement human eye interpupillary distance of 3D display system
KR20140108128A (en) Method and apparatus for providing augmented reality
US20110216160A1 (en) System and method for creating pseudo holographic displays on viewer position aware devices
CN101631257A (en) Method and device for realizing three-dimensional playing of two-dimensional video code stream
KR20140082610A (en) Method and apaaratus for augmented exhibition contents in portable terminal
EP3547672A1 (en) Data processing method, device, and apparatus
CN114175097A (en) Generating potential texture proxies for object class modeling
CN107862718B (en) 4D holographic video capture method
US20100302234A1 (en) Method of establishing dof data of 3d image and system thereof
CN109791704B (en) Texture rendering method, system and device based on multi-layer UV mapping for free-running FVV application
CN111815786A (en) Information display method, device, equipment and storage medium
CN111833458A (en) Image display method and device, equipment and computer readable storage medium
WO2019050038A1 (en) Image generation method and image generation device
JP2015114905A (en) Information processor, information processing method, and program
CN105611267B (en) Merging of real world and virtual world images based on depth and chrominance information
CN113870213A (en) Image display method, image display device, storage medium, and electronic apparatus
CN108564654B (en) Picture entering mode of three-dimensional large scene
CN113178017A (en) AR data display method and device, electronic equipment and storage medium
CN112017242A (en) Display method and device, equipment and storage medium
CN110784728B (en) Image data processing method and device and computer readable storage medium
Mori et al. An overview of augmented visualization: observing the real world as desired
CN114793276A (en) 3D panoramic shooting method for simulation reality meta-universe platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210622