CN116560511A - Picture display method, device, computer equipment and storage medium - Google Patents

Picture display method, device, computer equipment and storage medium

Info

Publication number
CN116560511A
Authority
CN
China
Prior art keywords
user object
video frame
avatar
display
virtual space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310615359.9A
Other languages
Chinese (zh)
Inventor
孙超 (Sun Chao)
李巍 (Li Wei)
潘卫敏 (Pan Weimin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Beimian Information Technology Co., Ltd.
Original Assignee
Shanghai Beimian Information Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Beimian Information Technology Co., Ltd.
Priority to CN202310615359.9A
Publication of CN116560511A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/01 Indexing scheme relating to G06F 3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data

Abstract

The application relates to a picture display method, apparatus, computer device, storage medium and computer program product. The method comprises the following steps: collecting information from the real environment to obtain control data; acquiring, according to the control data, a video frame of a first user object captured by a video acquisition device, and acquiring, according to the control data, an avatar of a second user object displayed correspondingly to the first user object; and, when the video frame of the first user object and the avatar of the second user object are correspondingly distributed to an augmented reality device for display, displaying the video frame of the first user object and the avatar of the second user object on a display of the augmented reality device according to the positional relationship of the second user object relative to the first user object. With this method, interaction between the real world and the virtual space can be realized, and more information can be displayed.

Description

Picture display method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a computer device, a storage medium, and a computer program product for displaying a picture.
Background
With the development of image processing technology, augmented reality technology has emerged. Augmented reality (AR) computes the position and orientation of the camera image in real time and overlays corresponding virtual imagery, "seamlessly" integrating real-world information and virtual-world information into a new picture.
However, conventional augmented reality directly uses the camera built into the augmented reality device to collect the images and related information of each social member; for any given user, information is displayed only through a single-dimensional user image, so the amount of information presented on the interface is relatively limited.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a picture display method, apparatus, computer device, computer-readable storage medium, and computer program product capable of controlling the amount of information displayed on screen.
In a first aspect, the present application provides a method for displaying a picture. The method comprises the following steps:
collecting information from the real environment to obtain control data;
acquiring, according to the control data, a video frame of a first user object captured by a video acquisition device, and acquiring, according to the control data, an avatar of a second user object displayed correspondingly to the first user object;
and, when the video frame of the first user object and the avatar of the second user object are correspondingly distributed to an augmented reality device for display, displaying the video frame of the first user object and the avatar of the second user object on a display of the augmented reality device according to the positional relationship of the second user object relative to the first user object.
In one embodiment, the augmented reality device comprises a vision sensor and an inertial sensor; the collecting information from the real environment to obtain control data comprises the following steps:
fusing, according to the sensor coupling relation between the vision sensor and the inertial sensor, the data respectively acquired by the vision sensor and the inertial sensor to obtain fused sensor data;
determining a first user object position and a first user object orientation in the real environment according to the fused sensor data;
the acquiring, according to the control data, a video frame of the first user object captured by the video acquisition device, and acquiring, according to the control data, an avatar of the second user object displayed correspondingly to the first user object, comprises the following steps:
acquiring information collected by a first video acquisition device according to the first user object position and the first user object orientation, to obtain a video frame of the first user object;
mapping the first user object position to a first virtual space position in a virtual space, and receiving, within a preset range along the first user object orientation, avatars in the virtual space to obtain the avatar of the second user object; wherein the avatar of the second user object is generated based on a second video frame captured by a second video acquisition device for the second user object.
In one embodiment, the acquiring information collected by the first video acquisition device according to the first user object position and the first user object orientation, to obtain a video frame of the first user object, includes:
acquiring, when the first user object wears the augmented reality device, an object environment video frame within a preset range along the first user object orientation at the first user object position;
acquiring, when the object environment video frame exists, a target part video frame captured by the video acquisition device for the position of the first user object;
wherein the target part video frame is added to the object environment video frame.
In one embodiment, the displaying the video frame of the first user object and the avatar of the second user object on the display of the augmented reality device according to the positional relationship of the second user object relative to the first user object includes:
determining a first virtual space position of the first user object in a virtual space;
selecting, based on the first virtual space position and according to the virtual space positional relation between the second user object and the first user object, a second virtual space position of the second user object in the virtual space;
displaying the video frame of the first user object on a display of the augmented reality device according to the first virtual space position;
and displaying the avatar of the second user object on the display according to the second virtual space position.
In one embodiment, the displaying the video frame of the first user object on the display of the augmented reality device according to the first virtual space position includes:
mapping the first virtual space position according to the structural position relation between the target part of the first user object and the first user object, to obtain a first video frame position;
and displaying the target part of the first user object on a display of the augmented reality device at the first video frame position.
In one embodiment, the method further comprises:
displaying an avatar of the first user object on a display of the augmented reality device according to a first virtual space position of the first user object in a virtual space;
and controlling the avatar of the first user object to move according to the action information extracted from the video frame of the first user object in the process of displaying the avatar of the first user object.
In one embodiment, the method further comprises:
mapping, according to a second virtual space position of the second user object in the virtual space and according to the structural position relation between a target part of the second user object and the second user object, to obtain a second video frame position;
displaying the target part of the second user object on a display of the augmented reality device at the second video frame position;
and controlling, in the process of displaying the avatar of the second user object, the avatar of the second user object to move according to the action information extracted from the video frame of the second user object.
In a second aspect, the present application further provides a picture display device. The device comprises:
the control data acquisition module is used for collecting information from the real environment to obtain control data;
the object information acquisition module is used for acquiring, according to the control data, a video frame of a first user object captured by a video acquisition device, and for acquiring, according to the control data, an avatar of a second user object displayed correspondingly to the first user object;
and the object display module is used for displaying, when the video frame of the first user object and the avatar of the second user object are correspondingly distributed to the augmented reality device for display, the video frame of the first user object and the avatar of the second user object on a display of the augmented reality device according to the positional relationship of the second user object relative to the first user object.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the steps of the picture display method in any of the embodiments described above.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the picture display method in any of the embodiments described above.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the picture display method in any of the embodiments described above.
With the above picture display method, apparatus, computer device, storage medium and computer program product, information is collected from the real environment to obtain control data; a video frame of a first user object captured by a video acquisition device is acquired according to the control data, and an avatar of a second user object displayed correspondingly to the first user object is acquired according to the control data, so that the two scenes of the real world and the virtual space are connected based on the control data. With the real space and the virtual space thus linked, the information of the two kinds of user objects is collected by a video acquisition device external to the augmented reality device, which increases the amount of information collected and allows the added information to be fused organically across the two scenes. When the video frame of the first user object and the avatar of the second user object are correspondingly distributed to the augmented reality device for display, the video frame of the first user object and the avatar of the second user object are displayed on the display of the augmented reality device according to the positional relationship of the second user object relative to the first user object. The information of the first user object is displayed in full through its video frame, and the information of the second user object is displayed selectively through its avatar, thereby realizing object interaction between the real world and the virtual space and displaying more information.
Drawings
FIG. 1 is an application environment diagram of a screen display method in one embodiment;
FIG. 2 is a flow chart of a method for displaying images according to an embodiment;
FIG. 3 is a schematic diagram of an external appearance of an augmented reality device in one embodiment;
FIG. 4 is a schematic diagram of an interface of an avatar in one embodiment;
FIG. 5 is a schematic diagram of an interface of an avatar in another embodiment;
FIG. 6 is a schematic diagram of a system architecture in one embodiment;
FIG. 7 is a schematic diagram of an avatar creation interface in one embodiment;
FIG. 8 is a schematic diagram of avatar actions in one embodiment;
FIG. 9 is a schematic diagram of a frame display device according to an embodiment;
FIG. 10 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The picture display method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server. It will be appreciated that the present scheme may also be implemented solely based on the terminal 102.
The terminal 102 may be, but is not limited to, any of various augmented reality devices and their corresponding video acquisition devices. These may be personal computers, notebook computers, smart phones, tablet computers, camera-equipped internet-of-things devices, and portable wearable devices; the camera-equipped internet-of-things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle-mounted devices, or other devices carrying cameras. A camera-carrying portable wearable device may be a smart watch, a smart bracelet, a head-mounted device, and so on; typical head-mounted devices include the reality-augmentation glasses of an augmented reality device, which correspond to VR glasses, AR glasses, XR glasses, or other categories of displays. The server 104 may be implemented as a stand-alone server, a server cluster composed of multiple servers, or a cloud computing platform.
In one embodiment, as shown in fig. 2, a picture display method is provided. The method is described by taking as an example its application to the augmented reality device of the first user object in fig. 1, and comprises the following steps:
Step 202: collect information from the real environment to obtain control data.
The real environment is the physical environment in which each user's real object is located; it may be an office scene, a lecture scene, a conference scene, or an environment delimited by other user behavior events. By collecting information from the real environment, the real environments of different user objects can be associated with one another and with the virtual environment of the virtual space, so that the two scenes of the real world and the virtual space are connected based on the control data.
The control data are used to control the video acquisition devices in the area where the augmented reality device is located, collecting information about one or more user objects to obtain object information for each user object. It can be appreciated that, because the control data are collected by the augmented reality device, control can be exercised over the area covered by the display of the augmented reality device, so that the video acquisition device in the corresponding area is controlled to collect video data, yielding the video data that the display of the augmented reality device needs to present.
Unlike traditional schemes, in which an augmented reality device mainly involves a display together with one or more accessories such as data gloves, force-feedback devices, microphones and earphones, and accessories such as the data gloves input and output data based on the space presented by the display, the video acquisition device here is an accessory independent of the augmented reality device. Notably, the angle from which the video acquisition device captures video is completely different from the angle at which the display of the augmented reality device presents data; if images were captured blindly with the video acquisition device, the data could not be applied to the display. Instead, the control data connecting the two scenes of the real world and the virtual space control the video acquisition device as it collects object information, so that the information of different user objects is compatible with both scenes, forming stereoscopic video data.
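To make the role of the control data concrete, the following Python sketch shows how control data derived from the augmented reality device might steer an external video acquisition device toward the area covered by the display. The ControlData fields and the command dictionary are illustrative assumptions; the patent does not define a concrete data format.

```python
from dataclasses import dataclass

@dataclass
class ControlData:
    """Control data derived from the augmented reality device (a sketch;
    field names are illustrative, not the patent's data format)."""
    position: tuple   # first user object position in the real environment
    yaw: float        # first user object orientation, radians
    fov: float        # angular range covered by the AR display, radians

def capture_command(ctrl: ControlData) -> dict:
    """Translate control data into a command for the external video
    acquisition device so its frames match the display's viewpoint area."""
    return {
        "aim_at": ctrl.position,
        "cover_heading": ctrl.yaw,
        "cover_angle": ctrl.fov,
    }

cmd = capture_command(ControlData(position=(1.0, 0.0, 2.0), yaw=0.3, fov=1.2))
```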
Step 204: acquire, according to the control data, a video frame of the first user object captured by the video acquisition device, and acquire, according to the control data, an avatar of the second user object displayed correspondingly to the first user object.
The first user object and the second user object are different real objects, and the avatar of the second user object is displayed corresponding to the first user object, so that the first user object and the second user object can interact at least across space. Optionally, the first user object is at least one user object that displays at least its own video frame, the second user object is at least one user object that displays at least its own avatar, and both are displayed by the display based on virtual space position.
Because a corresponding display relation exists between the first user object and the second user object, and this relation is determined according to the control data, the relation is established by mapping the real scene of the first user object to the first virtual space position in the virtual scene, selecting the second user objects within the corresponding range, and then displaying the avatars of those second user objects.
The video frames of the first user object comprise at least an object environment video frame and a local video frame of the first user object. The object environment video frame contains the first user object's information in physical dimensions such as expression and action, and is used to control the avatar of the first user object; the local video frame collects information about a target part or another local area of the real environment, so that the position and orientation of the first user object can be computed back more accurately.
For example: the method comprises the steps that respective object information of a user object A, a user object B, a user object C and a user object D is acquired for a human body or a human body part respectively, the user object A is displayed through a video frame of the user object A, the user object B is displayed through a video frame of the user object B, the user object C is displayed through an avatar of the user object C, and the user object D is displayed through an avatar of the user object D, if the user object A is taken as a first user object, the user object C and the user object D are respectively related to a second user object, namely the first user object, of the user object A, and the user object displayed corresponding to the user object A belongs to the first user object; based on a similar principle, if the user object B is taken as the first user object, the user object displayed corresponding to the user object B belongs to the second user object of the first user object, namely the user object B, from among the user object C and the user object D.
It is to be appreciated that the first user object may also have an avatar of its own, i.e., the avatar of the first user object, and the second user object may also have a video frame of its own, i.e., the video frame of the second user object.
In one embodiment, the method further comprises: displaying an avatar of the first user object on a display of the augmented reality device according to a first virtual space position of the first user object in the virtual space; and controlling the avatar of the first user object to move according to the action information extracted from the video frame of the first user object in the process of displaying the avatar of the first user object.
The first virtual space position is the position mapped into the virtual space from the position of the first user object in the real environment. By placing the avatar of the first user object at the first virtual space position, an interconnected device that interacts with the first user object can conveniently receive the avatar of the first user object and display the first user object's avatar information on its own display for interaction.
The interconnected device mainly refers to an augmented reality device, other than that of the first user object, which displays the first user object. For the interconnected device, the first user object in this embodiment is a candidate user object; the interconnected device needs to select its own second user object from the candidate user objects according to its own control data, thereby realizing object interconnection across different augmented reality devices.
The avatar of the first user object moves according to the action information extracted from the first video frame. On one hand, the avatar can convey the information the first user object intends to convey; on the other hand, the range of information to be conveyed can be controlled adaptively, so that the information obtained by the interconnected device is exactly the information the first user object intends to convey, protecting the user's privacy.
In one embodiment, the method further comprises: mapping, according to a second virtual space position of the second user object in the virtual space and according to the structural position relation between a target part of the second user object and the second user object, to obtain a second video frame position; displaying the target part of the second user object on the display of the augmented reality device at the second video frame position; and controlling, in the process of displaying the avatar of the second user object, the avatar of the second user object to move according to the action information extracted from the video frame of the second user object.
The second virtual space position is the position mapped into the virtual space from the position of the second user object in the real environment. Placing the avatar of the second user object at the second virtual space position makes it convenient for the augmented reality device of the first user object to receive the avatar of the second user object and to display the second user object's avatar information through its own display, so that the interaction between the first user object and the second user object can be presented.
The avatar of the second user object moves according to the action information extracted from the second video frame. On one hand, the avatar can convey the information the second user object intends to convey; on the other hand, the range of information to be conveyed can be controlled adaptively, so that the information obtained by the augmented reality device of the first user object is exactly the information the second user object intends to convey, protecting the user's privacy.
The structural position of the second user object is the position at which each part of the second user object is arranged in the virtual space. Optionally, the structural position of the second user object includes the position of the head in the picture, the position of the upper body in the picture, the position of the hands in the picture, or other positions. When the target part of the second user object is the head, the second video frame position is located at the head position of the picture while the display presents the picture; when the target part of the second user object is the head and upper body, the second video frame position is located at the head and upper body positions of the picture while the display presents the picture, and so on.
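As an illustration of the structural position mapping described above, the following Python sketch offsets a user object's virtual space position by an assumed per-part displacement to obtain the video frame position; the part names and offset values are hypothetical, not taken from the patent.

```python
import numpy as np

# Assumed per-part offsets (virtual-space units) from an object's anchor
# position; the patent does not specify concrete values.
PART_OFFSETS = {
    "head": np.array([0.0, 1.6, 0.0]),
    "upper_body": np.array([0.0, 1.1, 0.0]),
    "hand": np.array([0.4, 1.0, 0.2]),
}

def video_frame_position(virtual_space_pos, target_part):
    """Map a user object's virtual space position to the position at which
    its target-part video frame is displayed."""
    return np.asarray(virtual_space_pos, dtype=float) + PART_OFFSETS[target_part]

pos = video_frame_position([2.0, 0.0, 3.0], "head")  # second video frame position
```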
The second video frame may be identical in data structure to the first video frame; the difference is that the first video frame is obtained by collecting information about the first user object while the second video frame is obtained by collecting information about the second user object, and the first video frame is located at the first video frame position in the display while the second video frame is located at the second video frame position in the same display.
Displaying the target part of the second user object on the display of the augmented reality device ensures that the target part of the second user object can present more information and, from another angle, that the information the second user object needs to convey is fully displayed.
Step 206: when the video frame of the first user object and the avatar of the second user object are correspondingly distributed to the augmented reality device for display, display the video frame of the first user object and the avatar of the second user object on the display of the augmented reality device according to the positional relationship of the second user object relative to the first user object.
The corresponding distribution means that the video frame of the first user object and the avatar of the second user object are distributed at least to the augmented reality device of the first user, to be displayed at preset positions by the augmented reality device of the first user; the video frame of the first user object and the avatar of the second user object may also be distributed to the augmented reality device of the second user object, to be displayed at preset positions by that device. In addition, when the first user object acts as a second user object for its interconnected device, display proceeds in the same manner.
The positional relationship of the second user object relative to the first user object is the virtual-space positional relationship of the first user object and the second user object in the virtual space; this relationship can be obtained based on the control data, so that the relative positions of the first user object and the second user object are reflected more accurately.
The display of the augmented reality device comprises the left-eye display lens and the right-eye display lens of the reality-augmentation glasses; the left-eye and right-eye display lenses each project and display their respective data to be displayed, and the data to be displayed form a stereoscopic view. The stereoscopic video includes at least the video frame of the first user object and the avatar of the second user object, and may further include at least one of the video frame of the second user object and the avatar of the first user object.
In one embodiment, displaying the video frame of the first user object and the avatar of the second user object on the display of the augmented reality device includes: receiving a stereoscopic video, the stereoscopic video being information for showing a plurality of objects and comprising at least the avatar of the second user object displayed correspondingly to the first user object, and possibly further comprising at least one of the following: a video frame of the second user object and an avatar of the first user object; generating a left-eye dynamic texture and a right-eye dynamic texture based on the stereoscopic video; mapping the left-eye dynamic texture to a left mesh and the right-eye dynamic texture to a right mesh; taking the left mesh as the left-eye data to be displayed in the left-eye virtual camera and the right mesh as the right-eye data to be displayed in the right-eye virtual camera; and projecting the left-eye data to the left-eye display lens of the augmented reality device and the right-eye data to the right-eye display lens, so as to realize stereoscopic viewing of the stereoscopic video.
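A minimal Python sketch of this left-eye/right-eye pipeline follows, assuming the stereoscopic video arrives as side-by-side frames; the class and method names are illustrative stand-ins for an engine's texture, mesh and virtual camera objects, not an API the patent defines.

```python
import numpy as np

class EyePipeline:
    """One eye's path: dynamic texture -> mesh -> virtual camera -> display lens."""

    def __init__(self, name):
        self.name = name
        self.mesh_texture = None  # dynamic texture currently mapped onto the mesh

    def update(self, texture):
        self.mesh_texture = texture

    def project(self):
        # Stand-in for rendering the mesh with this eye's virtual camera and
        # projecting the result onto the corresponding display lens.
        return self.name, self.mesh_texture.shape

def show_stereo_frame(frame):
    """Split a side-by-side stereoscopic frame into per-eye dynamic textures."""
    left, right = EyePipeline("left"), EyePipeline("right")
    half = frame.shape[1] // 2
    left.update(frame[:, :half])    # left-eye dynamic texture -> left mesh
    right.update(frame[:, half:])   # right-eye dynamic texture -> right mesh
    return left.project(), right.project()

stereo = np.zeros((1920, 2 * 1920, 3), dtype=np.uint8)  # e.g. a 3840x1920 source
show_stereo_frame(stereo)
```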
In another embodiment, the execution of the method is discussed through an application scenario. The method is applied to an augmented reality device; the video acquisition device is arranged outside the augmented reality device; an interconnected device exists among the augmented reality devices located in the same scene as the video acquisition device, and the interconnected device collects the information of the second user object. The method specifically includes: collecting information from the real environment of the first user object to obtain the control data required by the augmented reality device used by the first user object; collecting, according to the control data, information about the overall action of the first user object, the target part of the first user object, and the real environment in which the first user object is located, to obtain the information of the first user object; and, when the information of the second user object and the information of the first user object are distributed to the augmented reality device of the first user object for display, displaying, by the augmented reality device of the first user object, the first user object, the target part and the second user object respectively in a video of the real environment according to the positional relationship of the second user object and the target part of the first user object relative to the first user object.
Optionally, the video frame of the first user object, the avatar of the first user object, the video frame of the second user object and the avatar of the second user object are in the same metaverse, and different user objects may interact through at least one of their video frames and avatars.
In the picture display method, information is collected from the real environment to obtain control data; a video frame of the first user object captured by the video acquisition device is acquired according to the control data, and an avatar of the second user object displayed correspondingly to the first user object is acquired according to the control data, so that the two scenes of the real world and the virtual space are connected based on the control data; with the real space and the virtual space thus linked, the information of the two kinds of user objects is collected by a video acquisition device external to the augmented reality device, which increases the amount of information collected and allows the added information to be fused organically across the two scenes. When the video frame of the first user object and the avatar of the second user object are correspondingly distributed to the augmented reality device for display, the video frame of the first user object and the avatar of the second user object are displayed on the display of the augmented reality device according to the positional relationship of the second user object relative to the first user object. The information of the first user object is displayed in full through its video frame, and the information of the second user object is displayed selectively through its avatar, thereby realizing object interaction between the real world and the virtual space and displaying more information.
In one embodiment, the augmented reality device includes a vision sensor and an inertial sensor. Collecting information from the real environment to obtain control data includes: fusing, according to the sensor coupling relation between the vision sensor and the inertial sensor, the data acquired respectively by the two sensors to obtain fused sensor data; and determining the position and orientation of the first user object in the real environment according to the fused sensor data.
Correspondingly, acquiring the video frame of the first user object captured by the video acquisition device according to the control data, and acquiring the avatar of the second user object displayed correspondingly to the first user object according to the control data, includes: acquiring information collected by the first video acquisition device according to the first user object position and the first user object orientation, to obtain a video frame of the first user object; mapping the first user object position to a first virtual space position in the virtual space, and receiving, within a preset range along the first user object orientation, avatars in the virtual space to obtain the avatar of the second user object; the avatar of the second user object is generated based on the second video frame captured by the second video acquisition device for the second user object.
Fusing the data acquired by the vision sensor and the inertial sensor according to their sensor coupling relation allows the two kinds of data, by virtue of their respective characteristics, to correct and complement each other, so that the fused sensor data are more accurate and the position and orientation of the first user object are computed more accurately.
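As a simple illustration of such fusion, the Python sketch below blends a drift-free but noisy visual pose estimate with a smooth but drifting pose integrated from inertial data; the complementary-filter weighting is an assumed stand-in for the tightly coupled algorithm the patent refers to.

```python
import numpy as np

def fuse_poses(visual_pos, visual_yaw, imu_pos, imu_yaw, alpha=0.8):
    """Complementary-filter style fusion (illustrative only).

    visual_*: pose estimated from the vision sensor (drift-free, noisy/slow)
    imu_*:    pose integrated from the inertial sensor (smooth, drifts)
    alpha:    trust placed in the visual estimate
    """
    fused_pos = alpha * np.asarray(visual_pos) + (1 - alpha) * np.asarray(imu_pos)
    # Blend yaw angles on the unit circle to avoid wrap-around at +/-180 degrees.
    fused_yaw = np.arctan2(
        alpha * np.sin(visual_yaw) + (1 - alpha) * np.sin(imu_yaw),
        alpha * np.cos(visual_yaw) + (1 - alpha) * np.cos(imu_yaw),
    )
    return fused_pos, fused_yaw  # first user object position and orientation

pos, yaw = fuse_poses([1.0, 0.0, 2.0], 0.10, [1.1, 0.0, 1.9], 0.15)
```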
The first user object position and the first user object orientation are the control data used for picture display in the real environment. Since the first user object position can be mapped into the virtual space, the above first virtual space position of the first user object in the virtual space can be obtained, while the first user object orientation is used to screen the second user objects out of the candidate user objects.
The first video acquisition device and the second video acquisition device are the video acquisition devices of different augmented reality devices; in the real environment the two may be located in the same area or in different areas. The information collected by the first video acquisition device is primarily video frames of the first user object, and the information collected by the second video acquisition device is primarily video frames of the second user object. It can be appreciated that the first and second video acquisition devices may each comprise a plurality of capture devices forming a stereoscopic video acquisition device, and that the control data of the second video acquisition device may differ from the control data described above, the control data of the first user object being used to determine the picture presented by the display.
Because the first user object position and the area indicated by the first user object orientation define a range, the first video acquisition device can capture the first user object from the front or the side of its orientation; the accuracy requirement is not high, so data-processing efficiency can be ensured. Meanwhile, receiving the avatars of the virtual space within the preset range along the first user object orientation amounts to collecting data within a specific range from the first user object's orientation in the real environment, forming object interconnection within a given application scenario and accurately controlling the amount of information to be displayed. The application scenario may be a conference, a classroom, or another scene of multi-object interaction.
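The selection of second user objects within a preset range along the orientation can be pictured with the following Python sketch; the distance limit and viewing half-angle are assumed parameters, not values from the patent.

```python
import numpy as np

def select_second_objects(first_pos, first_yaw, candidates,
                          max_dist=10.0, half_angle=np.radians(45)):
    """Pick candidate user objects inside a distance/viewing-angle range.

    candidates: dict of object id -> virtual-space position (x, z).
    Returns the ids whose avatars should be received and displayed.
    """
    forward = np.array([np.sin(first_yaw), np.cos(first_yaw)])
    selected = []
    for obj_id, pos in candidates.items():
        offset = np.asarray(pos) - np.asarray(first_pos)
        dist = np.linalg.norm(offset)
        if dist == 0 or dist > max_dist:
            continue
        # Angle between the first user object's orientation and the candidate.
        angle = np.arccos(np.clip(np.dot(forward, offset / dist), -1.0, 1.0))
        if angle <= half_angle:
            selected.append(obj_id)
    return selected

ids = select_second_objects((0.0, 0.0), 0.0, {"C": (1.0, 4.0), "D": (8.0, -2.0)})
# -> ["C"]: C lies within the viewing cone, D does not
```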
In another embodiment, the first video frame is a partial region of a first-person video frame, where the first-person video frame refers to a video frame of the real environment faced by the first user object, i.e., the environment as observed by the first user object. The information collected by the second video acquisition device is mainly the information of the second user object, called a third-person video frame; the third-person video frame refers to a video frame that faces the second user object and observes the second user object's facial expressions and body movements. It can be appreciated that the first and second video acquisition devices may each comprise a plurality of capture devices forming a stereoscopic video acquisition device, and that the control data collected by the second video acquisition device may differ from the control data described above, the control data of the first user object being used to compute the first user object's position and orientation more accurately.
In one embodiment, acquiring information collected by the first video acquisition device according to the first user object position and the first user object orientation, to obtain a video frame of the first user object, includes: acquiring, when the first user object wears the augmented reality device, an object environment video frame within a preset range along the first user object orientation at the first user object position; acquiring, when the object environment video frame exists, a target part video frame captured by the video acquisition device for the position of the first user object; wherein the target part video frame is added to the object environment video frame.
In this way, the object environment video frame is collected first, and the whole information for the augmented reality device is then built from the local video frame, i.e., the collected target part video frame. The object environment video frame collected within the preset range along the first user object orientation lets the first user object display the background of its information; and, when the object environment video frame exists, the target part video frame obtained for the position of the first user object displays the local information of the target part more accurately.
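The addition of the target part video frame to the object environment video frame can be sketched as a simple composite; in this Python illustration the placement coordinates are an assumed layout rather than anything specified in the patent.

```python
import numpy as np

def add_target_part(env_frame, part_frame, top_left):
    """Overlay a target-part video frame onto the object environment frame.

    env_frame:  H x W x 3 array for the object environment video frame
    part_frame: h x w x 3 array captured for the first user object's part
    top_left:   (row, col) where the part frame is added (an assumed layout)
    """
    out = env_frame.copy()
    r, c = top_left
    h, w = part_frame.shape[:2]
    out[r:r + h, c:c + w] = part_frame  # simple replacement composite
    return out

env = np.zeros((720, 1280, 3), dtype=np.uint8)
part = np.full((180, 320, 3), 255, dtype=np.uint8)
frame = add_target_part(env, part, (60, 80))
```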
In one embodiment, displaying the video frame of the first user object and the avatar of the second user object on the display of the augmented reality device according to the positional relationship of the second user object relative to the first user object comprises: determining a first virtual space position of the first user object in the virtual space; selecting, based on the first virtual space position and according to the virtual space positional relation between the second user object and the first user object, a second virtual space position of the second user object in the virtual space; displaying the video frame of the first user object on the display of the augmented reality device according to the first virtual space position; and displaying the avatar of the second user object on the display according to the second virtual space position.
In one possible embodiment, determining the first virtual space position of the first user object in the virtual space comprises: determining a first real-environment position of the first user object in the real environment according to the control data; associating, according to the ID of the first user object, the first real-environment position with the initial position of the first user object in the virtual space; and adjusting the initial position of the first user object according to the first real-environment position to obtain the first virtual space position. In this way, the initial position of the first user object in the virtual space is regulated through the first real-environment position, so that the first virtual space position and the first real-environment position have a dynamically associated adjustment relation. The first user object in this embodiment can thus be located by the interconnected device, which then displays the first user object's information according to the result of that localization. It will be appreciated that, for the interconnected device, the first user object may be its second user object or a candidate user object for its display process.
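A minimal Python sketch of this dynamically associated adjustment follows: the initial virtual space position registered under an object's ID is shifted by the object's motion in the real environment. The class and its bookkeeping are illustrative assumptions.

```python
import numpy as np

class VirtualSpace:
    """Tracks each user object's virtual space position by its ID (a sketch)."""

    def __init__(self):
        self.initial = {}  # object id -> initial virtual-space position
        self.origin = {}   # object id -> real-environment reference position

    def register(self, obj_id, initial_pos, real_pos):
        self.initial[obj_id] = np.asarray(initial_pos, dtype=float)
        self.origin[obj_id] = np.asarray(real_pos, dtype=float)

    def first_virtual_position(self, obj_id, real_pos):
        # Adjust the initial position by the object's real-environment motion,
        # keeping the virtual position dynamically associated with reality.
        delta = np.asarray(real_pos, dtype=float) - self.origin[obj_id]
        return self.initial[obj_id] + delta

space = VirtualSpace()
space.register("A", initial_pos=[5.0, 0.0, 5.0], real_pos=[0.0, 0.0, 0.0])
p = space.first_virtual_position("A", real_pos=[1.0, 0.0, 0.5])  # [6.0, 0.0, 5.5]
```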
The virtual space positional relation between the second user object and the first user object is used to select among candidate virtual space positions in the virtual space according to the control data. Optionally, the virtual space positional relation refers to a condition expressed by a selection parameter determined from the association relation or the distance between a candidate virtual space position and the first virtual space position; the second virtual space position is then determined among the candidate virtual space positions by the condition the selection parameter satisfies.
Selecting the second virtual space position according to the first virtual space position is a process of purposefully choosing, with the virtual space as a bridge, which second user object to display, so that the avatar of the second user object is combined with the video frame of the first user object and the interconnection between the two user objects can present more information. Here, presenting more information means that the first user object in the real environment is displayed through its video frame so that the information it needs to present is displayed in full, while displaying the second user object through its avatar fully respects the information the second user object has chosen to share and complements the display of the first user object's information.
In one embodiment, displaying the video frame of the first user object on the display of the augmented reality device according to the first virtual space position comprises: mapping the first virtual space position according to the structural position relation between the target part of the first user object and the first user object, to obtain a first video frame position; and displaying, at the first video frame position, the target part of the first user object on the display of the augmented reality device.
The structural position of the first user object is the position at which each part of the first user object is arranged in the virtual space. Optionally, the structural position of the first user object includes the position of the head in the picture, the position of the upper body in the picture, the position of the hands in the picture, or other positions. When the target part of the first user object is the head, the first video frame position is located at the head position of the picture while the display presents the picture; when the target part of the first user object is the head and upper body, the first video frame position is located at the head and upper body positions of the picture while the display presents the picture, and so on.
The second video frame may be identical in data structure to the first video frame; the difference is that the first video frame is obtained by collecting information about the first user object while the second video frame is obtained by collecting information about the second user object, and the first video frame is located at the first video frame position in the display while the second video frame is located at the second video frame position in the same display.
Displaying the target part of the first user object on the display of the augmented reality device ensures that the target part of the first user object can present more information and that the information the first user object needs to convey is fully displayed. Moreover, since the environment video frames of the first user object are not displayed or processed in full, the security and privacy of the first user object can be ensured to a certain extent.
In one embodiment, the description proceeds from the system components visible to a user. In application scenarios such as a video-stream conference system, video can be seen only through a computer or mobile-phone screen: no three-dimensional avatar can be seen, there is a gap between the real environment and the virtual space, effective interaction among remote participants is lacking, no stereoscopic video can be seen, and no stereoscopic picture conforming to the stereoscopic vision of human eyes can be presented.
The framework involves four components. The first component is a number of distributed mobile-terminal XR glasses devices, i.e., the augmented reality devices. The second component is a number of distributed mobile-terminal stereoscopic video acquisition devices, i.e., the video acquisition devices. The third component is a server and service program of a cloud network architecture, used for interaction and data transmission; it may also be a distributed system, a network or another terminal, or one of the devices of the first and second components. The fourth component is a terminal or server running the mobile-terminal rendering and interaction program; it may be a device coupled to one of the first and second components. The first and second components are distributed in different places; the fourth component performs graphic rendering and is responsible for presentation and interaction; under the overall coordination of the fourth component, the first, second and fourth components construct an interconnected metaverse space, a social space.
The first component, the augmented reality device shown in fig. 3, is configured with a vision sensor and an inertial sensor and, together with a tightly coupled sensor algorithm, achieves spatial localization capability, so that the wearer's environment information is perceived in real time through the corresponding control data and the spatial position and orientation of the wearer, the first user object, are determined. Moreover, the vision sensor of the augmented reality device may be a subset of the first video acquisition device.
Optionally, as the entrance from the real environment into the virtual environment, the XR glasses of the augmented reality device are further configured with high-transparency, high-definition 4K semi-reflective, semi-transparent optical see-through display lenses with a resolution up to 3840x1920, so that the user can see the augmented-reality virtual targets while seeing the real environment. The virtual targets include the user's (the first user object's) own avatar in the social space and the corresponding first video frame, as well as the avatar of at least one social partner (a second user object) in the social space and the second video frame each social partner has chosen to display. The first and second video frames are displayed simultaneously with the three-dimensional avatars in the three-dimensional virtual space in the form of three-dimensional meshes, fused and superimposed on the real environment to form the augmented reality effect. The first and second video frames are optionally obtained through the stereoscopic video acquisition devices, so that together with the three-dimensional virtual persons they can present a stereoscopic visual effect through the display lenses. In addition, the XR glasses offer adjustable myopia correction, a high sense of realism, and comfort during long-term wear, so that the corresponding information is fully presented.
The avatar is shown in fig. 4 and may be a partial avatar; fig. 5 shows the avatar of the first user object and the first video frame performing a waving motion.
The second component in the architecture, i.e., the first video acquisition device and/or the second video acquisition device, may include a number of distributed stereoscopic video acquisition devices equipped with cameras for high-definition stereoscopic video capture; facing the participant, they collect the participant's voice and video-stream information in real time and upload it to the cloud for processing.
The third component in the architecture may be the cloud network architecture and service program deployed on a cloud computing platform, or an end-cloud-integrated component combined with the first and second components. The third component runs cloud processing threads, handling multi-user registration, management, video-stream pushing, distribution, data transmission, and AI work such as extraction of human body actions, and transmits the processed information to the different XR glasses users, forming a cyclic interaction. The data transmission is shown in fig. 6.
The fourth component in the architecture may be an external mobile phone or a built-in chip that exchanges data with the first and second components, referred to for short as the mobile terminal of the rendering and interaction program. The mobile terminal handles user-side registration and login, metaverse access, device-side rendering and user interaction, so that multiple users in a classroom can interact face to face through their avatars; meanwhile, where distributed stereoscopic video acquisition devices are chosen, each user can see the real stereoscopic video of remote users in the metaverse space, and the corresponding virtual person can mimic the real-time actions of the user object, i.e., of the real person. The mobile terminal completes the graphic rendering functions, including real-time animation of human body actions; in the glasses a user can see the avatars of classmates and teachers moving with the real-time, real actions of the real persons, including social actions such as raising a hand, waving and shaking hands. The process of generating the avatar is shown in fig. 7, and the interactive relationship of the actions is shown in fig. 8.
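The way extracted action information drives an avatar can be sketched as below; the joint dictionary format and the gesture labels are assumptions for illustration, not the patent's data model.

```python
# Social actions mentioned in the text; the label strings themselves are assumed.
SOCIAL_ACTIONS = {"raise_hand", "wave", "handshake"}

def apply_action(avatar_pose, extracted):
    """Update an avatar pose from action info extracted from video frames.

    avatar_pose: joint name -> (x, y, z) rotation in degrees
    extracted:   {"joints": {...}, "gesture": "wave" | ...} from the cloud
    """
    pose = dict(avatar_pose)
    pose.update(extracted.get("joints", {}))   # mimic the real-time joints
    gesture = extracted.get("gesture")
    action = gesture if gesture in SOCIAL_ACTIONS else None
    return pose, action                        # pose and action drive the avatar

pose, action = apply_action(
    {"right_elbow": (0.0, 0.0, 0.0)},
    {"joints": {"right_elbow": (0.0, 0.0, 90.0)}, "gesture": "wave"},
)
```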
In one embodiment, the metaverse social space uses XR glasses on the device side and a cloud platform on the cloud side, forming end-cloud integration; as an effective framework for multi-user access to the metaverse social space with real-time interaction, it can be used in, but is not limited to, smart teaching, online meetings, online social networking and other occasions.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in those flowcharts may include several sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; nor is their order of execution necessarily sequential, as they may be performed in turn or alternately with at least part of the other steps or sub-steps or stages.
Based on the same inventive concept, an embodiment of the application further provides a picture display device for implementing the picture display method mentioned above. The implementation of the solution provided by the device is similar to that described in the method above, so for the specific limitations of the one or more embodiments of the picture display device provided below, reference may be made to the limitations of the picture display method above, which are not repeated here.
In one embodiment, as shown in fig. 9, there is provided a picture display device including:
the control data acquisition module 902 is configured to acquire information from a real environment to obtain control data;
the object information acquisition module 904 is configured to acquire a video frame of a first user object collected by the video acquisition device according to the control data, and to acquire, according to the control data, the avatar of a second user object to be displayed correspondingly to the first user object;
and the object display module 906 is configured to display, when the video frame of the first user object and the avatar of the second user object are correspondingly distributed to the augmented reality device for display, the video frame of the first user object and the avatar of the second user object on a display of the augmented reality device according to the positional relationship of the second user object relative to the first user object.
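The three modules above could be wired together as in the following non-limiting sketch; the module numbering follows fig. 9, while the method names are illustrative assumptions:

```python
class PictureDisplayDevice:
    """Sketch of the picture display device of fig. 9."""
    def __init__(self, control_module, info_module, display_module):
        self.control_data_acquisition = control_module      # module 902
        self.object_information_acquisition = info_module   # module 904
        self.object_display = display_module                # module 906

    def run_once(self):
        # 902: collect information from the real environment.
        control_data = self.control_data_acquisition.collect()
        # 904: video frame of the first user object, avatar of the second.
        video_frame, avatar = self.object_information_acquisition.acquire(control_data)
        # 906: display both according to the objects' positional relationship.
        self.object_display.show(video_frame, avatar)
```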
In one embodiment, the augmented reality device comprises a visual sensor and an inertial sensor; the control data acquisition module 902 is configured to:
fusing, according to the sensor coupling relationship between the visual sensor and the inertial sensor, the data respectively acquired by the visual sensor and the inertial sensor to obtain fused sensor data;
determining a first user object position and a first user object orientation in the real environment according to the fused sensor data.
The object information acquisition module 904 is configured to:
acquiring the information collected by a first video acquisition device for the first user object position and the first user object orientation to obtain a video frame of the first user object;
mapping the first user object position to a first virtual space position in a virtual space, and retrieving the avatars of the virtual space within a preset range along the first user object orientation to obtain the avatar of the second user object; wherein the avatar of the second user object is generated based on a second video frame acquired by a second video acquisition device for the second user object.
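The embodiments do not fix a fusion algorithm for the visual and inertial sensor data; a complementary filter is one common choice, sketched below under assumed data shapes (a 3-vector position and a scalar yaw), with the blend factor alpha being an illustrative parameter:

```python
import numpy as np

def fuse_sensors(visual_pos, visual_yaw, imu_accel, imu_gyro_z,
                 prev_pos, prev_vel, prev_yaw, dt, alpha=0.98):
    """Toy visual-inertial fusion yielding the first user object
    position and orientation."""
    prev_pos = np.asarray(prev_pos, dtype=float)
    prev_vel = np.asarray(prev_vel, dtype=float)
    imu_accel = np.asarray(imu_accel, dtype=float)
    # Dead-reckon from the inertial sensor ...
    vel = prev_vel + imu_accel * dt
    pos_imu = prev_pos + vel * dt
    yaw_imu = prev_yaw + imu_gyro_z * dt
    # ... then correct the drift with the absolute visual measurement.
    pos = alpha * pos_imu + (1.0 - alpha) * np.asarray(visual_pos, dtype=float)
    yaw = alpha * yaw_imu + (1.0 - alpha) * visual_yaw
    return pos, vel, yaw
```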
In one embodiment, the object information acquisition module 904 is configured to:
acquiring, in the case where the first user object wears the augmented reality device, an object environment video frame within a preset range along the first user object orientation at the first user object position;
acquiring, in the case where the object environment video frame exists, a target part video frame collected by the video acquisition device for the first user object position;
wherein the target part video frame is added to the object environment video frame.
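A minimal sketch of adding the target part video frame to the object environment video frame, assuming both are NumPy image arrays and that the pasted region fits inside the environment frame (the actual compositing method is left open by the embodiments):

```python
import numpy as np

def add_target_part(env_frame, part_frame, top_left):
    """Paste the target part video frame into the object environment
    video frame at a pixel offset."""
    y, x = top_left
    h, w = part_frame.shape[:2]
    out = env_frame.copy()
    out[y:y + h, x:x + w] = part_frame  # assumes the part fits in bounds
    return out
```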
In one embodiment, the object display module 906 is configured to:
determining a first virtual space position of the first user object in a virtual space;
determining, based on the first virtual space position and the virtual space positional relationship between the second user object and the first user object, a second virtual space position of the second user object in the virtual space;
displaying the video frame of the first user object on a display of the augmented reality device according to the first virtual spatial position;
and displaying the avatar of the second user object on the display according to the second virtual space position.
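As a toy example of this positioning step, the second virtual space position can be derived from the first virtual space position plus the stored spatial relation between the two user objects; representing that relation as a displacement vector is an assumption of this sketch:

```python
import numpy as np

def place_objects(first_pos, relative_offset):
    """Derive the second user object's virtual space position from the
    first user object's position and their positional relationship."""
    first_pos = np.asarray(first_pos, dtype=float)
    second_pos = first_pos + np.asarray(relative_offset, dtype=float)
    return first_pos, second_pos

# Example: the second user object sits 1.5 m to the right of the first.
p1, p2 = place_objects([0.0, 0.0, 0.0], [1.5, 0.0, 0.0])
```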
In one embodiment, the object display module 906 is configured to:
mapping the first virtual space position according to the structural positional relationship between the target part of the first user object and the first user object to obtain a first video frame position;
and displaying the target part of the first user object on a display of the augmented reality device at the first video frame position.
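The structural mapping from a user object to its target part can likewise be sketched as a fixed offset from the object's anchor; the 1.6 m head-height offset below is an illustrative assumption, not a value from the embodiments:

```python
def target_part_position(body_pos, structural_offset=(0.0, 1.6, 0.0)):
    """Map a user object anchor in virtual space to the video frame
    position of a target part (here, a head-height offset)."""
    return tuple(b + o for b, o in zip(body_pos, structural_offset))
```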
In one embodiment, the object display module 906 is configured to:
displaying an avatar of the first user object on a display of the augmented reality device according to a first virtual space position of the first user object in a virtual space;
and controlling the avatar of the first user object to move according to the action information extracted from the video frame of the first user object in the process of displaying the avatar of the first user object.
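As a sketch of this control step, the avatar's joint targets can be pulled toward the keypoints extracted from the user object's video frame; the exponential smoothing is an assumption of this sketch, since the embodiments only require that the avatar follow the extracted actions:

```python
import numpy as np

def drive_avatar(avatar_joints, keypoints, smoothing=0.5):
    """Update avatar joint targets from keypoints extracted from the
    video frame, with simple exponential smoothing."""
    updated = {}
    for name, target in keypoints.items():
        prev = avatar_joints.get(name, np.asarray(target, dtype=float))
        updated[name] = (smoothing * np.asarray(prev, dtype=float)
                         + (1 - smoothing) * np.asarray(target, dtype=float))
    return updated
```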
In one embodiment, the object display module 906 is configured to:
mapping the second virtual space position of the second user object in the virtual space according to the structural positional relationship between the target part of the second user object and the second user object to obtain a second video frame position;
displaying the target part of the second user object on a display of the augmented reality device at the second video frame position;
and controlling the avatar of the second user object to move according to the action information extracted from the video frame of the second user object in the process of displaying the avatar of the second user object.
Each of the above modules in the picture display device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or may be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure may be as shown in fig. 10. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; wireless communication may be implemented through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a picture display method. The display unit of the computer device is used to form a visual picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display or an electronic ink display. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, or mouse.
It will be appreciated by those skilled in the art that the structure shown in fig. 10 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that all or part of the processes of the methods described above may be implemented by a computer program instructing related hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the method embodiments described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above examples represent only a few embodiments of the present application, and their descriptions, while relatively specific and detailed, are not to be construed as limiting the scope of the application. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A picture display method, the method comprising:
acquiring information from the real environment to obtain control data;
acquiring a video frame of a first user object collected by a video acquisition device according to the control data, and acquiring, according to the control data, an avatar of a second user object to be displayed correspondingly to the first user object;
and displaying, in the case where the video frame of the first user object and the avatar of the second user object are correspondingly distributed to an augmented reality device for display, the video frame of the first user object and the avatar of the second user object on a display of the augmented reality device according to the positional relationship of the second user object relative to the first user object.
2. The method of claim 1, wherein the augmented reality device comprises a visual sensor and an inertial sensor; and the acquiring information from the real environment to obtain control data comprises:
fusing, according to the sensor coupling relationship between the visual sensor and the inertial sensor, the data respectively acquired by the visual sensor and the inertial sensor to obtain fused sensor data;
determining a first user object position and a first user object orientation in the real environment according to the fused sensor data;
and the acquiring of the video frame of the first user object collected by the video acquisition device according to the control data, and the acquiring, according to the control data, of the avatar of the second user object to be displayed correspondingly to the first user object, comprise:
acquiring the information collected by a first video acquisition device for the first user object position and the first user object orientation to obtain a video frame of the first user object;
mapping the first user object position to a first virtual space position in a virtual space, and retrieving the avatars of the virtual space within a preset range along the first user object orientation to obtain the avatar of the second user object; wherein the avatar of the second user object is generated based on a second video frame acquired by a second video acquisition device for the second user object.
3. The method according to claim 2, wherein the acquiring of the information collected by the first video acquisition device for the first user object position and the first user object orientation to obtain the video frame of the first user object comprises:
acquiring, in the case where the first user object wears the augmented reality device, an object environment video frame within a preset range along the first user object orientation at the first user object position;
acquiring, in the case where the object environment video frame exists, a target part video frame collected by the video acquisition device for the first user object position;
wherein the target part video frame is added to the object environment video frame.
4. The method of claim 1, wherein displaying the video frame of the first user object and the avatar of the second user object on the display of the augmented reality device according to the positional relationship of the second user object with respect to the first user object comprises:
determining a first virtual space position of the first user object in a virtual space;
determining, based on the first virtual space position and the virtual space positional relationship between the second user object and the first user object, a second virtual space position of the second user object in the virtual space;
displaying the video frame of the first user object on a display of the augmented reality device according to the first virtual spatial position;
and displaying the avatar of the second user object on the display according to the second virtual space position.
5. The method of claim 4, wherein displaying the video frame of the first user object on the display of the augmented reality device according to the first virtual spatial location comprises:
mapping the first virtual space position according to the structural positional relationship between the target part of the first user object and the first user object to obtain a first video frame position;
and displaying the target part of the first user object on a display of the augmented reality device at the first video frame position.
6. The method according to claim 1, wherein the method further comprises:
displaying an avatar of the first user object on a display of the augmented reality device according to a first virtual space position of the first user object in a virtual space;
and controlling the avatar of the first user object to move according to the action information extracted from the video frame of the first user object in the process of displaying the avatar of the first user object.
7. The method according to claim 1, wherein the method further comprises:
mapping the second virtual space position of the second user object in the virtual space according to the structural positional relationship between the target part of the second user object and the second user object to obtain a second video frame position;
displaying the target part of the second user object on a display of the augmented reality device at the second video frame position;
and controlling the avatar of the second user object to move according to the action information extracted from the video frame of the second user object in the process of displaying the avatar of the second user object.
8. A picture display device, the device comprising:
the control data acquisition module, configured to acquire information from the real environment to obtain control data;
the object information acquisition module, configured to acquire a video frame of a first user object collected by a video acquisition device according to the control data, and acquire, according to the control data, the avatar of a second user object to be displayed correspondingly to the first user object;
and the object display module, configured to display, in the case where the video frame of the first user object and the avatar of the second user object are correspondingly distributed to an augmented reality device for display, the video frame of the first user object and the avatar of the second user object on a display of the augmented reality device according to the positional relationship of the second user object relative to the first user object.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202310615359.9A 2023-05-29 2023-05-29 Picture display method, device, computer equipment and storage medium Pending CN116560511A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310615359.9A CN116560511A (en) 2023-05-29 2023-05-29 Picture display method, device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116560511A (en) 2023-08-08

Family

ID=87494543


Country Status (1)

Country Link
CN (1) CN116560511A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination