CN116664805B - Multimedia display system and method based on augmented reality technology

Multimedia display system and method based on augmented reality technology

Info

Publication number: CN116664805B
Authority: CN (China)
Prior art keywords: user, virtual image, physical space, intelligent terminal, data
Legal status: Active
Application number: CN202310665058.7A
Other languages: Chinese (zh)
Other versions: CN116664805A (en)
Inventors: 张鹏, 林升亮, 郭真, 罗嘉康
Current and original assignee: Shenzhen Laichuangyun Information Technology Co ltd
Application filed by Shenzhen Laichuangyun Information Technology Co ltd
Priority to CN202310665058.7A
Publication of CN116664805A; application granted; publication of CN116664805B


Classifications

    • G06T 19/006: Mixed reality (under G Physics; G06 Computing; G06T Image data processing or generation, in general; G06T 19/00 Manipulating 3D models or images for computer graphics)
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality (under G06F Electric digital data processing; G06F 3/01 Input arrangements for interaction between user and computer)
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The invention provides a multimedia display system and method based on augmented reality technology. The scheme uses augmented reality to create an immersive interactive environment for multiple users in different locations, and allows personalized settings according to the characteristics of the users and of each physical space, greatly improving the user experience.

Description

Multimedia display system and method based on augmented reality technology
Technical Field
The invention relates to the technical field of augmented reality, and in particular to a multimedia display system and method based on augmented reality technology.
Background
Augmented Reality (AR) is a technology that seamlessly fuses virtual information with the real world. It draws on multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, sensing and other technical means to apply computer-generated virtual information such as text, images, three-dimensional models, music and video to the real world after simulation, so that the two kinds of information complement each other and the real world is thereby enhanced. Existing schemes cannot give the user a good immersive experience during multimedia presentation and are not sufficiently intelligent or flexible, so a multimedia presentation scheme based on augmented reality technology is needed to improve the user experience.
Disclosure of Invention
In view of the above problems, the invention provides a multimedia presentation system and a multimedia presentation method based on augmented reality technology.
In view of this, one aspect of the present invention proposes a multimedia presentation system based on augmented reality technology, comprising a first intelligent terminal, a second intelligent terminal, a third intelligent terminal and a first server; wherein,
the first server is configured to:
establishing communication connections among a user A, a user B and a user C through the first intelligent terminal, the second intelligent terminal and the third intelligent terminal respectively;
acquiring first three-dimensional data of a first physical space where the user A is located, and modeling to obtain a first three-dimensional space;
acquiring second three-dimensional data of a second physical space where the user B is located, and modeling to obtain a second three-dimensional space;
acquiring third three-dimensional data of a third physical space where the user C is located, and modeling to obtain a third three-dimensional space;
The first intelligent terminal is configured to: projecting a first user B virtual image of the user B and a first user C virtual image of the user C in the first physical space, and receiving first user A interaction data of the user A, the first user B virtual image and the first user C virtual image;
the second intelligent terminal is configured to: projecting a first user A virtual image of the user A and a second user C virtual image of the user C in the second physical space, and receiving first user B interaction data of the user B, the first user A virtual image and the second user C virtual image;
the third intelligent terminal is configured to: and projecting a second user A virtual image of the user A and a second user B virtual image of the user B in the third physical space, and receiving first user C interaction data of the user C, the second user A virtual image and the second user B virtual image.
Optionally, in the step of projecting the first user B virtual image of the user B and the first user C virtual image of the user C in the first physical space, the first intelligent terminal is specifically configured to:
Obtaining first physical space characteristic data of the first physical space from the first three-dimensional space, and obtaining first user A characteristic data of the user A, first user B characteristic data of the user B and first user C characteristic data of the user C;
generating the first user B virtual image and the first user C virtual image according to the first physical space characteristic data, the first user A characteristic data, the first user B characteristic data and the first user C characteristic data respectively;
and projecting the first user B virtual image and the first user C virtual image in the first physical space.
Optionally, the first intelligent terminal is configured to:
receiving a first user A projection image model and/or a second user A projection image model, a first user A interaction model and/or a second user A interaction model selected by the user A in the second physical space and/or the third physical space;
and establishing connection between the user A and the second physical space and/or the third physical space for the user A to control and interact.
Optionally, the first intelligent terminal is configured to:
And generating a corresponding first virtual object image and a first virtual effect image according to the first user A interaction data among the user A, the user B and the user C, and projecting the first virtual object image and the first virtual effect image in the first physical space.
Optionally, the first server is further configured to:
according to different scenes, respectively carrying out subspace division on the first physical space, the second physical space and the third physical space based on the first three-dimensional space, the second three-dimensional space and the third three-dimensional space, and establishing subspace-scene correspondence between each subspace and each scene;
determining a first relation among the user A, the user B and the user C according to the first user A interaction data, the first user B interaction data and the first user C interaction data;
determining a first interaction scene among the user A, the user B and the user C according to the first relation;
determining corresponding first subspaces, second subspaces and third subspaces in the first physical space, the second physical space and the third physical space according to the first interaction scene and the subspace-scene corresponding relation;
And controlling the first intelligent terminal, the second intelligent terminal and the third intelligent terminal to project the first user B virtual image and the first user C virtual image, the first user A virtual image and the second user C virtual image, and the second user A virtual image and the second user B virtual image in the first subspace, the second subspace and the third subspace respectively.
Another aspect of the present invention provides a multimedia presentation method based on an augmented reality technology, which is applied to a multimedia presentation system based on an augmented reality technology, wherein the multimedia presentation system based on an augmented reality technology includes a first intelligent terminal, a second intelligent terminal, a third intelligent terminal and a first server; the multimedia presentation method based on the augmented reality technology comprises the following steps:
communication connections are established for a user A, a user B and a user C through the first intelligent terminal, the second intelligent terminal and the third intelligent terminal respectively;
the first server acquires first three-dimensional data of a first physical space where the user A is located, and models the first three-dimensional data to obtain a first three-dimensional space;
the first server acquires second three-dimensional data of a second physical space where the user B is located, and models the second three-dimensional data to obtain a second three-dimensional space;
The first server obtains third three-dimensional data of a third physical space where the user C is located, and models the third three-dimensional data to obtain a third three-dimensional space;
the first intelligent terminal projects a first user B virtual image of the user B and a first user C virtual image of the user C in the first physical space, and receives first user A interaction data of the user A, the first user B virtual image and the first user C virtual image;
the second intelligent terminal projects a first user A virtual image of the user A and a second user C virtual image of the user C in the second physical space, and receives first user B interaction data of the user B, the first user A virtual image and the second user C virtual image;
and the third intelligent terminal projects a second user A virtual image of the user A and a second user B virtual image of the user B in the third physical space, and receives first user C interaction data of the user C, the second user A virtual image and the second user B virtual image.
Optionally, the step of projecting, by the first intelligent terminal, the first user B virtual image of the user B and the first user C virtual image of the user C in the first physical space includes:
The first intelligent terminal obtains first physical space feature data of the first physical space from the first three-dimensional space, and obtains first user A feature data of the user A, first user B feature data of the user B and first user C feature data of the user C;
generating the first user B virtual image and the first user C virtual image according to the first physical space characteristic data, the first user A characteristic data, the first user B characteristic data and the first user C characteristic data respectively;
and projecting the first user B virtual image and the first user C virtual image in the first physical space.
Optionally, the method further comprises:
the user A selects a first user A projection image model and/or a second user A projection image model, a first user A interaction model and/or a second user A interaction model which are/is in the second physical space and/or the third physical space on the first intelligent terminal;
and the user A establishes connection with the second physical space and/or the third physical space through the first intelligent terminal so as to control and interact.
Optionally, the method further comprises:
The first intelligent terminal generates a corresponding first virtual object image and a first virtual effect image according to the first user A interaction data among the user A, the user B and the user C, and projects the first virtual object image and the first virtual effect image in the first physical space.
Optionally, the method further comprises:
the first server respectively performs subspace division on the first physical space, the second physical space and the third physical space based on the first three-dimensional space, the second three-dimensional space and the third three-dimensional space according to different scenes, and establishes subspace-scene correspondence between each subspace and each scene;
the first server determines a first relation among the user A, the user B and the user C according to the first user A interaction data, the first user B interaction data and the first user C interaction data;
determining a first interaction scene among the user A, the user B and the user C according to the first relation;
determining corresponding first subspaces, second subspaces and third subspaces in the first physical space, the second physical space and the third physical space according to the first interaction scene and the subspace-scene corresponding relation;
And respectively projecting the first user B virtual image and the first user C virtual image, the first user A virtual image and the second user C virtual image, and the second user A virtual image and the second user B virtual image in the first subspace, the second subspace and the third subspace.
By adopting the technical scheme of the invention, a multimedia presentation system based on augmented reality technology is provided, comprising a first intelligent terminal, a second intelligent terminal, a third intelligent terminal and a first server. The scheme uses augmented reality to create an immersive interactive environment for multiple users in different locations, and allows personalized settings according to the characteristics of the users and of each physical space, greatly improving the user experience.
Drawings
FIG. 1 is a schematic block diagram of a multimedia presentation system based on augmented reality technology provided by one embodiment of the present invention;
fig. 2 is a flowchart of a multimedia presentation method based on an augmented reality technology according to an embodiment of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, in the case of no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced otherwise than as described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
The terms first, second and the like in the description and in the claims of the present application and in the above-described figures, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
A multimedia presentation system and method based on augmented reality technology according to some embodiments of the present invention are described below with reference to fig. 1 to 2.
As shown in fig. 1, one embodiment of the present invention provides a multimedia presentation system based on augmented reality technology, including: the system comprises a first intelligent terminal, a second intelligent terminal, a third intelligent terminal and a first server; wherein,
the first server is configured to:
establishing communication connections among a user A, a user B and a user C through the first intelligent terminal, the second intelligent terminal and the third intelligent terminal respectively;
acquiring first three-dimensional data of a first physical space where the user A is located, and modeling to obtain a first three-dimensional space;
acquiring second three-dimensional data of a second physical space where the user B is located, and modeling to obtain a second three-dimensional space;
acquiring third three-dimensional data of a third physical space where the user C is located, and modeling to obtain a third three-dimensional space;
the first intelligent terminal is configured to: projecting a first user B virtual image of the user B and a first user C virtual image of the user C in the first physical space, and receiving first user A interaction data of the user A, the first user B virtual image and the first user C virtual image;
The second intelligent terminal is configured to: projecting a first user A virtual image of the user A and a second user C virtual image of the user C in the second physical space, and receiving first user B interaction data of the user B, the first user A virtual image and the second user C virtual image;
the third intelligent terminal is configured to: and projecting a second user A virtual image of the user A and a second user B virtual image of the user B in the third physical space, and receiving first user C interaction data of the user C, the second user A virtual image and the second user B virtual image.
It may be appreciated that in the embodiment of the present invention, the first intelligent terminal, the second intelligent terminal, and the third intelligent terminal may be intelligent terminals having functional modules such as a communication module, a control processing module, a projection module, and a sensing module. The user A establishes communication connection through the first intelligent terminal, the user B establishes communication connection through the second intelligent terminal and the user C establishes communication connection through the third intelligent terminal respectively; the first server acquires first three-dimensional data (such as three-dimensional point cloud data) of a first physical space where the user A is located, and models according to the first three-dimensional data to obtain a first three-dimensional space; acquiring second three-dimensional data of a second physical space where the user B is located, and modeling according to the second three-dimensional data to obtain a second three-dimensional space; and acquiring third three-dimensional data of a third physical space where the user C is located, and modeling according to the third three-dimensional data to obtain a third three-dimensional space.
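As a minimal sketch of this modeling step: assuming the three-dimensional data arrives as a point cloud file and assuming the Open3D library is used (the patent names neither a file format nor a library), the first server could reconstruct a mesh model of each physical space as follows.

```python
# Minimal sketch of "modeling to obtain a three-dimensional space" from
# three-dimensional point cloud data. Open3D and the file name are
# assumptions; the patent does not specify a library or format.
import open3d as o3d

def model_physical_space(point_cloud_path: str) -> o3d.geometry.TriangleMesh:
    pcd = o3d.io.read_point_cloud(point_cloud_path)  # e.g. first three-dimensional data
    pcd.estimate_normals()  # Poisson surface reconstruction requires normals
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)  # depth trades reconstruction detail against speed
    return mesh  # the modeled "three-dimensional space"

# One call per user, e.g.:
# first_space = model_physical_space("first_physical_space.ply")
```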
The first intelligent terminal receives user B related data sent by the second intelligent terminal and user C related data sent by the third intelligent terminal, projects a first user B virtual image of the user B and a first user C virtual image of the user C in the first physical space according to the user B related data and the user C related data, and receives first user A interaction data of the user A, the first user B virtual image and the first user C virtual image; in order to enable the virtual image of the B/C in the first physical space to be matched with the characteristics of the user A, further, the first user B virtual image of the user B and the first user C virtual image of the user C are projected in the first physical space according to the user A characteristic data, the user B characteristic data and the user C characteristic data.
The second intelligent terminal receives user A related data sent by the first intelligent terminal and user C related data sent by the third intelligent terminal, projects a first user A virtual image of the user A and a second user C virtual image of the user C in the second physical space according to the user A related data and the user C related data, and receives first user B interaction data of the user B, the first user A virtual image and the second user C virtual image; in order to enable the virtual image of the A/C in the second physical space to be matched with the characteristics of the user B, further, a first user A virtual image of the user A and a second user C virtual image of the user C are projected in the second physical space according to the user A characteristic data, the user B characteristic data and the user C characteristic data.
The third intelligent terminal receives user A related data sent by the first intelligent terminal and user B related data sent by the second intelligent terminal, projects a second user A virtual image of the user A and a second user B virtual image of the user B in the third physical space according to the user A related data and the user B related data, and receives first user C interaction data of the user C, the second user A virtual image and the second user B virtual image; in order to enable the virtual image of the A/B in the third physical space to be matched with the characteristic of the user C, further, a second user A virtual image of the user A and a second user B virtual image of the user B are projected in the third physical space according to the characteristic data of the user A, the characteristic data of the user B and the characteristic data of the user C.
The user A characteristic data / user B characteristic data / user C characteristic data include, but are not limited to, the figure image data, voice data, action data, personality data and basic physiological data of the user A/B/C. The virtual image generation flow for the user A is as follows: first, an A three-dimensional virtual image of the user A is generated from the figure image data of the user A; then, the A three-dimensional virtual image is modified according to the association relations or matching requirements among the characteristic data of the user A and/or the user B and/or the user C; finally, the modified A three-dimensional virtual image is adjusted again according to the characteristic data of the physical space into which it is to be projected (including but not limited to the second physical space and/or the third physical space), so that the final virtual image of the user A matches the physical space in which it is respectively projected (for example, the second physical space and the third physical space). Similarly, the virtual image generation flows of the users B and C follow the same principle as that of the user A.
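The three-step generation flow above can be pictured with a short sketch; every type, field and rule here is a hypothetical placeholder, since the patent does not define concrete data structures.

```python
# Sketch of the user A virtual image flow: (1) build a base avatar from the
# person's image data, (2) modify it against the other users' characteristic
# data, (3) adjust it to the target physical space. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Avatar:
    style: str = "realistic"
    scale: float = 1.0
    props: list = field(default_factory=list)

def build_base_avatar(image_data: dict) -> Avatar:
    return Avatar(style=image_data.get("preferred_style", "realistic"))

def modify_for_users(avatar: Avatar, other_users: list[dict]) -> Avatar:
    # e.g. add a shared prop if the other users' characteristic data match
    if any(u.get("likes_music") for u in other_users):
        avatar.props.append("virtual_instrument")
    return avatar

def adjust_for_space(avatar: Avatar, space_features: dict) -> Avatar:
    # e.g. cap the avatar scale so it fits the target (second/third) space
    avatar.scale = min(avatar.scale, space_features.get("max_avatar_scale", 1.0))
    return avatar
```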
This scheme uses augmented reality to create an immersive interactive environment for multiple users in different locations, and allows personalized settings according to the characteristics of the users and of each physical space, greatly improving the user experience.
It should be noted that the block diagram of the multimedia presentation system based on augmented reality technology shown in fig. 1 is only schematic, and the number of the illustrated modules does not limit the scope of the present invention.
In some possible embodiments of the present invention, in the step of projecting the first user B virtual image of the user B and the first user C virtual image of the user C in the first physical space, the first intelligent terminal is specifically configured to:
obtaining first physical space characteristic data of the first physical space from the first three-dimensional space, and obtaining first user A characteristic data of the user A, first user B characteristic data of the user B and first user C characteristic data of the user C;
generating the first user B virtual image and the first user C virtual image according to the first physical space characteristic data, the first user A characteristic data, the first user B characteristic data and the first user C characteristic data respectively;
And projecting the first user B virtual image and the first user C virtual image in the first physical space.
It can be understood that, in order to make the virtual images of the users fuse fully with the physical space into which they are projected and harmonize with the users in that physical space, in this embodiment the first intelligent terminal obtains first physical space characteristic data of the first physical space (including but not limited to the function, structure, decoration, scene and object placement state of the physical space) from the first three-dimensional space, and obtains first user A characteristic data of the user A, first user B characteristic data of the user B and first user C characteristic data of the user C; it generates the first user B virtual image and the first user C virtual image respectively according to the matching relations and association relations among the first physical space characteristic data, the first user A characteristic data, the first user B characteristic data and the first user C characteristic data; and it projects the first user B virtual image and the first user C virtual image in the first physical space. Similarly, the second intelligent terminal projects the first user A virtual image and the second user C virtual image in the second physical space, and the third intelligent terminal projects the second user A virtual image and the second user B virtual image in the third physical space, on the same principle as the first intelligent terminal, which is not repeated here.
Further, in order to make the physical space more interesting, the first/second/third intelligent terminal may also manage and reconstruct the first/second/third three-dimensional space corresponding to the first/second/third physical space according to the characteristic data of the user A/user B/user C, and then project the added virtual imagery in the first/second/third physical space.
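As an illustrative sketch of this optional reconstruction step (the decoration catalogue and the "interests" field are assumptions, not part of the patent):

```python
# Sketch: augmenting the modeled three-dimensional space with additional
# virtual imagery chosen from a user's characteristic data before projection.
DECOR_CATALOGUE = {
    "music": "virtual_grand_piano",
    "reading": "virtual_bookshelf",
    "sports": "virtual_trophy_wall",
}

def reconstruct_space(space_model: dict, user_features: dict) -> dict:
    extras = [DECOR_CATALOGUE[i]
              for i in user_features.get("interests", [])
              if i in DECOR_CATALOGUE]
    space_model.setdefault("virtual_objects", []).extend(extras)
    return space_model  # reconstructed space, ready for projection
```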
In some possible embodiments of the present invention, the first intelligent terminal is configured to:
receiving a first user A projection image model and/or a second user A projection image model, a first user A interaction model and/or a second user A interaction model selected by the user A in the second physical space and/or the third physical space;
and establishing connection between the user A and the second physical space and/or the third physical space for the user A to control and interact.
It may be appreciated that, in order to better meet user needs and bring a more intelligent user experience, in this embodiment the user A selects, on the first intelligent terminal, a first user A projection image model and/or a second user A projection image model for the second physical space and/or the third physical space (the projection image models are obtained by the first server by training a neural network on big data and cover, without limitation, image style, image size ratio, image gender, age characteristics, projection props, etc.), as well as a first user A interaction model and/or a second user A interaction model (the interaction models are generated by the first server and cover, without limitation, interaction mode, interaction props, interaction objects, interaction time, interaction behaviors, etc.); the user A then establishes, through the first intelligent terminal, a control relation with the second intelligent terminal and/or the third intelligent terminal, and thereby a communication connection with the articles in the second physical space and/or the third physical space (the articles in each physical space are communicatively connected to the first/second/third intelligent terminal and accept control and interaction instructions), so that control and interaction can be performed. Similarly, the user B selects, on the second intelligent terminal, a first user B projection image model and/or a second user B projection image model and a first user B interaction model and/or a second user B interaction model for the first physical space and/or the third physical space, and establishes connection with the first physical space and/or the third physical space through the second intelligent terminal for control and interaction; the user C selects, on the third intelligent terminal, a first user C projection image model and/or a second user C projection image model and a first user C interaction model and/or a second user C interaction model for the first physical space and/or the second physical space, and establishes connection with the articles in the first physical space and/or the second physical space through the third intelligent terminal for control and interaction.
It should be noted that all embodiments of the present invention can be adapted to a variety of scenarios. For example, the users A, B and C may each hold a conference or carry out other activities in three intelligent conference rooms a, b and c; the user A may send a control and interaction instruction to the second or third intelligent terminal through the first intelligent terminal, and the second or third intelligent terminal then sends the instruction to an article (such as a display device or an audio device) in conference room b or c to assist the user A in interacting with the user B or C. Other application scenarios include intelligent chat rooms, intelligent restaurants, intelligent bookstores, intelligent self-study rooms, and the like.
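One way to picture the control path described above is the following sketch. Terminal identifiers, message fields and the in-room article interface are all assumptions; the patent only requires that articles in each space are communicatively connected to their terminal and accept control and interaction instructions.

```python
# Sketch of the instruction relay: user A's terminal sends a control/interaction
# instruction, and the terminal of the target physical space forwards it to a
# networked article (display device, audio device, ...) in that space.
from dataclasses import dataclass

@dataclass
class Instruction:
    source_user: str     # e.g. "A"
    target_space: str    # e.g. "second_physical_space"
    target_article: str  # e.g. "display_1"
    command: str         # e.g. "show_slide"
    payload: dict

class Article:
    """A controllable article in a physical space, e.g. a display device."""
    def __init__(self, name: str):
        self.name = name
    def execute(self, command: str, payload: dict) -> None:
        print(f"{self.name}: {command} {payload}")

class IntelligentTerminal:
    def __init__(self, space: str, articles: dict[str, Article]):
        self.space = space
        self.articles = articles
    def forward(self, instr: Instruction) -> None:
        if instr.target_space == self.space:
            self.articles[instr.target_article].execute(instr.command, instr.payload)

# Usage, matching the conference-room example above: user A drives a display in room b.
room_b = IntelligentTerminal("second_physical_space",
                             {"display_1": Article("display_1")})
room_b.forward(Instruction("A", "second_physical_space", "display_1",
                           "show_slide", {"page": 3}))
```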
In some possible embodiments of the present invention, the first intelligent terminal is configured to:
and generating a corresponding first virtual object image and a first virtual effect image according to the first user A interaction data among the user A, the user B and the user C, and projecting the first virtual object image and the first virtual effect image in the first physical space.
It may be appreciated that, in order to create a scene that fits the interaction better, in this embodiment the first intelligent terminal generates a corresponding first virtual object image and first virtual effect image according to the first user A interaction data among the user A, the user B and the user C (together with the first user B interaction data and the first user C interaction data obtained from the second and third intelligent terminals), projects the first virtual object image and the first virtual effect image in the first physical space in combination with the first physical space characteristic data, and controls a 3D printing device disposed in the first physical space to print a first object corresponding to the first virtual object image, a second object present in the second or third physical space, or a third object matched with the user B or the user C. Similarly, the second and third intelligent terminals may perform the corresponding operations in the second and third physical spaces, so as to create scenes there that better match the interaction behaviors among the user A, the user B and the user C.
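A toy sketch of this step follows; the interaction-to-image rule table and the printer call are hypothetical, introduced only to make the flow concrete.

```python
# Sketch: map interaction data to a virtual object image plus a virtual effect
# image, project both, and optionally trigger the in-room 3D printing device.
EFFECTS_BY_INTERACTION = {
    "toast": ("virtual_champagne_glass", "confetti_effect"),
    "handshake": ("virtual_contract", "spotlight_effect"),
}

def project(image_name: str) -> None:
    print(f"projecting {image_name} in the first physical space")

def send_to_3d_printer(object_name: str) -> None:
    print(f"printing {object_name} on the 3D printing device")  # hypothetical device call

def render_interaction(interaction_type: str, print_object: bool = False) -> None:
    images = EFFECTS_BY_INTERACTION.get(interaction_type)
    if images is None:
        return
    object_image, effect_image = images
    project(object_image)  # the first virtual object image
    project(effect_image)  # the first virtual effect image
    if print_object:
        send_to_3d_printer(object_image)
```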
In some possible embodiments of the invention, the first server is further configured to:
according to different scenes, respectively carrying out subspace division on the first physical space, the second physical space and the third physical space based on the first three-dimensional space, the second three-dimensional space and the third three-dimensional space, and establishing subspace-scene correspondence between each subspace and each scene;
determining a first relation among the user A, the user B and the user C according to the first user A interaction data, the first user B interaction data and the first user C interaction data;
determining a first interaction scene among the user A, the user B and the user C according to the first relation;
determining corresponding first subspaces, second subspaces and third subspaces in the first physical space, the second physical space and the third physical space according to the first interaction scene and the subspace-scene corresponding relation;
and controlling the first intelligent terminal, the second intelligent terminal and the third intelligent terminal to project the first user B virtual image and the first user C virtual image, the first user A virtual image and the second user C virtual image, and the second user A virtual image and the second user B virtual image in the first subspace, the second subspace and the third subspace respectively.
It can be understood that, in this embodiment, in order to make a space construct more fit to the user features, the first server performs subspace division on the first physical space, the second physical space and the third physical space based on the first three-dimensional space, the second three-dimensional space and the third three-dimensional space according to different scenes, and establishes subspace-scene correspondence between each subspace and each scene; the first server determines a first relation among the user A, the user B and the user C according to the first user A interaction data, the first user B interaction data and the first user C interaction data; determining a first interaction scene among the user A, the user B and the user C according to the first relation; determining corresponding first subspaces, second subspaces and third subspaces in the first physical space, the second physical space and the third physical space according to the first interaction scene and the subspace-scene corresponding relation; and respectively projecting the first user B virtual image and the first user C virtual image, the first user A virtual image and the second user C virtual image, and the second user A virtual image and the second user B virtual image in the first subspace, the second subspace and the third subspace.
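The subspace-scene correspondence can be pictured as a small lookup structure; all subspace, scene and relation names below are illustrative assumptions.

```python
# Sketch: each physical space is divided into named subspaces mapped to scenes;
# the first relation among the users selects the interaction scene, which in
# turn selects one subspace per physical space for projection.
SUBSPACE_SCENES = {
    "first_physical_space":  {"desk_corner": "meeting", "sofa_area": "chat"},
    "second_physical_space": {"whiteboard_wall": "meeting", "tea_table": "chat"},
    "third_physical_space":  {"main_table": "meeting", "window_seat": "chat"},
}
RELATION_TO_SCENE = {"colleagues": "meeting", "friends": "chat"}

def choose_subspaces(relation: str) -> dict[str, str]:
    scene = RELATION_TO_SCENE[relation]  # the first interaction scene
    return {space: next(sub for sub, s in scenes.items() if s == scene)
            for space, scenes in SUBSPACE_SCENES.items()}

# choose_subspaces("colleagues") selects the meeting subspace of each space:
# {'first_physical_space': 'desk_corner', 'second_physical_space': 'whiteboard_wall',
#  'third_physical_space': 'main_table'}
```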
Referring to fig. 2, another embodiment of the present invention provides a multimedia presentation method based on an augmented reality technology, which is applied to a multimedia presentation system based on an augmented reality technology, wherein the multimedia presentation system based on an augmented reality technology includes a first intelligent terminal, a second intelligent terminal, a third intelligent terminal and a first server; the multimedia presentation method based on the augmented reality technology comprises the following steps:
communication connections are established for a user A, a user B and a user C through the first intelligent terminal, the second intelligent terminal and the third intelligent terminal respectively;
the first server acquires first three-dimensional data of a first physical space where the user A is located, and models the first three-dimensional data to obtain a first three-dimensional space;
the first server acquires second three-dimensional data of a second physical space where the user B is located, and models the second three-dimensional data to obtain a second three-dimensional space;
the first server obtains third three-dimensional data of a third physical space where the user C is located, and models the third three-dimensional data to obtain a third three-dimensional space;
the first intelligent terminal projects a first user B virtual image of the user B and a first user C virtual image of the user C in the first physical space, and receives first user A interaction data of the user A, the first user B virtual image and the first user C virtual image;
The second intelligent terminal projects a first user A virtual image of the user A and a second user C virtual image of the user C in the second physical space, and receives first user B interaction data of the user B, the first user A virtual image and the second user C virtual image;
and the third intelligent terminal projects a second user A virtual image of the user A and a second user B virtual image of the user B in the third physical space, and receives first user C interaction data of the user C, the second user A virtual image and the second user B virtual image.
It may be appreciated that in the embodiment of the present invention, the first intelligent terminal, the second intelligent terminal, and the third intelligent terminal may be intelligent terminals having functional modules such as a communication module, a control processing module, a projection module, and a sensing module. The user A establishes communication connection through the first intelligent terminal, the user B establishes communication connection through the second intelligent terminal and the user C establishes communication connection through the third intelligent terminal respectively; the first server acquires first three-dimensional data (such as three-dimensional point cloud data) of a first physical space where the user A is located, and models according to the first three-dimensional data to obtain a first three-dimensional space; acquiring second three-dimensional data of a second physical space where the user B is located, and modeling according to the second three-dimensional data to obtain a second three-dimensional space; and acquiring third three-dimensional data of a third physical space where the user C is located, and modeling according to the third three-dimensional data to obtain a third three-dimensional space.
The first intelligent terminal receives user B related data sent by the second intelligent terminal and user C related data sent by the third intelligent terminal, projects a first user B virtual image of the user B and a first user C virtual image of the user C in the first physical space according to the user B related data and the user C related data, and receives first user A interaction data of the user A, the first user B virtual image and the first user C virtual image; in order to enable the virtual image of the B/C in the first physical space to be matched with the characteristics of the user A, further, the first user B virtual image of the user B and the first user C virtual image of the user C are projected in the first physical space according to the user A characteristic data, the user B characteristic data and the user C characteristic data.
The second intelligent terminal receives user A related data sent by the first intelligent terminal and user C related data sent by the third intelligent terminal, projects a first user A virtual image of the user A and a second user C virtual image of the user C in the second physical space according to the user A related data and the user C related data, and receives first user B interaction data of the user B, the first user A virtual image and the second user C virtual image; in order to enable the virtual image of the A/C in the second physical space to be matched with the characteristics of the user B, further, a first user A virtual image of the user A and a second user C virtual image of the user C are projected in the second physical space according to the user A characteristic data, the user B characteristic data and the user C characteristic data.
The third intelligent terminal receives user A related data sent by the first intelligent terminal and user B related data sent by the second intelligent terminal, projects a second user A virtual image of the user A and a second user B virtual image of the user B in the third physical space according to the user A related data and the user B related data, and receives first user C interaction data of the user C, the second user A virtual image and the second user B virtual image; in order to enable the virtual image of the A/B in the third physical space to be matched with the characteristic of the user C, further, a second user A virtual image of the user A and a second user B virtual image of the user B are projected in the third physical space according to the characteristic data of the user A, the characteristic data of the user B and the characteristic data of the user C.
The user A characteristic data / user B characteristic data / user C characteristic data include, but are not limited to, the figure image data, voice data, action data, personality data and basic physiological data of the user A/B/C. The virtual image generation flow for the user A is as follows: first, an A three-dimensional virtual image of the user A is generated from the figure image data of the user A; then, the A three-dimensional virtual image is modified according to the association relations or matching requirements among the characteristic data of the user A and/or the user B and/or the user C; finally, the modified A three-dimensional virtual image is adjusted again according to the characteristic data of the physical space into which it is to be projected (including but not limited to the second physical space and/or the third physical space), so that the final virtual image of the user A matches the physical space in which it is respectively projected (for example, the second physical space and the third physical space). Similarly, the virtual image generation flows of the users B and C follow the same principle as that of the user A.
This scheme uses augmented reality to create an immersive interactive environment for multiple users in different locations, and allows personalized settings according to the characteristics of the users and of each physical space, greatly improving the user experience.
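Tying the method steps together, a compact orchestration sketch might look as follows; the server and terminal objects are hypothetical stand-ins for the first server and the three intelligent terminals.

```python
# Sketch of the method end to end: the first server models each space, then
# every terminal projects the other two users' virtual images and collects the
# local user's interaction data. All objects and methods are hypothetical.
def run_presentation(server, terminals: dict) -> dict:
    # steps 1-4: communication connections and three-dimensional modeling
    for user, terminal in terminals.items():
        server.connect(user, terminal)
        server.model_space(user)  # three-dimensional data -> three-dimensional space
    # steps 5-7: projection and interaction-data collection on each terminal
    interaction_data = {}
    for user, terminal in terminals.items():
        others = [u for u in terminals if u != user]
        terminal.project_avatars(others)  # project the other users' virtual images
        interaction_data[user] = terminal.collect_interactions()
    return interaction_data  # first user A/B/C interaction data, keyed by user
```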
In some possible embodiments of the present invention, the step of projecting, by the first smart terminal, a first user B virtual image of the user B and a first user C virtual image of the user C in the first physical space includes:
the first intelligent terminal obtains first physical space feature data of the first physical space from the first three-dimensional space, and obtains first user A feature data of the user A, first user B feature data of the user B and first user C feature data of the user C;
generating the first user B virtual image and the first user C virtual image according to the first physical space characteristic data, the first user A characteristic data, the first user B characteristic data and the first user C characteristic data respectively;
and projecting the first user B virtual image and the first user C virtual image in the first physical space.
It can be understood that, in order to make the virtual images of the users fuse fully with the physical space into which they are projected and harmonize with the users in that physical space, in this embodiment the first intelligent terminal obtains first physical space characteristic data of the first physical space (including but not limited to the function, structure, decoration, scene and object placement state of the physical space) from the first three-dimensional space, and obtains first user A characteristic data of the user A, first user B characteristic data of the user B and first user C characteristic data of the user C; it generates the first user B virtual image and the first user C virtual image respectively according to the matching relations and association relations among the first physical space characteristic data, the first user A characteristic data, the first user B characteristic data and the first user C characteristic data; and it projects the first user B virtual image and the first user C virtual image in the first physical space. Similarly, the second intelligent terminal projects the first user A virtual image and the second user C virtual image in the second physical space, and the third intelligent terminal projects the second user A virtual image and the second user B virtual image in the third physical space, on the same principle as the first intelligent terminal, which is not repeated here.
Further, in order to make the physical space more interesting, the first/second/third intelligent terminal may also manage and reconstruct the first/second/third three-dimensional space corresponding to the first/second/third physical space according to the characteristic data of the user A/user B/user C, and then project the added virtual imagery in the first/second/third physical space.
In some possible embodiments of the present invention, the method further comprises:
the user A selects a first user A projection image model and/or a second user A projection image model, a first user A interaction model and/or a second user A interaction model which are/is in the second physical space and/or the third physical space on the first intelligent terminal;
and the user A establishes connection with the second physical space and/or the third physical space through the first intelligent terminal so as to control and interact.
It may be appreciated that, in order to better meet user needs and bring a more intelligent user experience, in this embodiment the user A selects, on the first intelligent terminal, a first user A projection image model and/or a second user A projection image model for the second physical space and/or the third physical space (the projection image models are obtained by the first server by training a neural network on big data and cover, without limitation, image style, image size ratio, image gender, age characteristics, projection props, etc.), as well as a first user A interaction model and/or a second user A interaction model (the interaction models are generated by the first server and cover, without limitation, interaction mode, interaction props, interaction objects, interaction time, interaction behaviors, etc.); the user A then establishes, through the first intelligent terminal, a control relation with the second intelligent terminal and/or the third intelligent terminal, and thereby a communication connection with the articles in the second physical space and/or the third physical space (the articles in each physical space are communicatively connected to the first/second/third intelligent terminal and accept control and interaction instructions), so that control and interaction can be performed. Similarly, the user B selects, on the second intelligent terminal, a first user B projection image model and/or a second user B projection image model and a first user B interaction model and/or a second user B interaction model for the first physical space and/or the third physical space, and establishes connection with the first physical space and/or the third physical space through the second intelligent terminal for control and interaction; the user C selects, on the third intelligent terminal, a first user C projection image model and/or a second user C projection image model and a first user C interaction model and/or a second user C interaction model for the first physical space and/or the second physical space, and establishes connection with the articles in the first physical space and/or the second physical space through the third intelligent terminal for control and interaction.
It should be noted that all embodiments of the present invention can be adapted to a variety of scenarios. For example, the users A, B and C may each hold a conference or carry out other activities in three intelligent conference rooms a, b and c; the user A may send a control and interaction instruction to the second or third intelligent terminal through the first intelligent terminal, and the second or third intelligent terminal then sends the instruction to an article (such as a display device or an audio device) in conference room b or c to assist the user A in interacting with the user B or C. Other application scenarios include intelligent chat rooms, intelligent restaurants, intelligent bookstores, intelligent self-study rooms, and the like.
In some possible embodiments of the present invention, the method further comprises:
the first intelligent terminal generates a corresponding first virtual object image and a first virtual effect image according to the first user A interaction data among the user A, the user B and the user C, and projects the first virtual object image and the first virtual effect image in the first physical space.
It may be appreciated that, in order to create a scene that fits the interaction better, in this embodiment the first intelligent terminal generates a corresponding first virtual object image and first virtual effect image according to the first user A interaction data among the user A, the user B and the user C (together with the first user B interaction data and the first user C interaction data obtained from the second and third intelligent terminals), projects the first virtual object image and the first virtual effect image in the first physical space in combination with the first physical space characteristic data, and controls a 3D printing device disposed in the first physical space to print a first object corresponding to the first virtual object image, a second object present in the second or third physical space, or a third object matched with the user B or the user C. Similarly, the second and third intelligent terminals may perform the corresponding operations in the second and third physical spaces, so as to create scenes there that better match the interaction behaviors among the user A, the user B and the user C.
In some possible embodiments of the present invention, the method further comprises:
the first server respectively performs subspace division on the first physical space, the second physical space and the third physical space based on the first three-dimensional space, the second three-dimensional space and the third three-dimensional space according to different scenes, and establishes subspace-scene correspondence between each subspace and each scene;
the first server determines a first relation among the user A, the user B and the user C according to the first user A interaction data, the first user B interaction data and the first user C interaction data;
determining a first interaction scene among the user A, the user B and the user C according to the first relation;
determining corresponding first subspaces, second subspaces and third subspaces in the first physical space, the second physical space and the third physical space according to the first interaction scene and the subspace-scene corresponding relation;
and respectively projecting the first user B virtual image and the first user C virtual image, the first user A virtual image and the second user C virtual image, and the second user A virtual image and the second user B virtual image in the first subspace, the second subspace and the third subspace.
It can be understood that, in this embodiment, in order to make a space construct more fit to the user features, the first server performs subspace division on the first physical space, the second physical space and the third physical space based on the first three-dimensional space, the second three-dimensional space and the third three-dimensional space according to different scenes, and establishes subspace-scene correspondence between each subspace and each scene; the first server determines a first relation among the user A, the user B and the user C according to the first user A interaction data, the first user B interaction data and the first user C interaction data; determining a first interaction scene among the user A, the user B and the user C according to the first relation; determining corresponding first subspaces, second subspaces and third subspaces in the first physical space, the second physical space and the third physical space according to the first interaction scene and the subspace-scene corresponding relation; and respectively projecting the first user B virtual image and the first user C virtual image, the first user A virtual image and the second user C virtual image, and the second user A virtual image and the second user B virtual image in the first subspace, the second subspace and the third subspace.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of units is only a division of logical functions, and there may be other ways of dividing them in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices or units, and may be electrical or take other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units described above are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk or an optical disk.
Those of ordinary skill in the art will appreciate that all or some of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware. The program may be stored in a computer-readable memory, which may include a flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core ideas of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and the scope of application according to the ideas of the present application. In view of the above, the contents of this specification should not be construed as limiting the present application.
Although the present invention is disclosed above, the present invention is not limited thereto. Those skilled in the art may readily make variations and modifications, including combinations of the different functions and implementation steps, as well as software and hardware implementations, without departing from the spirit and scope of the invention.

Claims (8)

1. A multimedia presentation system based on augmented reality technology, comprising: a first intelligent terminal, a second intelligent terminal, a third intelligent terminal and a first server; wherein,
the first server is configured to:
establishing communication connections among a user A, a user B and a user C through the first intelligent terminal, the second intelligent terminal and the third intelligent terminal, respectively;
acquiring first three-dimensional data of a first physical space where the user A is located, and modeling to obtain a first three-dimensional space;
acquiring second three-dimensional data of a second physical space where the user B is located, and modeling to obtain a second three-dimensional space;
acquiring third three-dimensional data of a third physical space where the user C is located, and modeling to obtain a third three-dimensional space;
the first intelligent terminal is configured to: projecting a first user B virtual image of the user B and a first user C virtual image of the user C in the first physical space, and receiving first user A interaction data of the user A with the first user B virtual image and the first user C virtual image;
the second intelligent terminal is configured to: projecting a first user A virtual image of the user A and a second user C virtual image of the user C in the second physical space, and receiving first user B interaction data of the user B with the first user A virtual image and the second user C virtual image;
the third intelligent terminal is configured to: projecting a second user A virtual image of the user A and a second user B virtual image of the user B in the third physical space, and receiving first user C interaction data of the user C with the second user A virtual image and the second user B virtual image;
the first server is further configured to:
according to different scenes, respectively carrying out subspace division on the first physical space, the second physical space and the third physical space based on the first three-dimensional space, the second three-dimensional space and the third three-dimensional space, and establishing subspace-scene correspondence between each subspace and each scene;
determining a first relation among the user A, the user B and the user C according to the first user A interaction data, the first user B interaction data and the first user C interaction data;
determining a first interaction scene among the user A, the user B and the user C according to the first relation;
determining a corresponding first subspace, second subspace and third subspace in the first physical space, the second physical space and the third physical space according to the first interaction scene and the subspace-scene correspondence;
and controlling the first intelligent terminal, the second intelligent terminal and the third intelligent terminal to project the first user B virtual image and the first user C virtual image in the first subspace, the first user A virtual image and the second user C virtual image in the second subspace, and the second user A virtual image and the second user B virtual image in the third subspace, respectively.
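Read as a data flow, claim 1 amounts to: the server models each user's physical space, then each terminal projects the other two users' virtual images locally and reports interaction data back. The following self-contained Python sketch mirrors that topology; the class names and method signatures are assumptions made purely for illustration, not the patent's implementation.

# Hedged sketch of the claim-1 topology; all names are invented here.
class Terminal:
    def __init__(self, user: str):
        self.user = user

    def scan_physical_space(self) -> dict:
        # Stand-in for capturing three-dimensional data of the local space.
        return {"owner": self.user, "mesh": f"{self.user}_room_mesh"}

    def project_and_interact(self, peers: list, space_model: dict) -> dict:
        # Stand-in for projecting peer virtual images and collecting the
        # local user's interaction data with those images.
        print(f"[{self.user}] projecting {peers} into {space_model['mesh']}")
        return {"user": self.user, "targets": peers, "kind": "chat"}

class FirstServer:
    def __init__(self, terminals: dict):
        self.terminals = terminals          # user id -> Terminal
        self.space_models: dict = {}        # user id -> 3D space model

    def run_session(self) -> dict:
        # Acquire and model each user's physical space, ...
        for user, term in self.terminals.items():
            self.space_models[user] = term.scan_physical_space()
        # ... then have each terminal project the other two users' images.
        return {
            user: term.project_and_interact(
                [u for u in self.terminals if u != user], self.space_models[user])
            for user, term in self.terminals.items()
        }

server = FirstServer({u: Terminal(u) for u in ("A", "B", "C")})
interaction_data = server.run_session()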
2. The augmented reality-based multimedia presentation system of claim 1, wherein in the step of projecting the first user B virtual image of the user B and the first user C virtual image of the user C within the first physical space, the first intelligent terminal is specifically configured to:
obtaining first physical space feature data of the first physical space from the first three-dimensional space, and obtaining first user A feature data of the user A, first user B feature data of the user B and first user C feature data of the user C;
generating the first user B virtual image and the first user C virtual image according to the first physical space feature data, the first user A feature data, the first user B feature data and the first user C feature data, respectively;
and projecting the first user B virtual image and the first user C virtual image in the first physical space.
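One plausible reading of claim 2 is that the avatar generator conditions each peer's virtual image on both the local room's features and the users' features. Below is a small Python sketch under that assumption; all field names (ceiling_height_m, lux, height_m) are invented for illustration.

# Assumed feature fields; the patent does not enumerate them.
def generate_virtual_image(space_features: dict, viewer_features: dict,
                           peer_features: dict) -> dict:
    # Clamp the avatar so it fits under the local ceiling with some headroom.
    height = min(peer_features["height_m"],
                 space_features["ceiling_height_m"] - 0.3)
    return {
        "user": peer_features["user"],
        "avatar_height_m": round(height, 2),
        # Pick a render style that matches the room's measured lighting.
        "style": "dim" if space_features["lux"] < 100 else "bright",
        # Keep apparent scale consistent with the viewing user's own height.
        "scale_ref_m": viewer_features["height_m"],
    }

# User A's terminal generating user B's virtual image for A's room:
image_b = generate_virtual_image(
    {"ceiling_height_m": 2.6, "lux": 80},
    {"user": "A", "height_m": 1.75},
    {"user": "B", "height_m": 1.82},
)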
3. The augmented reality-based multimedia presentation system of claim 2, wherein the first intelligent terminal is configured to:
receiving a first user A projection image model and/or a second user A projection image model, and a first user A interaction model and/or a second user A interaction model, selected by the user A for the second physical space and/or the third physical space;
and establishing a connection between the user A and the second physical space and/or the third physical space for the user A to control and interact.
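Claim 3's select-then-connect step could be held in a small session object: the user first picks a projection image model and an interaction model per remote space, and only then is a control and interaction link opened. A Python sketch under those assumptions (names and semantics are illustrative only):

# Illustrative session state; not the patent's API.
class AvatarSession:
    def __init__(self):
        self.choices: dict = {}   # remote space -> chosen models
        self.links: set = set()   # spaces with an open control link

    def select_models(self, space: str, image_model: str, interaction_model: str):
        self.choices[space] = {"image": image_model,
                               "interaction": interaction_model}

    def connect(self, space: str):
        # Refuse to open a link before models are chosen, matching the
        # select-then-connect order the claim describes.
        if space not in self.choices:
            raise ValueError(f"no models selected for {space}")
        self.links.add(space)

session = AvatarSession()
session.select_models("space_2", "user_A_casual", "gesture_v1")
session.connect("space_2")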
4. The augmented reality-based multimedia presentation system of claim 3, wherein the first intelligent terminal is configured to:
generating a corresponding first virtual object image and a first virtual effect image according to the first user A interaction data among the user A, the user B and the user C, and projecting the first virtual object image and the first virtual effect image in the first physical space.
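Claim 4 pairs each interaction with a virtual object image and a virtual effect image. The following minimal Python sketch shows that lookup; both mapping tables and the interaction kinds are invented purely to make the flow concrete.

# Hypothetical interaction-to-asset tables; not taken from the patent.
OBJECT_BY_INTERACTION = {"toast": "wine_glass", "whiteboard": "marker"}
EFFECT_BY_INTERACTION = {"toast": "sparkle", "whiteboard": "highlight"}

def render_interaction(interaction: dict) -> tuple:
    kind = interaction["kind"]
    # Fall back to generic assets for interaction kinds with no mapping.
    obj = OBJECT_BY_INTERACTION.get(kind, "generic_prop")
    effect = EFFECT_BY_INTERACTION.get(kind, "glow")
    # A real terminal would hand these to the AR renderer for projection
    # in the first physical space; here we just return asset identifiers.
    return obj, effect

obj_img, fx_img = render_interaction({"user": "A", "kind": "toast"})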
5. A multimedia presentation method based on augmented reality technology, applied to a multimedia presentation system based on augmented reality technology, wherein the multimedia presentation system comprises a first intelligent terminal, a second intelligent terminal, a third intelligent terminal and a first server; the multimedia presentation method comprises the following steps:
communication connections among a user A, a user B and a user C are established through the first intelligent terminal, the second intelligent terminal and the third intelligent terminal, respectively;
the first server acquires first three-dimensional data of a first physical space where the user A is located, and models the first three-dimensional data to obtain a first three-dimensional space;
the first server acquires second three-dimensional data of a second physical space where the user B is located, and models the second three-dimensional data to obtain a second three-dimensional space;
the first server obtains third three-dimensional data of a third physical space where the user C is located, and models the third three-dimensional data to obtain a third three-dimensional space;
the first intelligent terminal projects a first user B virtual image of the user B and a first user C virtual image of the user C in the first physical space, and receives first user A interaction data of the user A with the first user B virtual image and the first user C virtual image;
the second intelligent terminal projects a first user A virtual image of the user A and a second user C virtual image of the user C in the second physical space, and receives first user B interaction data of the user B with the first user A virtual image and the second user C virtual image;
the third intelligent terminal projects a second user A virtual image of the user A and a second user B virtual image of the user B in the third physical space, and receives first user C interaction data of the user C with the second user A virtual image and the second user B virtual image;
the first server respectively performs subspace division on the first physical space, the second physical space and the third physical space based on the first three-dimensional space, the second three-dimensional space and the third three-dimensional space according to different scenes, and establishes subspace-scene correspondence between each subspace and each scene;
the first server determines a first relation among the user A, the user B and the user C according to the first user A interaction data, the first user B interaction data and the first user C interaction data;
determines a first interaction scene among the user A, the user B and the user C according to the first relation;
determines a corresponding first subspace, second subspace and third subspace in the first physical space, the second physical space and the third physical space according to the first interaction scene and the subspace-scene correspondence;
and projects the first user B virtual image and the first user C virtual image in the first subspace, the first user A virtual image and the second user C virtual image in the second subspace, and the second user A virtual image and the second user B virtual image in the third subspace, respectively.
6. The method according to claim 5, wherein the step of projecting the first user B virtual image of the user B and the first user C virtual image of the user C in the first physical space by the first intelligent terminal comprises:
the first intelligent terminal obtains first physical space feature data of the first physical space from the first three-dimensional space, and obtains first user A feature data of the user A, first user B feature data of the user B and first user C feature data of the user C;
generates the first user B virtual image and the first user C virtual image according to the first physical space feature data, the first user A feature data, the first user B feature data and the first user C feature data, respectively;
and projects the first user B virtual image and the first user C virtual image in the first physical space.
7. The augmented reality-based multimedia presentation method of claim 6, further comprising:
the user A selects, on the first intelligent terminal, a first user A projection image model and/or a second user A projection image model, and a first user A interaction model and/or a second user A interaction model, for the second physical space and/or the third physical space;
and the user A establishes a connection with the second physical space and/or the third physical space through the first intelligent terminal so as to control and interact.
8. The augmented reality-based multimedia presentation method of claim 7, further comprising:
the first intelligent terminal generates a corresponding first virtual object image and a first virtual effect image according to the first user A interaction data among the user A, the user B and the user C, and projects the first virtual object image and the first virtual effect image in the first physical space.
CN202310665058.7A 2023-06-06 2023-06-06 Multimedia display system and method based on augmented reality technology Active CN116664805B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310665058.7A CN116664805B (en) 2023-06-06 2023-06-06 Multimedia display system and method based on augmented reality technology


Publications (2)

Publication Number Publication Date
CN116664805A (en) 2023-08-29
CN116664805B (en) 2024-02-06

Family

ID=87711508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310665058.7A Active CN116664805B (en) 2023-06-06 2023-06-06 Multimedia display system and method based on augmented reality technology

Country Status (1)

Country Link
CN (1) CN116664805B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103777915A (en) * 2014-01-30 2014-05-07 中国科学院计算技术研究所 Immersed type interaction system
KR20170142086A (en) * 2016-06-16 2017-12-27 주식회사 에이치투앤컴퍼니 Interaction-type double reality system by combining VR content and AR content and method thereof
CN108234276A (en) * 2016-12-15 2018-06-29 腾讯科技(深圳)有限公司 Interactive method, terminal and system between a kind of virtual image
CN108833931A (en) * 2018-06-28 2018-11-16 南京曼殊室信息科技有限公司 A kind of strange land 3D hologram interaction live streaming platform
US11200742B1 (en) * 2020-02-28 2021-12-14 United Services Automobile Association (Usaa) Augmented reality-based interactive customer support
KR20220125536A (en) * 2021-03-05 2022-09-14 주식회사 맘모식스 System and operating method for providing mutual interaction service between virtual reality users and augmented reality users
CN115581350A (en) * 2022-09-13 2023-01-10 七造(重庆)科技有限公司 Immersive interactive system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10699461B2 (en) * 2016-12-20 2020-06-30 Sony Interactive Entertainment LLC Telepresence of multiple users in interactive virtual space
EP3495921A1 (en) * 2017-12-11 2019-06-12 Nokia Technologies Oy An apparatus and associated methods for presentation of first and second virtual-or-augmented reality content
US11580734B1 (en) * 2021-07-26 2023-02-14 At&T Intellectual Property I, L.P. Distinguishing real from virtual objects in immersive reality




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant