CN116664806A - Method, device and medium for presenting augmented reality data
- Publication number: CN116664806A
- Application number: CN202310674233.9A
- Authority: CN (China)
- Prior art keywords: augmented reality, data, information, user, anchor point
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general
- G06T19/006: Mixed reality (G06T19/00: Manipulating 3D models or images for computer graphics)
- G06T7/70: Determining position or orientation of objects or cameras (G06T7/00: Image analysis)
- G06T2207/10016: Video; image sequence (G06T2207/00: Indexing scheme for image analysis or image enhancement; G06T2207/10: Image acquisition modality)
Abstract
It is an object of the present application to provide a method, device and medium for presenting augmented reality data, the method comprising: in a multi-user video scene of a first user and a second user, performing a positioning operation on a first video stream corresponding to the first user based on a target anchor point, determining first real-time pose information of a first user device relative to the target anchor point in each current video frame of the first video stream, and providing the first real-time pose information to a second user device; and providing first augmented reality data to the second user device, so that at least one piece of augmented reality presentation information is superimposed and presented on the first video stream at the second user device according to the first augmented reality data and the first real-time pose information.
Description
Technical Field
The present application relates to the field of communications, and in particular to a technique for presenting augmented reality data.
Background
In recent years, science and technology have developed rapidly, and AR (Augmented Reality) technology has matured and gradually entered people's field of vision. In the prior art, in a video call or video conference scene involving multiple users, user A can only send the live-action environment where user A is currently located to user B as a real-time video stream, and user B cannot see the AR content that user A has added at a designated position in that live-action environment.
Disclosure of Invention
It is an object of the present application to provide a method, apparatus and medium for presenting augmented reality data.
According to one aspect of the present application, there is provided a method for presenting augmented reality data, the method comprising:
in a multi-user video scene of a first user and a second user, performing a positioning operation on a first video stream corresponding to the first user based on a target anchor point, determining first real-time pose information of a first user device relative to the target anchor point in each current video frame of the first video stream, and providing the first real-time pose information to a second user device;
providing first augmented reality data to the second user device, so that at least one piece of augmented reality presentation information is superimposed and presented on the first video stream at the second user device according to the first augmented reality data and the first real-time pose information, wherein the first augmented reality data comprises scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to the at least one piece of augmented reality presentation information located in the at least one scene.
According to another aspect of the present application, there is provided a method for presenting augmented reality data, the method comprising:
acquiring first real-time pose information, provided by a first user device, of the first user device relative to a target anchor point in each current video frame of a first video stream corresponding to a first user, in a multi-user video scene of the first user and a second user;
in response to obtaining first augmented reality data provided by the first user device, determining presentation position information of at least one piece of augmented reality presentation information on a current video frame of the first video stream according to the first augmented reality data and the first real-time pose information, wherein the first augmented reality data comprises scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to the at least one piece of augmented reality presentation information located in the at least one scene;
and superimposing and presenting the at least one piece of augmented reality presentation information on the current video frame of the first video stream according to the presentation position information.
According to another aspect of the present application, there is provided a method for presenting augmented reality data, the method comprising:
acquiring first augmented reality data provided by a first user device in a multi-person video scene of a first user and a second user, wherein a third user is not in the multi-person video scene, and the first augmented reality data comprises anchor point data information corresponding to a target anchor point, scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to at least one piece of augmented reality presentation information located in the at least one scene;
and identifying the target anchor point in the camera live-action picture according to the anchor point data information, and superimposing and presenting the at least one piece of augmented reality presentation information on the camera live-action picture based on real-time pose information of the third user device relative to the target anchor point and the first augmented reality data.
According to another aspect of the present application, there is provided a method for presenting augmented reality data, the method comprising:
in a multi-user video scene of a first user and a second user, performing a positioning operation on a first video stream corresponding to the first user based on a target anchor point, determining first real-time pose information of a first user device relative to the target anchor point in each current video frame of the first video stream, and providing the first real-time pose information to a second user device, so that the second user adds at least one piece of augmented reality presentation information on the first video stream and the second user device superimposes and presents the at least one piece of augmented reality presentation information on the first video stream according to the first real-time pose information;
acquiring first augmented reality data provided by the second user device, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information;
and superimposing and presenting the at least one piece of augmented reality presentation information on the first video stream according to the first augmented reality data and the first real-time pose information.
According to another aspect of the present application, there is provided a method for presenting augmented reality data, the method comprising:
acquiring first real-time pose information, provided by a first user device, of the first user device relative to a target anchor point in each current video frame of a first video stream corresponding to a first user, in a multi-user video scene of the first user and a second user;
obtaining first augmented reality data corresponding to at least one piece of augmented reality presentation information added by the second user on the first video stream, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information, and the object data information comprises first position information of the at least one piece of augmented reality presentation information relative to the target anchor point;
determining presentation position information of the at least one piece of augmented reality presentation information on a current video frame of the first video stream according to the first augmented reality data and the first real-time pose information;
and superimposing and presenting the at least one piece of augmented reality presentation information on the current video frame of the first video stream according to the presentation position information.
According to one aspect of the present application there is provided a computer device for presenting augmented reality data, the device comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
in a multi-user video scene of a first user and a second user, perform a positioning operation on a first video stream corresponding to the first user based on a target anchor point, determine first real-time pose information of a first user device relative to the target anchor point in each current video frame of the first video stream, and provide the first real-time pose information to a second user device;
provide first augmented reality data to the second user device, so that at least one piece of augmented reality presentation information is superimposed and presented on the first video stream at the second user device according to the first augmented reality data and the first real-time pose information, wherein the first augmented reality data comprises scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to the at least one piece of augmented reality presentation information located in the at least one scene.
According to another aspect of the present application, there is provided a computer device for presenting augmented reality data, the device comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
acquire first real-time pose information, provided by a first user device, of the first user device relative to a target anchor point in each current video frame of a first video stream corresponding to a first user, in a multi-user video scene of the first user and a second user;
in response to obtaining first augmented reality data provided by the first user device, determine presentation position information of at least one piece of augmented reality presentation information on a current video frame of the first video stream according to the first augmented reality data and the first real-time pose information, wherein the first augmented reality data comprises scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to the at least one piece of augmented reality presentation information located in the at least one scene;
and superimpose and present the at least one piece of augmented reality presentation information on the current video frame of the first video stream according to the presentation position information.
According to another aspect of the present application, there is provided a computer device for presenting augmented reality data, the device comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
acquire first augmented reality data provided by a first user device in a multi-person video scene of a first user and a second user, wherein a third user is not in the multi-person video scene, and the first augmented reality data comprises anchor point data information corresponding to a target anchor point, scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to at least one piece of augmented reality presentation information located in the at least one scene;
and identify the target anchor point in the camera live-action picture according to the anchor point data information, and superimpose and present the at least one piece of augmented reality presentation information on the camera live-action picture based on real-time pose information of the third user device relative to the target anchor point and the first augmented reality data.
According to another aspect of the present application, there is provided a computer device for presenting augmented reality data, the device comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
in a multi-user video scene of a first user and a second user, perform a positioning operation on a first video stream corresponding to the first user based on a target anchor point, determine first real-time pose information of a first user device relative to the target anchor point in each current video frame of the first video stream, and provide the first real-time pose information to a second user device, so that the second user adds at least one piece of augmented reality presentation information on the first video stream and the second user device superimposes and presents the at least one piece of augmented reality presentation information on the first video stream according to the first real-time pose information;
acquire first augmented reality data provided by the second user device, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information;
and superimpose and present the at least one piece of augmented reality presentation information on the first video stream according to the first augmented reality data and the first real-time pose information.
According to another aspect of the present application, there is provided a computer device for presenting augmented reality data, the device comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
acquire first real-time pose information, provided by a first user device, of the first user device relative to a target anchor point in each current video frame of a first video stream corresponding to a first user, in a multi-user video scene of the first user and a second user;
obtaining first augmented reality data corresponding to at least one piece of augmented reality presentation information added by the second user on the first video stream, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information, and the object data information comprises first position information of the at least one piece of augmented reality presentation information relative to the target anchor point;
determining presentation position information of the at least one piece of augmented reality presentation information on a current video frame of the first video stream according to the first augmented reality data and the first real-time pose information;
and superimpose and present the at least one piece of augmented reality presentation information on the current video frame of the first video stream according to the presentation position information.
According to one aspect of the application, there is provided a computer readable medium storing instructions that, when executed, cause a system to:
in a multi-user video scene of a first user and a second user, perform a positioning operation on a first video stream corresponding to the first user based on a target anchor point, determine first real-time pose information of a first user device relative to the target anchor point in each current video frame of the first video stream, and provide the first real-time pose information to a second user device;
provide first augmented reality data to the second user device, so that at least one piece of augmented reality presentation information is superimposed and presented on the first video stream at the second user device according to the first augmented reality data and the first real-time pose information, wherein the first augmented reality data comprises scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to the at least one piece of augmented reality presentation information located in the at least one scene.
According to another aspect of the application, there is provided a computer readable medium storing instructions that, when executed, cause a system to:
acquire first real-time pose information, provided by a first user device, of the first user device relative to a target anchor point in each current video frame of a first video stream corresponding to a first user, in a multi-user video scene of the first user and a second user;
in response to obtaining first augmented reality data provided by the first user device, determine presentation position information of at least one piece of augmented reality presentation information on a current video frame of the first video stream according to the first augmented reality data and the first real-time pose information, wherein the first augmented reality data comprises scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to the at least one piece of augmented reality presentation information located in the at least one scene;
and superimpose and present the at least one piece of augmented reality presentation information on the current video frame of the first video stream according to the presentation position information.
According to another aspect of the application, there is provided a computer readable medium storing instructions that, when executed, cause a system to:
acquire first augmented reality data provided by a first user device in a multi-person video scene of a first user and a second user, wherein a third user is not in the multi-person video scene, and the first augmented reality data comprises anchor point data information corresponding to a target anchor point, scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to at least one piece of augmented reality presentation information located in the at least one scene;
and identify the target anchor point in the camera live-action picture according to the anchor point data information, and superimpose and present the at least one piece of augmented reality presentation information on the camera live-action picture based on real-time pose information of the third user device relative to the target anchor point and the first augmented reality data.
According to another aspect of the application, there is provided a computer readable medium storing instructions that, when executed, cause a system to:
in a multi-user video scene of a first user and a second user, perform a positioning operation on a first video stream corresponding to the first user based on a target anchor point, determine first real-time pose information of a first user device relative to the target anchor point in each current video frame of the first video stream, and provide the first real-time pose information to a second user device, so that the second user adds at least one piece of augmented reality presentation information on the first video stream and the second user device superimposes and presents the at least one piece of augmented reality presentation information on the first video stream according to the first real-time pose information;
acquire first augmented reality data provided by the second user device, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information;
and superimpose and present the at least one piece of augmented reality presentation information on the first video stream according to the first augmented reality data and the first real-time pose information.
According to another aspect of the application, there is provided a computer readable medium storing instructions that, when executed, cause a system to:
acquire first real-time pose information, provided by a first user device, of the first user device relative to a target anchor point in each current video frame of a first video stream corresponding to a first user, in a multi-user video scene of the first user and a second user;
obtaining first augmented reality data corresponding to at least one piece of augmented reality presentation information added by the second user on the first video stream, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information, and the object data information comprises first position information of the at least one piece of augmented reality presentation information relative to the target anchor point;
determine presentation position information of the at least one piece of augmented reality presentation information on a current video frame of the first video stream according to the first augmented reality data and the first real-time pose information;
and superimpose and present the at least one piece of augmented reality presentation information on the current video frame of the first video stream according to the presentation position information.
According to one aspect of the present application there is provided a first user device for presenting augmented reality data, the device comprising:
a one-one module, configured to perform, in a multi-user video scene of a first user and a second user, a positioning operation on a first video stream corresponding to the first user based on a target anchor point, determine first real-time pose information of the first user device relative to the target anchor point in each current video frame of the first video stream, and provide the first real-time pose information to a second user device;
and a one-two module, configured to provide first augmented reality data to the second user device, so that at least one piece of augmented reality presentation information is superimposed and presented on the first video stream at the second user device according to the first augmented reality data and the first real-time pose information, wherein the first augmented reality data comprises scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to the at least one piece of augmented reality presentation information located in the at least one scene.
According to another aspect of the present application there is provided a second user device for presenting augmented reality data, the device comprising:
a two-one module, configured to acquire first real-time pose information, provided by the first user device, of the first user device relative to a target anchor point in each current video frame of a first video stream corresponding to the first user, in a multi-user video scene of the first user and the second user;
a two-two module, configured to determine, in response to obtaining first augmented reality data provided by the first user device, presentation position information of at least one piece of augmented reality presentation information on a current video frame of the first video stream according to the first augmented reality data and the first real-time pose information, wherein the first augmented reality data comprises scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to the at least one piece of augmented reality presentation information located in the at least one scene;
and a two-three module, configured to superimpose and present the at least one piece of augmented reality presentation information on the current video frame of the first video stream according to the presentation position information.
According to another aspect of the present application, there is provided a third user device for presenting augmented reality data, the device comprising:
a three-one module, configured to acquire first augmented reality data provided by a first user device in a multi-person video scene of a first user and a second user, wherein the third user device is not in the multi-person video scene, and the first augmented reality data comprises anchor point data information corresponding to a target anchor point, scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to at least one piece of augmented reality presentation information located in the at least one scene;
and a three-two module, configured to identify the target anchor point in the camera live-action picture according to the anchor point data information, and to superimpose and present the at least one piece of augmented reality presentation information on the camera live-action picture based on real-time pose information of the third user device relative to the target anchor point and the first augmented reality data.
According to another aspect of the present application there is provided a first user device for presenting augmented reality data, the device comprising:
a four-one module, configured to perform, in a multi-user video scene of a first user and a second user, a positioning operation on a first video stream corresponding to the first user based on a target anchor point, determine first real-time pose information of the first user device relative to the target anchor point in each current video frame of the first video stream, and provide the first real-time pose information to a second user device, so that the second user adds at least one piece of augmented reality presentation information on the first video stream and the second user device superimposes and presents the at least one piece of augmented reality presentation information on the first video stream according to the first real-time pose information;
a four-two module, configured to acquire first augmented reality data provided by the second user device, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information;
and a four-three module, configured to superimpose and present the at least one piece of augmented reality presentation information on the first video stream according to the first augmented reality data and the first real-time pose information.
According to another aspect of the present application there is provided a second user device for presenting augmented reality data, the device comprising:
a five-one module, configured to acquire first real-time pose information, provided by the first user device, of the first user device relative to a target anchor point in each current video frame of a first video stream corresponding to the first user, in a multi-person video scene of the first user and the second user;
a five-two module, configured to obtain first augmented reality data corresponding to at least one piece of augmented reality presentation information added by the second user on the first video stream, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information, and the object data information comprises first position information of the at least one piece of augmented reality presentation information relative to the target anchor point;
a five-three module, configured to determine presentation position information of the at least one piece of augmented reality presentation information on a current video frame of the first video stream according to the first augmented reality data and the first real-time pose information;
and a five-four module, configured to superimpose and present the at least one piece of augmented reality presentation information on the current video frame of the first video stream according to the presentation position information.
Compared with the prior art, in a multi-user video scene of a first user and a second user, the present application performs a positioning operation on a first video stream corresponding to the first user based on a target anchor point, determines first real-time pose information of the first user device relative to the target anchor point in each current video frame of the first video stream, and provides the first real-time pose information to the second user device; by providing the first augmented reality data to the second user device, the second user can not only view the real-time video stream of the first user in the multi-user video scene, but also see, according to the first augmented reality data and the first real-time pose information, the AR content added at positions designated by the first user in the live-action space where the first user is currently located. This enhances the communication efficiency of both parties and markedly improves their communication experience; moreover, through the augmented reality data, the second user device can conveniently and quickly superimpose and present, on the real-time video stream of the first user, the AR content added at the positions designated by the first user in the live-action space, thereby meeting the AR needs of users.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flow chart of a method for presenting augmented reality data according to one embodiment of the application;
FIG. 2 illustrates a flow chart of a method for presenting augmented reality data according to one embodiment of the application;
FIG. 3 illustrates a flow chart of a method for presenting augmented reality data according to one embodiment of the application;
FIG. 4 illustrates a flow chart of a method for presenting augmented reality data according to one embodiment of the application;
FIG. 5 illustrates a flow chart of a method for presenting augmented reality data according to one embodiment of the application;
FIG. 6 illustrates a first user device structure diagram for presenting augmented reality data according to one embodiment of the application;
FIG. 7 illustrates a second user device structure diagram for presenting augmented reality data according to one embodiment of the application;
FIG. 8 illustrates a third user device structure diagram for presenting augmented reality data according to one embodiment of the application;
FIG. 9 illustrates a first user device structure diagram for presenting augmented reality data according to one embodiment of the application;
FIG. 10 illustrates a second user device structure diagram for presenting augmented reality data according to one embodiment of the application;
FIG. 11 illustrates an exemplary system that may be used to implement various embodiments described in the present application.
The same or similar reference numbers in the drawings refer to the same or similar parts.
Detailed Description
The application is described in further detail below with reference to the accompanying drawings.
In one exemplary configuration of the application, the terminal, the device of the service network, and the trusted party each include one or more processors (e.g., central processing units (Central Processing Unit, CPU)), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory in a computer-readable medium, random access memory (Random Access Memory, RAM) and/or non-volatile memory, such as read-only memory (Read Only Memory, ROM) or flash memory (Flash Memory). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (Phase-Change Memory, PCM), programmable random access memory (Programmable Random Access Memory, PRAM), static random access memory (Static Random Access Memory, SRAM), dynamic random access memory (Dynamic Random Access Memory, DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), digital versatile discs (Digital Versatile Disc, DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by the computing device.
The device referred to in the present application includes, but is not limited to, a user device, a network device, or a device formed by integrating a user device and a network device through a network. The user device includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (for example, through a touch pad), such as a smart phone or a tablet computer, and the mobile electronic product may adopt any operating system, such as the Android operating system or the iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic device (Programmable Logic Device, PLD), a field-programmable gate array (Field Programmable Gate Array, FPGA), a digital signal processor (Digital Signal Processor, DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing (Cloud Computing), where cloud computing is a kind of distributed computing, a virtual supercomputer composed of a group of loosely coupled computers. The network includes, but is not limited to, the Internet, wide area networks, metropolitan area networks, local area networks, VPN networks, wireless ad hoc networks (Ad Hoc networks), and the like. Preferably, the device may also be a program running on the user device, the network device, or a device formed by integrating the user device and the network device, the touch terminal, or the network device and the touch terminal through a network.
Of course, those skilled in the art will appreciate that the above-described devices are merely examples, and that other existing devices or devices that may appear in the future, if applicable to the present application, are also intended to be within the scope of the present application and are incorporated herein by reference.
In the description of the present application, the meaning of "a plurality" is two or more unless explicitly defined otherwise.
Fig. 1 shows a flow chart of a method for presenting augmented reality data according to an embodiment of the application, the method comprising step S11 and step S12. In step S11, in a multi-user video scene of a first user and a second user, a first user device performs positioning operation on a first video stream corresponding to the first user based on a target anchor point, determines first real-time pose information of the first user device in each current video frame of the first video stream relative to the target anchor point, and provides the first real-time pose information to the second user device; in step S12, the first user device provides first augmented reality data to the second user device, so that the at least one piece of augmented reality presentation information is presented on the first video stream in a superimposed manner on the second user device according to the first augmented reality data and the first real-time pose information, wherein the first augmented reality data includes scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to at least one piece of augmented reality presentation information located within the at least one scene.
In step S11, in a multi-user video scene of a first user and a second user, a first user device performs a positioning operation on a first video stream corresponding to the first user based on a target anchor point, determines first real-time pose information of the first user device relative to the target anchor point in each current video frame of the first video stream, and provides the first real-time pose information to the second user device. In some embodiments, the multi-person video scene includes, but is not limited to, either of a multi-person video conference and a multi-person video call. In some embodiments, there may be one or more second users, i.e., the multi-person video scene is a video conference or video call between the first user and the one or more second users. In some embodiments, the first user device is a device with a camera and a display screen; for example, the first user device may be a mobile phone or a tablet computer, or it may be a wearable AR device such as AR glasses or an AR helmet. In some embodiments, the first user obtains, through the camera, a first video stream corresponding to the live-action environment where the first user is currently located, and sends the first video stream to each second user in real time. In some embodiments, there may be a single target anchor point or more than one, where an anchor point is used to express an association between physical space and digital space. In some embodiments, the target anchor point is determined before the first video stream is captured: for example, the first user device captures the current environment through the camera, performs feature recognition on an object in the captured picture, creates an anchor point, and determines that anchor point as the target anchor point; as another example, if one or more previously created anchor points exist in the captured picture, the first user selects the target anchor point among the one or more anchor points. The first video stream corresponding to the current live-action environment is then obtained and sent to each second user in real time. In other embodiments, the target anchor point is determined after capturing the first video stream; for example, one or more already created anchor points exist in the first video stream and the first user selects the target anchor point among them, or the target anchor point is created by performing feature recognition on an object in the first video stream. In some embodiments, by recognizing the target anchor point, a positioning operation may be performed on each current video frame of the first video stream, first real-time pose information of the camera of the first user device relative to the target anchor point in each current video frame may be determined, and the first real-time pose information may be transmitted to each second user. For example, the positioning operation may include, but is not limited to, one or more of simultaneous localization and mapping (SLAM), 2D recognition, Wi-Fi/Bluetooth recognition, GPS positioning, large-space positioning, and the like.
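Purely as an illustrative sketch (not part of the patent text): once the target anchor point has been recognized, the first real-time pose information can be represented as the transform of the device camera relative to the anchor. Assuming a SLAM/tracking backend that reports camera and anchor poses as 4x4 homogeneous matrices in a common world frame, the per-frame relative pose might be computed as follows; the function and variable names are hypothetical.

```python
import numpy as np

def relative_pose(T_world_anchor: np.ndarray, T_world_camera: np.ndarray) -> np.ndarray:
    """Pose of the camera expressed in the target anchor's coordinate frame.

    Both inputs are 4x4 homogeneous transforms reported by a SLAM/tracking
    backend in the same world frame (an assumption of this sketch).
    """
    return np.linalg.inv(T_world_anchor) @ T_world_camera

# For each current video frame of the first video stream, the first user device
# could send this matrix (or an equivalent translation plus quaternion) to each
# second user device as the "first real-time pose information".
T_anchor_camera = relative_pose(np.eye(4), np.eye(4))  # identity poses used as placeholder values
```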
In step S12, the first user device provides first augmented reality data to the second user device, so that the at least one piece of augmented reality presentation information is superimposed and presented on the first video stream at the second user device according to the first augmented reality data and the first real-time pose information, wherein the first augmented reality data includes scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to at least one piece of augmented reality presentation information located within the at least one scene. In some embodiments, the first user may create at least one scene associated with the target anchor point and determine corresponding scene data information, where the scene data information includes, but is not limited to, one or more of: uuid (scene identification information, a globally unique number), name, hrsObjectids (uuid array of the objects in the scene), translation (three-dimensional position information), rotation (rotation angle), scale, follow (follow strategy), faceCamera (whether facing the camera), hidden (whether hidden), visualRange (visible distance), an array of sub-scenes representing the set of sub-scenes, etc. The follow strategy determines how an entity (anchor points, scenes, objects, actions, events, etc. in the augmented reality data may all be referred to as entities) is displayed at a spatial position; the follow strategy includes, but is not limited to, one or more of: the follow strategy type, the uuid of the followed entity, a screen-position offset, the alignment used when following, the screen level, etc., and the follow strategy type includes at least one of: follow space (position relative to the spatial coordinate system), follow screen (the entity is always displayed on the screen), follow camera (position relative to the camera, which can be represented by a transformation), follow object (position relative to a certain object, which can be represented by a transformation), follow link/anchor point (position relative to a certain anchor point, which can be represented by translation, rotation and scale). When faceCamera is true, the entity adjusts its angle to follow the camera so that its front face always faces the camera; when hidden is true, the entity is not rendered; visualRange indicates the distance from which the entity can be seen, i.e., the entity is visible only when the distance from the camera to the entity is less than visualRange; the sub-scenes constitute a subset of the AR material objects to be rendered, and one sub-scene belongs to only one scene.
In some embodiments, the types of augmented reality presentation information (i.e., AR material objects) include, but are not limited to, one or more of 3D models, text, pictures, audio, video, web pages, PDF documents, applications, points, lines, polygons, ellipses, free brushes, and the like. In some embodiments, the first user may set augmented reality presentation information located within each of the at least one scene and determine corresponding object data information, which has an inclusion relationship with the scene data information of the scene, i.e., the scene data information of the scene may include some or all of the object data information, such as the object identification information. In some embodiments, a piece of augmented reality presentation information is the smallest unit to be rendered in the scene, and the object data information defines a single entity with a specific function and its position, angle and scaling. In some embodiments, the object data information includes, but is not limited to, one or more of uuid (object identification information, a globally unique number), name, transform (three-dimensional position information), rotation (rotation angle), scale (zoom), follow (follow strategy), faceCamera (whether facing the camera), hidden (whether hidden), type (type of the object), uri (resource address of the object), visualRange (visible range), the attribute set of the object, and the like, where the type of the object determines the rendering effect, specific attributes and specific actions of the object.
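Purely to make the data model above concrete, the following sketch shows one possible in-memory layout for scene data information and object data information. The field names echo those listed in the description (uuid, name, translation/transform, rotation, scale, follow, faceCamera, hidden, visualRange, type, uri), but the exact schema, types and defaults are assumptions of this sketch, not the patent's normative format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FollowStrategy:
    follow_type: str = "anchor"          # space / screen / camera / object / anchor (assumed enum)
    target_uuid: Optional[str] = None    # uuid of the followed entity, if any
    screen_offset: Optional[List[float]] = None

@dataclass
class ObjectData:
    uuid: str
    name: str
    type: str                            # 3D model, text, picture, video, web page, ...
    uri: str                             # resource address of the object
    translation: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])
    rotation: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0, 1.0])  # quaternion (assumed)
    scale: List[float] = field(default_factory=lambda: [1.0, 1.0, 1.0])
    follow: FollowStrategy = field(default_factory=FollowStrategy)
    face_camera: bool = False
    hidden: bool = False
    visual_range: float = 50.0           # distance within which the entity is visible (assumed unit: metres)

@dataclass
class SceneData:
    uuid: str
    name: str
    object_uuids: List[str] = field(default_factory=list)  # uuids of the objects contained in the scene
    translation: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])
    rotation: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0, 1.0])
    scale: List[float] = field(default_factory=lambda: [1.0, 1.0, 1.0])
    hidden: bool = False
```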
In some embodiments, the object data information corresponding to each piece of augmented reality presentation information includes first position information of that augmented reality presentation information relative to the target anchor point. The first position information may include only a three-dimensional position in space; preferably, it may also include other information besides the three-dimensional position, including but not limited to one or more of posture information (rotation angle), scale information, and the like. For example, the posture information of the augmented reality presentation information may be determined through the positioning operation, e.g., its angle is determined according to the gravitational acceleration, so as to ensure that the augmented reality presentation information is placed upright in the physical space. For example, the transform (three-dimensional position information) in the object data information is used to store the spatial three-dimensional position in the first position information, and the rotation (rotation angle) in the object data information is used to store the posture information in the first position information. In some embodiments, the first position information of the augmented reality presentation information relative to the target anchor point may be set directly by the first user, or the presentation position/addition position (a 2D point position) of the augmented reality presentation information on a shooting picture may be set by the first user, where the shooting picture may be a picture captured by the first user device through the camera in the current environment before the multi-person video scene (i.e., before shooting the first video stream), or a picture of the first video stream being shot, i.e., the augmented reality presentation information may be set before or after shooting of the first video stream begins. The first user device may then determine the first position information of the augmented reality presentation information relative to the target anchor point according to the presentation position/addition position. For example, after SLAM algorithm initialization is completed, the 3D point cloud of the environment is computed in real time; when the first user adds the augmented reality presentation information at a 2D point position in the shooting picture through a third operation, a plane in the world coordinate system is fitted using the 3D point cloud of the current scene, so that a plane expression is obtained.
Meanwhile, a ray in the camera coordinate system is constructed through the camera optical center and the coordinates of the augmented reality presentation information on the image plane; the ray is then converted into the world coordinate system, and the intersection point of the ray and the plane is calculated from the ray expression and the plane expression in the world coordinate system. This intersection point is the 3D space point (the first position information) corresponding to the 2D point in the shooting picture. As another example, the first user device is equipped with two different sensors, an RGB camera and a depth camera, which acquire a 2D image and a depth image simultaneously. When the 2D point coordinates of the augmented reality presentation information in the shooting picture are obtained, the pixel coordinates in the depth image corresponding to those image coordinates are calculated using the depth image recorded at the same time as the 2D image, and the depth information is then read at those pixel coordinates. With the depth information of the augmented reality presentation information obtained in this way, the 3D space point (first position information) in the world coordinate system can be calculated. As another example, the 2D point P2d of the augmented reality presentation information in the shooting picture is mapped to a straight line L3dC in the camera coordinate system, the 3D point cloud obtained by SLAM is mapped into the camera coordinate system to obtain 3D points in the camera coordinate system, the point P3d'C in the 3D point cloud closest to the straight line L3dC is found, a point P3dC is truncated on L3dC at the depth of P3d'C, and P3dC is converted into the world coordinate system to obtain the 3D space point (first position information) corresponding to the 2D point. Alternatively, the 3D point cloud in the world coordinate system is mapped to the pixel coordinate system to obtain a plurality of 2D points. A region is drawn around the point P2d, and the 2D points mapped from the 3D point cloud that fall within the region are denoted 2Ds. The depth at P2d is then estimated from these 2Ds points in either of two ways. 1) Weighted average: each 2Ds point is assigned a weight according to its distance from P2d, with closer points receiving larger weights and farther points smaller weights; the 2Ds points correspond to 3D points in the camera coordinate system whose z-values are depth values, and the weighted average of these depth values gives the final depth value; an estimated point P3dC is truncated at this final depth value on the straight line in the camera coordinate system and then converted into the world coordinate system to obtain P3d (first position information). 2) The depth value of the camera-coordinate 3D point corresponding to a 2Ds point is taken directly as the truncation depth value, the estimated point P3dC in the camera coordinate system is obtained, and it is then converted into the world coordinate system to obtain P3d (first position information).
It will be appreciated by those skilled in the art that the above-described method is merely exemplary, and that other methods of determining the first location information of the augmented reality presentation information relative to the target anchor point, as may be present, are also within the scope of the present application and are incorporated herein by reference.
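As an illustrative sketch of the first of the alternatives above (intersecting the viewing ray with a plane fitted from the SLAM point cloud), the following assumes a pinhole camera with intrinsics K, a camera pose in the world frame, and an already-fitted plane; the names are hypothetical and this is not presented as the patent's implementation.

```python
import numpy as np

def pixel_to_world_point(pixel_uv, K, T_world_camera, plane_n, plane_d):
    """Intersect the viewing ray through a 2D pixel with a plane fitted in world coordinates.

    pixel_uv:        (u, v) position where the AR content was added on the frame
    K:               3x3 pinhole intrinsics of the capturing camera
    T_world_camera:  4x4 camera-to-world transform from the positioning operation
    plane_n, plane_d: plane n.x + d = 0 fitted from the SLAM 3D point cloud
    Returns the 3D point in world coordinates; the first position information can
    then be re-expressed relative to the target anchor point.
    """
    # Ray direction in the camera frame through the pixel (optical centre at the origin).
    uv1 = np.array([pixel_uv[0], pixel_uv[1], 1.0])
    dir_cam = np.linalg.inv(K) @ uv1

    # Transform the ray origin and direction into the world frame.
    R = T_world_camera[:3, :3]
    origin_world = T_world_camera[:3, 3]
    dir_world = R @ dir_cam

    # Solve n.(o + t * dir) + d = 0 for the ray parameter t.
    denom = plane_n @ dir_world
    if abs(denom) < 1e-9:
        raise ValueError("ray is parallel to the fitted plane")
    t = -(plane_n @ origin_world + plane_d) / denom
    return origin_world + t * dir_world
```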
In some embodiments, the scene data information and the object data information are packaged together according to a predetermined data format (e.g., JSON) to generate augmented reality data. In some embodiments, the augmented reality data is a data format generated based on a predetermined AR description standard, and the standard is not only a file format, but also a delivery format of data content when the API is called, so that an efficient, extensible and interoperable format can be provided for content transmission and loading required by the AR, differences between different rendering engines at different ends are bridged, effective utilization of resources is facilitated, and repeated development is avoided.
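A minimal sketch of the packaging step, reusing the hypothetical dataclasses from the earlier sketch: the anchor, scene and object records are serialized into a single JSON payload that can be stored as a file or handed over directly in an API call. The field names and the helper function are assumptions, not the predetermined AR description standard itself.

```python
import json
from dataclasses import asdict

def package_augmented_reality_data(anchor_uuid, scenes, objects) -> str:
    """Serialize scene data information and object data information into one JSON payload."""
    payload = {
        "anchor": {"uuid": anchor_uuid},
        "scenes": [asdict(s) for s in scenes],    # SceneData instances
        "objects": [asdict(o) for o in objects],  # ObjectData instances
    }
    return json.dumps(payload)

# The resulting string stands in for the "first augmented reality data" that is
# sent to each second user device, either directly or via a network device.
```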
In some embodiments, during the multi-person video scene, the first user device sends the first video stream corresponding to the first user to the second user device corresponding to each second user in real time, and after the first user device generates the first augmented reality data, the first augmented reality data is sent directly to each second user device, or sent to each second user device via a network device. After a second user device receives the first augmented reality data, it can calculate, according to the first position information of each piece of augmented reality presentation information in the first augmented reality data relative to the target anchor point and the first real-time pose information of the first user device relative to the target anchor point in each current video frame of the first video stream, the presentation position information of that augmented reality presentation information on the current video frame, and superimpose and present the augmented reality presentation information on the current video frame according to the presentation position information.
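To illustrate how a second user device could combine the first position information (relative to the target anchor) with the first real-time pose information to obtain presentation position information on the current video frame, here is a minimal pinhole-projection sketch with hypothetical names; the patent does not specify the rendering pipeline at this level of detail, so this is an assumption-laden example only.

```python
import numpy as np

def presentation_position(p_anchor, T_anchor_camera, K):
    """Project an anchor-relative 3D point into pixel coordinates of the current frame.

    p_anchor:        3D position of the AR presentation information relative to the target anchor
    T_anchor_camera: first real-time pose information (camera pose expressed in the anchor frame)
    K:               3x3 intrinsics of the first user device's camera (assumed to be shared)
    Returns (u, v) pixel coordinates, or None if the point lies behind the camera.
    """
    # Express the point in the camera frame: invert camera-in-anchor to get anchor-in-camera.
    T_camera_anchor = np.linalg.inv(T_anchor_camera)
    p_cam = T_camera_anchor[:3, :3] @ np.asarray(p_anchor) + T_camera_anchor[:3, 3]
    if p_cam[2] <= 0:
        return None  # behind the camera, nothing to overlay on this frame
    uv = K @ p_cam
    return uv[:2] / uv[2]  # pixel position at which to overlay the AR content
```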
In some embodiments, there may be a plurality of first users in the multi-person video scene, that is, each first user may send the first augmented reality data and the first real-time pose information corresponding to the first user to other user devices corresponding to other users (other first users or second users) in the multi-person video scene, so that the other user devices superimpose and present at least one augmented reality presentation information in the first augmented reality data on the first video stream corresponding to the first user.
In some embodiments, the method further comprises: the first user equipment determines the target anchor point according to a first operation executed by the first user; determining scene data information corresponding to at least one scene associated with the target anchor point according to a second operation performed by the first user on the target anchor point; and determining object data information corresponding to at least one piece of augmented reality presentation information positioned in the at least one scene according to a third operation executed by the first user on the at least one scene. In some embodiments, the first operation may be that the first user selects a target anchor point from one or more anchor points that have been created, or the first operation may be that the first user device scans a photographed picture (for example, a first video stream, and for example, a picture photographed by the first user device through a camera in the current environment before the multi-person video scene), so as to perform feature recognition on an object in the photographed picture, create an anchor point, and determine the anchor point as the target anchor point. In some embodiments, in response to a second operation performed by the first user for the target anchor, such as a scene creation operation by the first user in the captured picture, at least one scene associated with the target anchor is created and corresponding scene data information is determined. In some embodiments, in response to a third operation performed by the first user on a certain scene in the at least one scene, such as clicking/selecting/dragging the augmented reality presentation information in the shot screen, the first user puts at least one piece of augmented reality presentation information into the scene, and determines object data information corresponding to the augmented reality presentation information, where the object data information has a containing relationship with the scene data information of the scene in which the object data information is put, that is, some or all of the object data information, such as object identification information, is contained in the scene data information.
In some embodiments, at least one of the first operation, the second operation, and the third operation is performed with respect to the first video stream. In some embodiments, at least one of the first operation, the second operation, and the third operation is an operation performed by the first user on a first video stream corresponding to the first user during the multi-person video scene, that is, the first user determines the first augmented reality data during the multi-person video scene. In some embodiments, the target anchor point may be undefined prior to the multi-person video scene, with the target anchor point, scene data information, and object data information being determined during the multi-person video scene. In some embodiments, the target anchor point may be determined prior to the multi-person video scene, with the scene data information and the object data information being determined during the multi-person video scene. In some embodiments, the target anchor point and scene data information may be determined prior to the multi-person video scene, with the object data information being determined during the multi-person video scene.
In some embodiments, at least one of the first operation, the second operation, and the third operation is an operation performed by the first user on a shooting picture corresponding to the current environment where the first user is located, where the shooting picture is acquired by a camera of the first user equipment before the multi-person video scene. In some embodiments, the target anchor point may be determined prior to the multi-person video scene, with the scene data information and the object data information being determined during the multi-person video scene. In some embodiments, the target anchor point and the scene data information may be determined prior to the multi-person video scene, with the object data information being determined during the multi-person video scene. In some embodiments, the target anchor point, the scene data information, and the object data information may all be determined prior to the multi-person video scene, on the basis of which the first user may also perform an update operation during the multi-person video scene, e.g., update the target anchor point, the scene data information, and/or the object data information during the multi-person video scene.
In some embodiments, at least one of the first operation, the second operation, and the third operation is an operation performed by the first user on a captured picture (for example, the first video stream, or a picture of the current environment captured by the camera of the first user device before the multi-person video scene). For example, the operation performed on the captured picture is an input operation performed by the first user through an input device of the first user equipment, such as a touch operation on the display screen of the first user equipment.
In some embodiments, at least one of the first operation, the second operation, and the third operation is a target operation executed by the first user in the live-action space where the first user is currently located; wherein the method further comprises: identifying the target operation in the captured picture. In some embodiments, the target operation performed in the live-action space may be an operation (e.g., a gesture operation) performed by the first user in the live-action space in front of the camera of the first user device; the target operation is captured by the camera, and one or more pieces of information including, but not limited to, the operation content, operation object, operation type, and operation position of the target operation need to be identified in the captured picture.
In some embodiments, the providing the first augmented reality data to a second user device comprises: transmitting the first augmented reality data to a network device, wherein the first augmented reality data is stored on the network device so that the second user device acquires the first augmented reality data from the network device. In some embodiments, the first user device may send the first augmented reality data to the network device for storage, and the second user device then acquires the first augmented reality data from the network device; for example, a second user device corresponding to a second user who newly joins the video conference or video call during the multi-person video scene may acquire the first augmented reality data stored on the network device from the network device.
In some embodiments, the first augmented reality data is stored on the first user device and the second user device. In some embodiments, the first user device may store the first augmented reality data locally after generating it, and the second user device may also store the first augmented reality data provided by the first user device locally after acquiring it. In some embodiments, the first user may perform an update operation on the first augmented reality data during the multi-person video scene, including but not limited to at least one of adding/deleting/modifying a target anchor point, adding/deleting/modifying a scene associated with a target anchor point, and adding/deleting/modifying augmented reality presentation information located within a scene, and the updated first augmented reality data or the delta data of the first augmented reality data is synchronized to and stored on the second user device; alternatively, a certain second user may perform an update operation on the first augmented reality data during the multi-person video scene, and the updated first augmented reality data or the delta data of the first augmented reality data is synchronized to and stored on the first user device or the other second user devices.
In some embodiments, the method further comprises: the first user equipment, in response to an update operation performed by the first user on the first augmented reality data, generates first incremental data and provides the first incremental data to the second user equipment, so that the second user equipment superimposes and presents at least one piece of augmented reality presentation information on the first video stream according to the first incremental data and the first real-time pose information. The first augmented reality data on which the first user performs the update operation may be the first augmented reality data generated by the first user device, or may be first augmented reality data determined after an update operation performed by a second user, a third user, or another first user, which is not limited herein. In some embodiments, the first user may perform an update operation on the first augmented reality data, and in response to the update operation, the first user device may generate first incremental data corresponding to the first augmented reality data and provide it to each second user device, for example, by sending the first incremental data directly to the second user device, sending it to the second user device via the network device, or first synchronizing it to the network device for storage so that the second user device then obtains it from the network device. In some embodiments, the first incremental data may be generated directly in response to the update operation, or the updated first augmented reality data may be generated first and the first incremental data then obtained by comparing the first augmented reality data before and after the update. In some embodiments, the second user device calculates presentation position information of the updated augmented reality presentation information on each current video frame of the first video stream according to the first incremental data and the first real-time pose information, and superimposes and presents the updated augmented reality presentation information on the current video frame according to the presentation position information, where the updated augmented reality presentation information refers to the augmented reality presentation information included after the update operation of the first user has been applied to the first augmented reality data. The presentation position information may be calculated directly from the first incremental data and the first real-time pose information, or the updated first augmented reality data may be obtained first from the first incremental data and the locally stored first augmented reality data, and the presentation position information then calculated from the updated first augmented reality data and the first real-time pose information.
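As an illustration only, the following sketch shows one way incremental (delta) data could be derived by comparing the first augmented reality data before and after an update and then re-applied on the receiving device; the dictionary-based delta format and the function names are assumptions for the example, not a format defined by this application.

```python
def make_delta(before: dict, after: dict) -> dict:
    """Derive incremental data by comparing augmented reality data before and
    after an update operation. Entries (e.g. object data keyed by uuid) that
    appear, disappear, or change are recorded (illustrative sketch)."""
    delta = {"added": {}, "removed": [], "modified": {}}
    for key, value in after.items():
        if key not in before:
            delta["added"][key] = value
        elif before[key] != value:
            delta["modified"][key] = value
    delta["removed"] = [key for key in before if key not in after]
    return delta

def apply_delta(local_data: dict, delta: dict) -> dict:
    """Rebuild the updated augmented reality data from locally stored data plus
    the received incremental data, as described above."""
    updated = {k: v for k, v in local_data.items() if k not in delta["removed"]}
    updated.update(delta["added"])
    updated.update(delta["modified"])
    return updated
```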
In some embodiments, the method further comprises: the first user equipment receives second incremental data sent by the second user equipment, wherein the second incremental data is generated by the second user equipment in response to an updating operation executed by the second user for the first augmented reality data; and superposing and presenting at least one piece of augmented reality presentation information on the first video stream according to the second increment data and the first real-time pose information. In some embodiments, a certain second user may also perform an update operation with respect to the first augmented reality data, and in response to the update operation, the second user device may generate second incremental data with respect to the first augmented reality data, and provide the second incremental data to the first user device and other second user devices corresponding to other second users. In some embodiments, the first user device or the other second user devices calculate presentation position information of the updated augmented reality presentation information on each current video frame of the first video stream according to the acquired second incremental data and the first real-time pose information, and superimpose and present the updated augmented reality presentation information on the current video frame according to the presentation position information. The updated augmented reality presentation information refers to augmented reality presentation information included after the first augmented reality data performs an update operation of the second user.
In some embodiments, the first augmented reality data further includes anchor point data information corresponding to the target anchor point; the anchor point data information comprises an anchor point type, and further comprises anchor point resources or anchor point resource address information; wherein the anchor point type includes any one of the following: a picture; picture feature points; a point cloud; a point cloud map; a two-dimensional code; a cylinder; a cube; a geographic location; a face; a bone; a wireless signal. In some embodiments, the anchor data information includes, but is not limited to, one or more of uuid (anchor identification information, a globally unique number), name, type (anchor type, different anchor types corresponding to different recognition algorithms), url (resource address of the anchor), snapshotImage (guide map address of the anchor), width (width of the anchor in physical space), height (height of the anchor in physical space), attributes (attributes of the anchor; different anchor types may correspond to different attributes), etc.; those skilled in the art will understand that the anchor data information may include only one of uuid, name, type, url, snapshotImage, width, height, attributes, or may include a combination of several of them, without limitation. In some embodiments, different anchor types may correspond to different attributes; for example, when the anchor type is a point cloud, the attributes include, but are not limited to, one or more of the corresponding algorithm name, algorithm version, corresponding live-action screenshot, and live-action screenshot address, and when the anchor type is a geographic location, the attributes include, but are not limited to, one or more of GIS coordinates, longitude, latitude, and altitude. In some embodiments, if there are multiple target anchors, the first user may set an association relationship between the target anchors and the at least one scene, that is, one target anchor may be associated with at least one scene, and one scene may also be associated with at least one target anchor; the association relationship may be described by adding, to the anchor data information corresponding to each target anchor, the scene identification information (for example, uuid) of the scene data information of the scene associated with that target anchor, or by adding, to the scene data information corresponding to each scene, the anchor identification information (for example, uuid) of the anchor data information of the target anchor associated with that scene. In some embodiments, the anchor data information further includes at least one anchor resource corresponding to the anchor or storage address information of the anchor resource (for example, a picture file address, a feature point file address corresponding to the picture, a point cloud file address, etc.), where the corresponding anchor resource (for example, a two-dimensional code, an identification map, a real environment, feature points, a point cloud, location information, wireless signal information, etc. in physical space) may be obtained through the address information.
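By way of illustration only, an anchor data entry mirroring the fields listed above might look like the following Python dictionary (which maps one-to-one onto JSON); all values, including the example.com addresses and the final scene-reference key, are placeholders rather than fields mandated by this application.

```python
# Illustrative anchor point data information for a picture-type anchor.
anchor_data = {
    "uuid": "anchor-0001",          # anchor identification information, globally unique (placeholder)
    "name": "meeting-room-poster",
    "type": "picture",              # anchor type; selects the recognition algorithm
    "url": "https://example.com/anchors/poster.jpg",                  # anchor resource address (assumed)
    "snapshotImage": "https://example.com/anchors/poster_guide.jpg",  # guide map address (assumed)
    "width": 0.8,                   # width of the anchor in physical space (e.g. metres)
    "height": 0.6,                  # height of the anchor in physical space
    "attributes": {                 # type-specific attributes
        "algorithmName": "2d-image-recognition",
        "algorithmVersion": "1.0",
    },
    "sceneUuids": ["scene-0001"],   # one possible way to record the associated scenes (assumption)
}
```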
In some embodiments, the method further comprises: the first user equipment provides the first augmented reality data to a third user equipment, wherein the third user is not in the multi-person video scene, and the third user equipment identifies the target anchor point in the camera live-action picture of the third user equipment according to the first augmented reality data and superimposes and presents the at least one piece of augmented reality presentation information on the camera live-action picture. In some embodiments, the third user is not in the multi-person video scene, i.e., the third user is not engaged in the video call or video conference in which the first user and the second user are located. In some embodiments, there may be one or more third users. In some embodiments, the first user device may send the first augmented reality data directly to the third user device corresponding to the third user, send it to the third user device via the network device, or first synchronize the first augmented reality data to the network device for storage so that the third user device then obtains it from the network device. In some embodiments, the camera on the third user device is opened and the third user device captures a live-action picture through the camera; then, based on the obtained first augmented reality data provided by the first user device, the target anchor point is identified in the real environment (for example, in the camera live-action picture) according to the anchor point data information in the first augmented reality data (such as the anchor point resources and/or other information in the anchor point data information). If the target anchor point is identified, the position of the at least one piece of augmented reality presentation information in the camera live-action picture is determined based on the first position information in the object data information corresponding to the at least one piece of augmented reality presentation information contained in the scene associated with the target anchor point and the real-time pose information of the third user device relative to the target anchor point, and the at least one piece of augmented reality presentation information is superimposed and presented on the camera live-action picture according to that position; for example, the display device of the third user device displays the live-action picture captured by the camera and superimposes, on the displayed live-action picture, the at least one piece of augmented reality presentation information contained in the scene associated with the target anchor point. In this way, although the third user is not in the multi-person video scene, the third user can still view the at least one piece of augmented reality presentation information based on the live-action picture displayed by the display device.
In some embodiments, in response to an association operation performed by the first user, the second user, or the third user, an association relationship between anchor point data information and scene data information is established, and at least one piece of link data information describing the association relationship is generated, where the link data information includes the anchor identification information of the anchor point data information corresponding to a target anchor point and the scene identification information of the scene data information corresponding to a scene associated with that anchor point. The link data information defines how the target anchor point is used, and describes the association relationship and/or the position relationship between the scene and the target anchor point; for example, each piece of link data information includes the identification information of one target anchor point, the scene identification information of the scene data information of the scene corresponding to that target anchor point, and/or the relative position information of the scene with respect to the target anchor point, so the link data information defines where in real space the scene should be presented. In some embodiments, the link data information includes, but is not limited to, one or more of uuid (link identification information, a globally unique number), name, type (link type), an anchor reference (the uuid of the AR anchor point that the link describes and how that anchor is used), translation (three-dimensional position information, relative to the scene coordinate system), rotation (rotation angle, relative to the scene coordinate system), scale (relative to the scene coordinate system), a scene reference (the uuid of the scene associated with the anchor point, i.e., the scene triggered by the anchor point), conditions (a condition array describing the conditions under which the link is valid), and the like.
In some embodiments, each piece of link data information further includes the pose relationship between the target anchor point and the scene with which it is associated. In some embodiments, the pose relationship includes, but is not limited to, translation (three-dimensional position information, relative to the scene coordinate system), rotation (a quaternion describing the rotation angle, relative to the scene coordinate system), scale (the scale in the three dimensions, relative to the scene coordinate system), and the like.
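Purely as an illustration, a link entry carrying this pose relationship could look like the following Python dictionary (equivalent to JSON); the key names "anchorUuid" and "sceneUuid" stand in for the anchor and scene reference fields whose exact names are not reproduced here, and every value is a placeholder.

```python
# Illustrative link data information associating one target anchor with one scene.
link_data = {
    "uuid": "link-0001",                 # link identification information, globally unique (placeholder)
    "name": "poster-to-showroom",
    "type": "anchor-scene",              # link type (assumed value)
    "anchorUuid": "anchor-0001",         # which target anchor the link describes (assumed key name)
    "sceneUuid": "scene-0001",           # which scene the anchor triggers (assumed key name)
    "translation": [0.0, 0.0, 0.2],      # scene position, relative to the scene coordinate system
    "rotation": [0.0, 0.0, 0.0, 1.0],    # quaternion describing the rotation
    "scale": [1.0, 1.0, 1.0],            # per-axis scale
    "conditions": [],                    # conditions under which the link is valid
}
```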
In some embodiments, the method further comprises: the first user device, in response to an update operation executed by the first user on the first augmented reality data, generates first incremental data and provides the first incremental data to the third user device, and the third user device identifies the target anchor point corresponding to the first incremental data in the camera live-action picture according to the first incremental data and superimposes and presents at least one piece of augmented reality presentation information on the camera live-action picture. In some embodiments, the first user may perform an update operation on the first augmented reality data, in response to which the first user device may generate first incremental data relative to the first augmented reality data and provide it to the third user device, e.g., send the first incremental data directly to the third user device, send it to the third user device via the network device, or synchronize it to the network device for storage so that the third user device then obtains it from the network device. In some embodiments, the third user device may identify the updated target anchor point in the camera live-action picture according to the first incremental data; if the updated target anchor point is identified, the third user device determines presentation position information of the updated at least one piece of augmented reality presentation information in the camera live-action picture based on the first position information in the object data information corresponding to the updated at least one piece of augmented reality presentation information contained in the updated scene associated with the updated target anchor point and the real-time pose information of the third user device relative to the updated target anchor point, and superimposes and presents the updated at least one piece of augmented reality presentation information according to the presentation position information. The presentation position information may be calculated directly based on the first incremental data, or, where the first augmented reality data is stored locally on the third user device, the updated first augmented reality data may be obtained first from the first incremental data and the locally stored first augmented reality data, and the presentation position information then calculated from the updated first augmented reality data. The updated target anchor point, the updated scene, and the updated at least one piece of augmented reality presentation information refer to the target anchor point, the scene, and the at least one piece of augmented reality presentation information included after the update operation of the first user has been applied to the first augmented reality data.
In some embodiments, the method further comprises: the first user equipment receives third incremental data sent by the third user equipment, wherein the third incremental data is generated by the third user equipment in response to an update operation executed by the third user on the first augmented reality data; and superimposes and presents at least one piece of augmented reality presentation information on the first video stream according to the third incremental data and the first real-time pose information. In some embodiments, the third user may also perform an update operation on the first augmented reality data, in response to which the third user device may generate third incremental data relative to the first augmented reality data and provide the third incremental data to the first user device and the second user devices corresponding to the second users. In some embodiments, the first user device or the second user device calculates presentation position information of the updated augmented reality presentation information on each current video frame of the first video stream according to the acquired third incremental data and the first real-time pose information, and superimposes and presents the updated augmented reality presentation information on the current video frame according to the presentation position information. The updated augmented reality presentation information refers to the augmented reality presentation information included after the update operation of the third user has been applied to the first augmented reality data.
In some embodiments, the scene data information corresponding to each scene includes object identification information of object data information included in the scene. In some embodiments, the scene data information corresponding to each scene may include object identification information (e.g., uuid) of object data information corresponding to at least one augmented reality presentation information located in the scene, where the object identification information may be included in the scene data information in the form of an array (e.g., hrsObjectids (uuid array of objects)), which is not limited herein.
In some embodiments, the at least one scene includes at least one parent scene and at least one sub-scene, and the scene data information corresponding to each parent scene further includes the scene identification information or scene name information of at least one sub-scene included in that parent scene. In some embodiments, the at least one scene includes a parent scene and at least one sub-scene, in which case the scene data information corresponding to the parent scene further includes the scene identification information (e.g., uuid) or scene name information (e.g., name) of the at least one sub-scene. In some embodiments, the parent and sub-scenes are organized in a parent-child hierarchy defined directly by the nesting of the JSON: a parent scene is the root node of its sub-scenes, and a sub-scene may itself act as a parent scene and thus contain one or more further sub-scenes. In some embodiments, the scene data information of a sub-scene includes, but is not limited to, one or more of uuid (a globally unique number), name, transform (three-dimensional position information), rotation (rotation angle), scale (zoom), follow strategy, faceCamera (whether it faces the camera), hidden (whether it is hidden), visual range, hrsObjectids (uuid array of objects), subsVirtualScens (sub-scene array), and the like.
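As a purely illustrative sketch, a parent scene with one sub-scene might be described by the following Python dictionary (equivalent JSON); the field spellings follow the listing above and all values are placeholders.

```python
# Illustrative scene data information: the JSON nesting itself defines the
# parent-child hierarchy, and objects are referenced by their uuids.
scene_data = {
    "uuid": "scene-0001",                      # globally unique scene identifier (placeholder)
    "name": "showroom",
    "transform": [0.0, 0.0, 0.0],              # three-dimensional position information
    "rotation": [0.0, 0.0, 0.0, 1.0],
    "scale": [1.0, 1.0, 1.0],
    "faceCamera": False,                       # whether the scene content faces the camera
    "hidden": False,
    "hrsObjectids": ["obj-0001", "obj-0002"],  # uuid array of the objects placed in this scene
    "subsVirtualScens": [                      # sub-scene array (spelling as listed above)
        {
            "uuid": "scene-0002",
            "name": "showroom-detail",
            "hrsObjectids": ["obj-0003"],
            "subsVirtualScens": [],
        }
    ],
}
```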
In some embodiments, the object data information includes object type information, and the object data information further includes object resources or object resource address information; wherein the object type information includes any one of the following: a 3D model; text; a picture; audio; video; a web page; a PDF document; an application program; a point; a polyline; a polygon; an ellipse; a free brush. In some embodiments, the object data information further includes an object resource (i.e., an AR material resource) or the storage address of the object resource, where the storage address may be a relative path; alternatively, if the encoded binary AR material resource is embedded directly in the augmented reality data, the storage address information may be a data URI, in which case the media type of the URI must match the encoded content.
In some embodiments, the object data information further includes specific attribute information that matches the object type. In some embodiments, the object data information further includes specific attribute information matched with the object type of the augmented reality presentation information, and the specific attribute information included in the object data information of different object types may differ. For example, the specific attribute information of object data information of the 3D model type includes, but is not limited to, a resource address (e.g., a URI); that of the text type includes, but is not limited to, one or more of the text content, width, height, font size, font color, background color, frame color, horizontal/vertical alignment mode, and whether to display and follow an object; that of the picture/web page/PDF document type includes, but is not limited to, one or more of a resource address (e.g., a URI), width, and height; that of the audio type includes, but is not limited to, one or more of a resource address (e.g., a URI), whether to play automatically, whether to loop, the volume, and whether to display a playback control bar; that of the video type includes, but is not limited to, one or more of a resource address (e.g., a URI), width, height, whether to play automatically, whether to loop, the volume, whether to display a playback toolbar, and the playback mode; that of the point type includes, but is not limited to, one or more of a color and a size; that of the polyline type includes, but is not limited to, one or more of a color, a size, the polyline data, and the line style; that of the polygon type includes, but is not limited to, one or more of a fill color, the polygon data, the bounding box size, the bounding box color, and the bounding box style; that of the ellipse type includes, but is not limited to, one or more of a fill color, the height of the bounding box, and the like; and that of the free brush type includes, but is not limited to, one or more of a color, the brush thickness, and the content data of the free brush.
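Illustratively, object data information for a text-type piece of augmented reality presentation information could combine the common fields with the text-specific attributes listed above, as in the following Python dictionary (equivalent JSON); the key names under "attributes" and all values are assumptions made for the example.

```python
# Illustrative object data information for a text-type presentation object.
object_data = {
    "uuid": "obj-0001",                    # object identification information (placeholder)
    "type": "text",                        # object type information
    "transform": [0.1, 0.0, 0.3],          # first position information relative to the target anchor
    "rotation": [0.0, 0.0, 0.0, 1.0],      # orientation of the object
    "scale": [1.0, 1.0, 1.0],
    "attributes": {                        # specific attribute information matched to the text type
        "text": "Hello from the first user",
        "width": 0.4,
        "height": 0.1,
        "fontSize": 24,
        "fontColor": "#FFFFFF",
        "backgroundColor": "#00000080",
        "horizontalAlignment": "center",
        "verticalAlignment": "middle",
    },
}
```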
In some embodiments, the data format of the first augmented reality data is the JSON type. In some embodiments, the data format of the augmented reality data is JSON (JavaScript Object Notation), that is, a data format in the form of key-value pairs; one or more of the scene data information, object data information, link data information, and the like in the augmented reality data is of the JSON type, and the plain-text JSON file description is compact and easy to parse. In some embodiments, the augmented reality data points to external binary data. In some embodiments, the augmented reality data points to external binary data in order to reference AR material resources such as 3D models, images, video, and audio, and a separate request needs to be initiated to obtain this binary data when it is referenced. In some embodiments, the method further comprises: the computer device embeds the encoded binary data in the augmented reality data in an inline manner. In some embodiments, the encoded AR material resources such as 3D models, images, video, and audio may be embedded directly in the augmented reality data in an inline manner (as a uniform resource identifier (URI) or internationalized resource identifier (IRI)), which requires additional space due to the encoding and additional processing for decoding. In some embodiments, to avoid this file size and processing overhead, a container format is introduced that allows the augmented reality data to be stored in a single binary file while external resources can still be referenced; when the augmented reality data is used, the resources in the binary file are loaded directly into the corresponding rendering container without additional parsing or processing. The combination of JSON text and binary data effectively ensures the richness and integrity of AR scenes while preserving the independence of object resources.
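The inline embedding mentioned above can be pictured with the following minimal Python sketch, which base64-encodes a resource file into a data URI whose media type matches the encoded content; the function name and the assumption that the resource address lives under an "attributes" key are illustrative, and the binary container format itself is not reproduced here.

```python
import base64
import mimetypes

def embed_resource_inline(object_data: dict, resource_path: str) -> dict:
    """Replace an object's external resource address with an inline data URI
    (illustrative sketch of the inline embedding described above)."""
    media_type = mimetypes.guess_type(resource_path)[0] or "application/octet-stream"
    with open(resource_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    # The media type of the data URI must match the encoded content.
    object_data["attributes"]["uri"] = f"data:{media_type};base64,{encoded}"
    return object_data
```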
Fig. 2 shows a flow chart of a method for presenting augmented reality data according to an embodiment of the application, the method comprising step S21, step S22 and step S23. In step S21, the second user device obtains, in a multi-person video scene of the first user and the second user, first real-time pose information, provided by the first user device, of the first user device relative to the target anchor point in each current video frame of the first video stream corresponding to the first user; in step S22, the second user equipment, in response to acquiring first augmented reality data provided by the first user equipment, determines presentation position information of at least one piece of augmented reality presentation information on the current video frame of the first video stream according to the first augmented reality data and the first real-time pose information, wherein the first augmented reality data comprises scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to at least one piece of augmented reality presentation information located in the at least one scene; in step S23, the second user device superimposes and presents the at least one piece of augmented reality presentation information on the current video frame of the first video stream according to the presentation position information.
In step S21, the second user device obtains, in a multi-user video scene of the first user and the second user, first real-time pose information of the first user device, provided by the first user device, relative to the target anchor point in each current video frame of the first video stream corresponding to the first user. In some embodiments, the related content is described in detail above, and will not be described herein.
In step S22, the second user device determines, in response to obtaining the first augmented reality data provided by the first user device, presentation position information of at least one augmented reality presentation information on a current video frame of the first video stream according to the first augmented reality data and the first real-time pose information, where the augmented reality data includes scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to at least one augmented reality presentation information located in the at least one scene. In some embodiments, in the process of the multi-person video scene, the first user equipment sends a first video stream corresponding to the first user to a second user equipment corresponding to the second user in real time, and the second user equipment determines, while presenting the first video stream, presentation position information of each piece of augmented reality presentation information in the first augmented reality data on each current video frame of the first video stream according to first augmented reality data provided by the first user equipment and first real-time pose information of the first user equipment relative to the target anchor point in each current video frame of the first video stream, for example, according to first position information of each piece of augmented reality presentation information in the first augmented reality data relative to the target anchor point and the first real-time pose information.
In step S23, the second user device superimposes and presents the at least one augmented reality presentation information on the current video frame of the first video stream according to the presentation position information. In some embodiments, for each of the augmented reality presentation information in the first augmented reality data, the augmented reality presentation information is presented superimposed on the current video frame according to presentation position information of the augmented reality presentation information on each of the current video frames of the first video stream.
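To make the per-frame flow of steps S21 to S23 concrete, here is a schematic sketch in which the video, pose-signalling, projection, and rendering paths are all passed in as callables (for example, the projection callable could be the overlay_position() sketch shown earlier); none of these names are part of the claimed method.

```python
def present_first_video_stream(first_ar_data, receive_frame, receive_pose, project, draw, display):
    """Schematic per-frame loop on the second user device: obtain the current
    video frame and the first real-time pose information (S21), determine the
    presentation position of each piece of presentation information (S22), and
    superimpose it on the frame (S23). All callables are placeholders."""
    while True:
        frame = receive_frame()          # current video frame of the first video stream
        pose_to_anchor = receive_pose()  # first real-time pose information for this frame
        for obj in first_ar_data["objects"]:
            position = project(obj["first_position"], pose_to_anchor)  # presentation position information
            draw(frame, obj, position)   # superimpose the augmented reality presentation information
        display(frame)
```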
In some embodiments, the method further comprises: a second user device receives first incremental data sent by the first user device, wherein the first incremental data is generated by the first user device in response to an updating operation executed by the first user for the first augmented reality data; and superposing and presenting at least one piece of augmented reality presentation information on the first video stream according to the first increment data and the first real-time pose information. In some embodiments, a first user may perform an update operation with respect to first augmented reality data, in response to which the first user device may generate first delta data with respect to the first augmented reality data and provide the first delta data to each of the second user devices to which the second user corresponds. In some embodiments, the second user device calculates presentation position information of the updated augmented reality presentation information on each current video frame of the first video stream according to the first incremental data and the first real-time pose information, and superimposes and presents the updated augmented reality presentation information on the current video frame according to the presentation position information. The updated augmented reality presentation information refers to augmented reality presentation information included after the first augmented reality data performs an update operation of the first user.
In some embodiments, the method further comprises: a second user device generates second incremental data in response to an update operation performed by the second user on the first augmented reality data; and sending the second incremental data to the first user equipment, so that the first user equipment superimposes and presents at least one piece of augmented reality presentation information on the first video stream according to the second incremental data and the first real-time pose information. The first augmented reality data of the update operation performed by the second user may be first augmented reality data sent by the first user device received by the second user, or may be first augmented reality data determined after the update operation is performed by the first user, the third user, or other second users, which is not limited herein. In some embodiments, a certain second user may also perform an update operation with respect to the first augmented reality data, and in response to the update operation, the second user device may generate second incremental data with respect to the first augmented reality data, and send the second incremental data to the first user device and other second user devices corresponding to other second users. In some embodiments, the first user device or the other second user devices calculate presentation position information of the updated augmented reality presentation information on each current video frame of the first video stream according to the acquired second incremental data and the first real-time pose information, and superimpose and present the updated augmented reality presentation information on the current video frame according to the presentation position information. The updated augmented reality presentation information refers to augmented reality presentation information included after the first augmented reality data performs an update operation of the second user. In some embodiments, the method further comprises: and the second user equipment sends the second incremental data to third user equipment, wherein the third user equipment identifies a target anchor point corresponding to the second incremental data in a camera live-action picture of the third user equipment according to the second incremental data when the third user is not in the multi-user video scene, and superimposes and presents at least one piece of augmented reality presentation information on the camera live-action picture. In some embodiments, the third user is not in the multi-person video scene, i.e., the third user is not engaged in a video call or video conference in which the first user and the second user are located. 
In some embodiments, the second user device further sends second incremental data to the third user device, the third user device identifies an updated target anchor point in the camera live-action frame according to the second incremental data, if so, determines presentation position information of the updated at least one augmented reality presentation information in the camera live-action frame based on first position information in object data information corresponding to the updated at least one augmented reality presentation information contained in the updated scene associated with the updated target anchor point and real-time pose information of the third user device relative to the updated target anchor point, and superimposes and presents the updated at least one augmented reality presentation information on the camera live-action frame according to the presentation position information. The updated target anchor point, the updated scene and the updated at least one piece of augmented reality presentation information refer to the target anchor point, the scene and the at least one piece of augmented reality presentation information which are included after the first augmented reality data performs the updating operation of the second user.
In some embodiments, the method further comprises: the second user equipment receives third incremental data sent by the third user equipment, wherein the third incremental data is generated by the third user equipment in response to an update operation of the third user on the first augmented reality data, wherein the third user is not in the multi-person video scene, and the third user equipment identifies the target anchor point in the camera live-action picture of the third user equipment according to the first augmented reality data provided by the first user equipment and superimposes and presents the at least one piece of augmented reality presentation information on the camera live-action picture; and superimposes and presents at least one piece of augmented reality presentation information on the first video stream according to the third incremental data and the first real-time pose information. In some embodiments, the third user may also perform an update operation on the first augmented reality data, in response to which the third user device may generate third incremental data relative to the first augmented reality data and provide it to the first user device and the second user devices corresponding to the second users. In some embodiments, the first user device or the second user device calculates presentation position information of the updated augmented reality presentation information on each current video frame of the first video stream according to the acquired third incremental data and the first real-time pose information, and superimposes and presents the updated augmented reality presentation information on the current video frame according to the presentation position information. The updated augmented reality presentation information refers to the augmented reality presentation information included after the update operation of the third user has been applied to the first augmented reality data.
Fig. 3 shows a flow chart of a method for presenting augmented reality data, according to an embodiment of the application, the method comprising step S31 and step S32. In step S31, a third user device obtains first augmented reality data provided by a first user device in a multi-person video scene of the first user and a second user, where the third user is not in the multi-person video scene, and the first augmented reality data includes anchor point data information corresponding to a target anchor point, scene data information corresponding to at least one scene associated with the target anchor point, and object data information corresponding to at least one augmented reality presentation information located in the at least one scene; in step S32, the third user device identifies the target anchor point in the camera live-action picture according to the anchor point data information, and superimposes and presents the at least one augmented reality presentation information on the camera live-action picture based on the real-time pose information of the third user device relative to the target anchor point and the first augmented reality data.
In step S31, the third user device obtains, in a multi-person video scene of the first user and the second user, first augmented reality data provided by the first user device, where the third user is not in the multi-person video scene, and the first augmented reality data includes anchor point data information corresponding to a target anchor point, scene data information corresponding to at least one scene associated with the target anchor point, and object data information corresponding to at least one piece of augmented reality presentation information located in the at least one scene. In some embodiments, the first augmented reality data is described in detail above and will not be repeated here. In some embodiments, the third user is not in the multi-person video scene, i.e., the third user is not engaged in the video call or video conference in which the first user and the second user are located. In some embodiments, the first user device may send the first augmented reality data directly to the third user device corresponding to the third user, send it to the third user device via the network device, or first synchronize the first augmented reality data to the network device for storage so that the third user device then obtains it from the network device.
In step S32, the third user device identifies the target anchor point in the camera live-action picture according to the anchor point data information, and superimposes and presents the at least one piece of augmented reality presentation information on the camera live-action picture based on the real-time pose information of the third user device relative to the target anchor point and the first augmented reality data. In some embodiments, the camera on the third user device is opened and the third user device captures a live-action picture through the camera; then, based on the obtained first augmented reality data provided by the first user device, the target anchor point is identified in the real environment (for example, in the camera live-action picture) according to the anchor point data information in the first augmented reality data (such as the anchor point resources and/or other information in the anchor point data information). If the target anchor point is identified, the position of the at least one piece of augmented reality presentation information in the camera live-action picture is determined based on the first position information in the object data information corresponding to the at least one piece of augmented reality presentation information contained in the scene associated with the target anchor point and the real-time pose information of the third user device relative to the target anchor point, and the at least one piece of augmented reality presentation information is presented on the camera live-action picture according to that position; for example, the display device of the third user device displays the live-action picture captured by the camera and superimposes, on the displayed live-action picture, the at least one piece of augmented reality presentation information contained in the scene associated with the target anchor point, so that the third user, although not in the multi-person video scene, can still view the at least one piece of augmented reality presentation information on the displayed live-action picture.
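The step S32 flow can be sketched schematically as below; the anchor-recognition, pose-estimation, projection, and rendering routines are passed in as placeholder callables (the projection callable could again be the earlier overlay_position() sketch), since the actual recognition algorithm depends on the anchor type.

```python
def present_camera_live_action(first_ar_data, capture, identify_anchor, estimate_pose, project, draw, display):
    """Schematic loop on the third user device: recognize the target anchor point
    in the camera live-action picture, then superimpose the augmented reality
    presentation information at positions derived from the first position
    information and the device's pose relative to the anchor. All callables are
    placeholders for this sketch."""
    while True:
        picture = capture()                                        # camera live-action picture
        anchor = identify_anchor(picture, first_ar_data["anchor"]) # uses the anchor point data information
        if anchor is None:
            display(picture)                                       # anchor not recognized: show the plain picture
            continue
        pose_to_anchor = estimate_pose(picture, anchor)            # third user device pose relative to the anchor
        for obj in first_ar_data["objects"]:
            position = project(obj["first_position"], pose_to_anchor)
            draw(picture, obj, position)                           # superimpose the presentation information
        display(picture)
```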
In some embodiments, the method further comprises: the third user device receives first incremental data sent by the first user device, wherein the first incremental data is generated by the first user device in response to an update operation executed by the first user on the first augmented reality data; and identifies the target anchor point corresponding to the first incremental data in the camera live-action picture according to the first incremental data, and superimposes and presents at least one piece of augmented reality presentation information on the camera live-action picture. In some embodiments, the first user may perform an update operation on the first augmented reality data, in response to which the first user device may generate first incremental data relative to the first augmented reality data and provide it to the third user device, e.g., send the first incremental data directly to the third user device, send it to the third user device via the network device, or synchronize it to the network device for storage so that the third user device then obtains it from the network device. In some embodiments, the third user device identifies the updated target anchor point in the camera live-action picture according to the first incremental data; if it is identified, the third user device determines presentation position information of the updated at least one piece of augmented reality presentation information in the camera live-action picture based on the first position information in the object data information corresponding to the updated at least one piece of augmented reality presentation information contained in the updated scene associated with the updated target anchor point and the real-time pose information of the third user device relative to the updated target anchor point, and superimposes and presents the updated at least one piece of augmented reality presentation information on the camera live-action picture according to the presentation position information. The updated target anchor point, the updated scene, and the updated at least one piece of augmented reality presentation information refer to the target anchor point, the scene, and the at least one piece of augmented reality presentation information included after the update operation of the first user has been applied to the first augmented reality data.
In some embodiments, the method further comprises: the third user device receives second incremental data sent by the second user device, wherein the second incremental data is generated by the second user device in response to an update operation performed by the second user on the first augmented reality data; and identifies the target anchor point corresponding to the second incremental data in the camera live-action picture according to the second incremental data, and superimposes and presents at least one piece of augmented reality presentation information on the camera live-action picture. In some embodiments, a certain second user may also perform an update operation on the first augmented reality data; in response to the update operation, the second user device may generate second incremental data corresponding to the first augmented reality data and send it to the third user device, and the third user device identifies the updated target anchor point in the camera live-action picture according to the second incremental data. If it is identified, the third user device determines presentation position information of the updated at least one piece of augmented reality presentation information in the camera live-action picture based on the first position information in the object data information corresponding to the updated at least one piece of augmented reality presentation information contained in the updated scene associated with the updated target anchor point and the real-time pose information of the third user device relative to the updated target anchor point, and superimposes and presents the updated at least one piece of augmented reality presentation information on the camera live-action picture according to the presentation position information. The updated target anchor point, the updated scene, and the updated at least one piece of augmented reality presentation information refer to the target anchor point, the scene, and the at least one piece of augmented reality presentation information included after the update operation of the second user has been applied to the first augmented reality data. In some embodiments, the method further comprises: the third user device generates third incremental data in response to an update operation performed by the third user on the first augmented reality data; and sends the third incremental data to the second user device, so that the second user device superimposes and presents at least one piece of augmented reality presentation information on the first video stream according to the third incremental data and the first real-time pose information. The first augmented reality data on which the third user performs the update operation may be the first augmented reality data sent by the first user device and received by the third user, or may be first augmented reality data determined after an update operation performed by the first user, a second user, or another third user, which is not limited herein.
In some embodiments, the third user may also perform an update operation on the first augmented reality data; in response to the update operation, the third user device may generate third incremental data corresponding to the first augmented reality data and send it to the second user devices corresponding to the second users, and each second user device calculates, according to the acquired third incremental data and the first real-time pose information, presentation position information of the updated augmented reality presentation information on each current video frame of the first video stream, and superimposes and presents the updated augmented reality presentation information on the current video frame according to the presentation position information. The updated augmented reality presentation information refers to the augmented reality presentation information included after the update operation of the third user has been applied to the first augmented reality data.
In some embodiments, the method further comprises: and the third incremental data is sent to the first user equipment by the third user equipment, so that the first user equipment superimposes and presents at least one piece of augmented reality presentation information on the first video stream according to the third incremental data and the first real-time pose information. In some embodiments, the third user device further sends third incremental data to the first user device, and the first user device calculates presentation position information of the updated augmented reality presentation information on each current video frame of the first video stream according to the acquired third incremental data and the first real-time pose information, and superimposes and presents the updated augmented reality presentation information on the current video frame according to the presentation position information. The updated augmented reality presentation information refers to augmented reality presentation information included after the first augmented reality data performs an update operation of the third user.
Fig. 4 shows a flow chart of a method for presenting augmented reality data according to an embodiment of the application, the method comprising step S41, step S42 and step S43. In step S41, in a multi-person video scene of a first user and a second user, the first user equipment performs a positioning operation on a first video stream corresponding to the first user based on a target anchor point, determines first real-time pose information of the first user equipment relative to the target anchor point in each current video frame of the first video stream, and provides the first real-time pose information to the second user equipment, so that the second user can add at least one piece of augmented reality presentation information on the first video stream and the second user equipment can superimpose and present the at least one piece of augmented reality presentation information on the first video stream according to the first real-time pose information; in step S42, the first user equipment acquires first augmented reality data provided by the second user equipment, where the first augmented reality data includes object data information corresponding to the at least one piece of augmented reality presentation information; in step S43, the first user equipment superimposes and presents the at least one piece of augmented reality presentation information on the first video stream according to the first augmented reality data and the first real-time pose information.
In step S41, in a multi-person video scene of a first user and a second user, the first user device performs a positioning operation on a first video stream corresponding to the first user based on a target anchor point, determines first real-time pose information of the first user device relative to the target anchor point in each current video frame of the first video stream, and provides the first real-time pose information to the second user device, so that the second user can add at least one piece of augmented reality presentation information on the first video stream and the second user device can superimpose and present the at least one piece of augmented reality presentation information on the first video stream according to the first real-time pose information. In some embodiments, the multi-person video scene includes, but is not limited to, any of a multi-person video conference and a multi-person video call. In some embodiments, there may be one or more second users, i.e., the multi-person video scene is a video conference or video call between the first user and the one or more second users. In some embodiments, the first user device is a device with a camera and a display screen; for example, the first user device may be a mobile phone or a tablet computer, or the first user device may be a wearable AR device such as AR glasses or an AR helmet. In some embodiments, the first user device obtains, through the camera, a first video stream corresponding to the live-action environment where the first user is currently located, and sends the first video stream to each second user in real time. In some embodiments, there may be a single target anchor point or more than one, where an anchor point is used to express the association between physical space and digital space. In some embodiments, the target anchor point is determined before the first video stream is captured; for example, the first user device captures the current environment through the camera, performs feature recognition on an object in the captured picture, creates an anchor point, and determines that anchor point as the target anchor point, or, if one or more created anchor points already exist in the captured picture, the first user selects the target anchor point among the one or more anchor points; the first video stream corresponding to the current live-action environment is then obtained and sent to each second user in real time. In other embodiments, the target anchor point is determined after capturing the first video stream, e.g., one or more created anchor points exist in the first video stream and the first user selects the target anchor point among them, or the target anchor point is created by feature recognition of an object in the first video stream. In some embodiments, by identifying the target anchor point, a positioning operation may be performed on each current video frame of the first video stream, first real-time pose information of the camera of the first user device relative to the target anchor point in each current video frame may be determined, and the first real-time pose information may be transmitted to each second user; for example, the positioning operation may include, but is not limited to, one or more of simultaneous localization and mapping (SLAM), 2D recognition, WiFi/Bluetooth recognition, GPS positioning, large-space positioning, and the like.
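Once the positioning operation has localized both the camera and the target anchor point in a common SLAM/world frame, the first real-time pose information can be obtained by a simple pose composition, as in the following sketch; it assumes poses are represented as 4x4 homogeneous matrices, which is one common convention rather than something mandated by this application, and the values shown are placeholders.

```python
import numpy as np

def pose_relative_to_anchor(T_world_cam: np.ndarray, T_world_anchor: np.ndarray) -> np.ndarray:
    """Return the camera pose relative to the target anchor point (the first
    real-time pose information for one video frame), given the camera pose and
    the anchor pose expressed in the same world/SLAM coordinate frame."""
    return np.linalg.inv(T_world_anchor) @ T_world_cam

# Illustrative use: recompute and send this for each current video frame of the first video stream.
T_world_anchor = np.eye(4)
T_world_cam = np.eye(4); T_world_cam[0, 3] = 1.0   # camera 1 m to the side of the anchor (placeholder)
first_real_time_pose = pose_relative_to_anchor(T_world_cam, T_world_anchor)
```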
In some embodiments, during the multi-person video scene, the first user device sends the first video stream corresponding to the first user to the second user device corresponding to each second user in real time, and also provides the first real-time pose information to the second user device in real time. When the second user device presents the first video stream corresponding to the first user, the second user may add at least one piece of augmented reality presentation information to the first video stream, and the second user device, in response to the adding operation of the second user, generates corresponding first augmented reality data, where the first augmented reality data includes object data information corresponding to the at least one piece of augmented reality presentation information; alternatively, the first augmented reality data includes scene data information corresponding to at least one scene and object data information corresponding to the at least one piece of augmented reality presentation information added by the second user in the at least one scene. The at least one scene may be added by the second user on the first video stream, with the second user device generating the corresponding scene data information; alternatively, the at least one scene may have been created for the target anchor point in advance (for example, by the first user), and the corresponding scene data information provided to the second user device. In some embodiments, the target anchor point corresponds to anchor point data information, an association relationship between the anchor point data information and the scene data information is established in response to an association operation performed by the first user or the second user, and at least one piece of link data information for describing the association relationship is generated. The augmented reality presentation information, the object data information, the target anchor point, the scene data information, and the link data information are described in detail above and are not repeated here.
In some embodiments, the object data information corresponding to each piece of augmented reality presentation information includes first position information of the augmented reality presentation information relative to the target anchor point. The first position information may include only a three-dimensional position in space; preferably, it may further include other information besides the three-dimensional position, including but not limited to one or more of posture information (rotation angle), scale information, and the like. For example, the posture information of the augmented reality presentation information may be determined through the positioning operation, e.g., the angle of the augmented reality presentation information is determined according to the gravitational acceleration, so as to ensure that the augmented reality presentation information is placed upright in the physical space. For example, the transform (three-dimensional position information) field in the object data information is used to store the spatial three-dimensional position in the first position information, and the rotation (rotation angle) field is used to store the posture information in the first position information. In some embodiments, the first user device may send anchor point data information corresponding to the target anchor point to the second user device, and the second user device performs a positioning operation on the first video stream according to the anchor point data information based on the target anchor point and determines the first position information of the at least one piece of augmented reality presentation information added by the second user relative to the target anchor point. Alternatively, the second user device may send the first augmented reality data and the adding position information of the at least one piece of augmented reality presentation information in the first video stream to the first user device; the first user device performs a positioning operation on the first video stream based on the target anchor point, determines the first position information of the at least one piece of augmented reality presentation information added by the second user relative to the target anchor point, and updates the first augmented reality data, so that the object data information corresponding to each piece of augmented reality presentation information includes the first position information of that augmented reality presentation information relative to the target anchor point; the first user device then synchronizes the updated first augmented reality data to the second user device. The anchor point data information and the positioning operation are described in detail above and are not repeated here.
In some embodiments, when the second user equipment presents the first video stream, the presentation position information of each piece of augmented reality presentation information on the current video frame can be calculated according to the first position information of that augmented reality presentation information in the first augmented reality data relative to the target anchor point and the first real-time pose information of the first user equipment relative to the target anchor point in each current video frame of the first video stream, and the augmented reality presentation information is then superimposed and presented on the current video frame according to the presentation position information.
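By way of example and not limitation, this calculation can be sketched as follows, assuming the first position information is a 3D point expressed in the anchor coordinate system, the first real-time pose information is a 4x4 matrix giving the camera pose relative to the target anchor point, and K is a hypothetical 3x3 camera intrinsic matrix (intrinsics are an assumption of this sketch, not part of the description above).

import numpy as np

def presentation_position(p_anchor: np.ndarray,       # (3,) first position info relative to the anchor
                          T_anchor_camera: np.ndarray, # 4x4 first real-time pose information
                          K: np.ndarray) -> np.ndarray:  # 3x3 camera intrinsics (assumed)
    # Express the anchor-relative point in the camera coordinate system.
    T_camera_anchor = np.linalg.inv(T_anchor_camera)
    p_cam = T_camera_anchor @ np.append(p_anchor, 1.0)
    # Project onto the image plane to obtain the presentation position (pixels) on the frame.
    uv = K @ p_cam[:3]
    return uv[:2] / uv[2]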
In step S42, the first user equipment acquires first augmented reality data provided by the second user equipment, where the first augmented reality data includes object data information corresponding to the at least one augmented reality presentation information. In some embodiments, the object data information corresponding to at least one piece of augmented reality presentation information included in the first augmented reality data provided by the second user equipment includes first position information of the at least one piece of augmented reality presentation information relative to the target anchor point, and in other embodiments, the first user equipment acquires the first augmented reality data provided by the second user equipment and the adding position information of the at least one piece of augmented reality presentation information in the first video stream, where the first augmented reality data includes the object data information corresponding to the at least one piece of augmented reality presentation information; and according to the added position information, positioning the first video stream based on the target anchor point, determining first position information of the at least one piece of augmented reality presentation information relative to the target anchor point, and updating the first augmented reality data so that the first position information is included in the object data information.
In step S43, the first user equipment superimposes and presents the at least one augmented reality presentation information on the first video stream according to the first augmented reality data and the first real-time pose information. In some embodiments, when the first user equipment presents the first video stream, according to the first position information of each piece of augmented reality presentation information in the first augmented reality data relative to the target anchor point and the first real-time pose information of the first user equipment relative to the target anchor point in each current video frame of the first video stream, the presentation position information of the augmented reality presentation information on the current video frame can be calculated, and according to the presentation position information, the augmented reality presentation information is presented on the current video frame in a superposition mode.
In some embodiments, the step S42 includes: the first user equipment acquires first augmented reality data provided by the second user equipment and adding position information of the at least one piece of augmented reality presentation information in a first video stream; according to the added position information, positioning the first video stream based on the target anchor point, determining first position information of the at least one augmented reality presentation information relative to the target anchor point, and updating the first augmented reality data so that the first position information is included in the object data information; wherein the method further comprises: and providing the updated first augmented reality data to the second user equipment, so that the second user equipment superimposes and presents the at least one piece of augmented reality presentation information on the first video stream according to the first real-time pose information and the updated first augmented reality data. In some embodiments, the second user device responds to an adding operation performed by the second user on the first video stream, obtains adding position information of at least one piece of augmented reality presentation information added by the second user in the first video stream, and generates corresponding first augmented reality data, the first augmented reality data includes object data information corresponding to the at least one piece of augmented reality presentation information, then the second user device provides the first augmented reality data and the adding position information to the first user device, the first user device performs a positioning operation on the first video stream according to the adding position information based on the target anchor point, and determines first position information of the at least one piece of augmented reality presentation information relative to the target anchor point, wherein the specific positioning operation is described in detail above, and the first user device updates the first augmented reality data and adds the first position information of the augmented reality presentation information relative to the target anchor point in object data information corresponding to each piece of augmented reality presentation information in the first augmented reality data. In some embodiments, the first user device may provide the updated first augmented reality data to the second user device, so that when the second user device presents the first video stream, according to the first real-time pose information and the updated first augmented reality data, calculate presentation position information of each piece of augmented reality presentation information in the updated first augmented reality data on a current video frame of the first video stream, and according to the presentation position information, superimpose and present the augmented reality presentation information on the current video frame.
In some embodiments, the method further comprises: the first user equipment provides the updated first augmented reality data to a third user equipment, wherein the third user is not in the multi-person video scene, the target anchor point is identified in a camera live-action picture of the third user equipment according to the updated first augmented reality data, and the at least one piece of augmented reality presentation information is superimposed and presented on the camera live-action picture. In some embodiments, the third user is not in the multi-person video scene, i.e., the third user does not participate in the video call or video conference in which the first user and the second user are located. In some embodiments, there may be one or more third users. In some embodiments, the first user device further sends the updated first augmented reality data to the third user device corresponding to the third user. In some embodiments, a camera on the third user device is opened, and the third user device photographs a live-action picture through the opened camera; then, based on the updated first augmented reality data provided by the first user device, the target anchor point is identified in the real environment (for example, in the camera live-action picture) according to the anchor point data information in the first augmented reality data (such as the anchor point resource and/or other information in the anchor point data information). If the target anchor point is identified, the position of the at least one piece of augmented reality presentation information in the camera live-action picture is determined based on the first position information in the object data information corresponding to the at least one piece of augmented reality presentation information added by the second user in the first augmented reality data and the real-time pose information of the third user device relative to the target anchor point, and the at least one piece of augmented reality presentation information is superimposed and presented at that position. For example, the live-action picture photographed by the camera is displayed by a display device of the third user device; if the target anchor point is identified, the at least one piece of augmented reality presentation information is also displayed by the display device at the determined position, so that it appears superimposed on the live-action picture, and if the target anchor point is not identified, only the live-action picture is displayed.
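By way of example and not limitation, the third user device flow described above can be sketched as follows; the helper objects and field names (tracker, display, "anchors", "objects", "transform") are hypothetical and stand in for whatever recognition and rendering facilities the device actually provides.

def render_for_third_user(frame, first_ar_data, tracker, display):
    # Try to recognize the target anchor point in the camera live-action picture
    # from the anchor point data information carried in the first augmented reality data.
    anchor = tracker.recognize(frame, first_ar_data["anchors"][0])   # assumed API
    if anchor is None:
        display.show(frame)          # anchor not identified: plain live-action picture only
        return
    pose = tracker.pose_relative_to(anchor)                          # assumed API
    for obj in first_ar_data["objects"]:
        # Map each anchor-relative first position to a pixel position on the picture.
        uv = tracker.project(obj["transform"], pose)                 # assumed API
        display.overlay(frame, obj, at=uv)
    display.show(frame)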
In some embodiments, the method further comprises: the first user equipment provides anchor point data information corresponding to the target anchor point to the second user equipment, so that the second user equipment performs a positioning operation on the first video stream according to the anchor point data information and determines first position information of the at least one piece of augmented reality presentation information relative to the target anchor point; and the first user equipment receives first augmented reality data provided by the second user equipment, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information, and the object data information comprises the first position information. In some embodiments, the first user device may send the anchor point data information corresponding to the target anchor point to the second user device; the second user device, in response to an adding operation performed by the second user on the first video stream, performs a positioning operation on the first video stream according to the anchor point data information, determines the first position information of the at least one piece of augmented reality presentation information added by the second user relative to the target anchor point, and generates corresponding first augmented reality data, where the first augmented reality data includes the object data information corresponding to the at least one piece of augmented reality presentation information and the object data information corresponding to each piece of augmented reality presentation information includes the first position information of that augmented reality presentation information relative to the target anchor point. The second user device then provides the first augmented reality data to the first user device. When the first user device presents the first video stream, the presentation position information of each piece of augmented reality presentation information in the first augmented reality data on the current video frame of the first video stream is calculated according to the first real-time pose information and the first augmented reality data, and the augmented reality presentation information is superimposed and presented on the current video frame according to the presentation position information.
In some embodiments, the first user, the second user, and/or the third user may perform an update operation on the first augmented reality data and synchronize the updated first augmented reality data or the incremental data of the first augmented reality data to other devices, where the update operation and the incremental data are described in detail above and are not described herein.
Fig. 5 shows a flowchart of a method for presenting augmented reality data according to one embodiment of the application, the method comprising step S51, step S52, step S53 and step S54. In step S51, in a multi-person video scene of a first user and a second user, the second user device obtains first real-time pose information, provided by the first user device, of the first user device relative to the target anchor point in each current video frame of the first video stream corresponding to the first user; in step S52, the second user device obtains first augmented reality data corresponding to at least one piece of augmented reality presentation information added by the second user on the first video stream, where the first augmented reality data includes object data information corresponding to the at least one piece of augmented reality presentation information, and the object data information includes first position information of the at least one piece of augmented reality presentation information relative to the target anchor point; in step S53, the second user device determines, according to the first augmented reality data and the first real-time pose information, presentation position information of the at least one piece of augmented reality presentation information on the current video frame of the first video stream; in step S54, the second user device superimposes and presents the at least one piece of augmented reality presentation information on the current video frame of the first video stream according to the presentation position information.
In step S51, in a multi-person video scene of the first user and the second user, the second user device obtains first real-time pose information, provided by the first user device, of the first user device relative to the target anchor point in each current video frame of the first video stream corresponding to the first user. In some embodiments, the multi-person video scene includes, but is not limited to, any of a multi-person video conference and a multi-person video call. In some embodiments, there may be one or more second users, i.e., the multi-person video scene is a video conference or video call between the first user and the one or more second users. In some embodiments, the first user device is a device with a camera and a display screen; for example, the first user device may be a mobile phone or a tablet computer, or the first user device may be a wearable AR device such as AR glasses or an AR helmet. In some embodiments, the first user device captures, through the camera, a first video stream corresponding to the live-action environment where the first user is currently located, and sends the first video stream to each second user in real time. In some embodiments, there may be a single target anchor point or a plurality of target anchor points, where an anchor point is used to express an association between physical space and digital space. In some embodiments, the target anchor point is determined before the first video stream is captured; for example, the first user device photographs the current environment through the camera, performs feature recognition on an object in the photographed picture, creates an anchor point, and determines the anchor point as the target anchor point; for another example, if one or more created anchor points already exist in the photographed picture, the first user selects the target anchor point among the one or more anchor points. The first video stream corresponding to the current live-action environment is then obtained and sent to each second user in real time. In other embodiments, the target anchor point is determined after the first video stream starts to be captured; for example, one or more created anchor points already exist in the first video stream and the first user selects the target anchor point among them, or the target anchor point is created by feature recognition of an object in the first video stream. In some embodiments, by identifying the target anchor point, a positioning operation may be performed on each current video frame of the first video stream, first real-time pose information of the camera of the first user device relative to the target anchor point in each current video frame may be determined, and the first real-time pose information may be transmitted to each second user; the positioning operation may include, but is not limited to, one or more of simultaneous localization and mapping (SLAM), 2D recognition, Wi-Fi/Bluetooth recognition, GPS positioning, large-space positioning, and the like.
In some embodiments, during the multi-person video scene, the first user device sends the first video stream corresponding to the first user to the second user device corresponding to each second user in real time, and also provides the first real-time pose information to the second user device in real time. When the second user device presents the first video stream corresponding to the first user, the second user may add at least one piece of augmented reality presentation information to the first video stream, and the second user device, in response to the adding operation of the second user, generates corresponding first augmented reality data, where the first augmented reality data includes object data information corresponding to the at least one piece of augmented reality presentation information; alternatively, the first augmented reality data includes scene data information corresponding to at least one scene and object data information corresponding to the at least one piece of augmented reality presentation information added by the second user in the at least one scene. The at least one scene may be added by the second user on the first video stream, with the second user device generating the corresponding scene data information; alternatively, the at least one scene may have been created for the target anchor point in advance (for example, by the first user), and the corresponding scene data information provided to the second user device. In some embodiments, the target anchor point corresponds to anchor point data information, an association relationship between the anchor point data information and the scene data information is established in response to an association operation performed by the first user or the second user, and at least one piece of link data information for describing the association relationship is generated. The augmented reality presentation information, the object data information, the target anchor point, the scene data information, and the link data information are described in detail above and are not repeated here.
In step S52, the second user device obtains first augmented reality data corresponding to at least one piece of augmented reality presentation information added by the second user on the first video stream, where the first augmented reality data includes object data information corresponding to the at least one piece of augmented reality presentation information, and the object data information includes first position information of the at least one piece of augmented reality presentation information relative to the target anchor point. In some embodiments, the object data information corresponding to each piece of augmented reality presentation information includes first position information of the augmented reality presentation information relative to the target anchor point. The first position information may include only a three-dimensional position in space; preferably, it may further include other information besides the three-dimensional position, including but not limited to one or more of posture information (rotation angle), scale information, and the like. For example, the posture information of the augmented reality presentation information may be determined through the positioning operation, e.g., the angle of the augmented reality presentation information is determined according to the gravitational acceleration, so as to ensure that the augmented reality presentation information is placed upright in the physical space. For example, the transform (three-dimensional position information) field in the object data information is used to store the spatial three-dimensional position in the first position information, and the rotation (rotation angle) field is used to store the posture information in the first position information. In some embodiments, the first user device may send anchor point data information corresponding to the target anchor point to the second user device, and the second user device performs a positioning operation on the first video stream according to the anchor point data information based on the target anchor point and determines the first position information of the at least one piece of augmented reality presentation information added by the second user relative to the target anchor point. Alternatively, the second user device may send the first augmented reality data and the adding position information of the at least one piece of augmented reality presentation information in the first video stream to the first user device; the first user device performs a positioning operation on the first video stream based on the target anchor point, determines the first position information of the at least one piece of augmented reality presentation information added by the second user relative to the target anchor point, and updates the first augmented reality data, so that the object data information corresponding to each piece of augmented reality presentation information includes the first position information of that augmented reality presentation information relative to the target anchor point; the first user device then synchronizes the updated first augmented reality data to the second user device. The anchor point data information and the positioning operation are described in detail above and are not repeated here.
In some embodiments, the first user equipment obtains first augmented reality data provided by the second user equipment and adding position information of the at least one piece of augmented reality presentation information in a first video stream, where the first augmented reality data includes object data information corresponding to the at least one piece of augmented reality presentation information; and according to the added position information, positioning the first video stream based on the target anchor point, determining first position information of the at least one piece of augmented reality presentation information relative to the target anchor point, and updating the first augmented reality data so that the first position information is included in the object data information. In some embodiments, the second user device receives updated first augmented reality data provided by the first user device, where the updated first augmented reality data includes object data information corresponding to the at least one augmented reality presentation information, and each of the object data information corresponding to the augmented reality presentation information includes first location information of the augmented reality presentation information relative to the target anchor point.
In step S53, the second user device determines, according to the first augmented reality data and the first real-time pose information, presentation position information of the at least one augmented reality presentation information on a current video frame of the first video stream. In some embodiments, the second user device determines, while presenting the first video stream, presentation position information of each augmented reality presentation information in the first augmented reality data on the current video frame according to the first augmented reality data provided by the first user device currently received and the first real-time pose information of the first user device relative to the target anchor point in each current video frame of the first video stream, for example, determines the presentation position information of each augmented reality presentation information in the first video stream on each current video frame of the first video stream according to the first position information of each augmented reality presentation information in the first augmented reality data relative to the target anchor point and the first real-time pose information.
In step S54, the second user device superimposes and presents the at least one augmented reality presentation information on the current video frame of the first video stream according to the presentation position information. In some embodiments, for each of the augmented reality presentation information in the first augmented reality data, the augmented reality presentation information is presented superimposed on the current video frame according to presentation position information of the augmented reality presentation information on each of the current video frames of the first video stream.
In some embodiments, the step S52 includes: the method comprises the steps that a second user device responds to an adding operation executed by a second user on a first video stream, first augmented reality data corresponding to at least one piece of augmented reality presentation information and adding position information of the at least one piece of augmented reality presentation information in the first video stream are determined, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information; providing the first augmented reality data and the added position information to the first user equipment, so that the first user equipment performs positioning operation on the first video stream based on the target anchor point according to the added position information, determines first position information of the at least one augmented reality presentation information relative to the target anchor point, and updates the first augmented reality data so that the first position information is included in the object data information; and receiving updated first augmented reality data provided by the first user equipment. In some embodiments, the second user device responds to an adding operation performed by the second user on the first video stream, obtains adding position information of at least one piece of augmented reality presentation information added by the second user in the first video stream, and generates corresponding first augmented reality data, the first augmented reality data includes object data information corresponding to the at least one piece of augmented reality presentation information, then the second user device provides the first augmented reality data and the adding position information to the first user device, the first user device performs a positioning operation on the first video stream according to the adding position information based on the target anchor point, and determines first position information of the at least one piece of augmented reality presentation information relative to the target anchor point, wherein the specific positioning operation is described in detail above, and the first user device updates the first augmented reality data and adds the first position information of the augmented reality presentation information relative to the target anchor point in object data information corresponding to each piece of augmented reality presentation information in the first augmented reality data. In some embodiments, the first user device may provide the updated first augmented reality data to the second user device, so that when the second user device presents the first video stream, according to the first real-time pose information and the updated first augmented reality data, calculate presentation position information of each piece of augmented reality presentation information in the updated first augmented reality data on a current video frame of the first video stream, and according to the presentation position information, superimpose and present the augmented reality presentation information on the current video frame.
In some embodiments, the method further comprises: the second user equipment acquires anchor point data information corresponding to the target anchor point provided by the first user equipment; wherein the step S52 includes: determining first augmented reality data corresponding to at least one piece of augmented reality presentation information in response to an adding operation performed by the second user on the first video stream, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information; performing a positioning operation on the first video stream according to the anchor point data information, and determining first position information of the at least one piece of augmented reality presentation information relative to the target anchor point, so that the object data information comprises the first position information; and providing the first augmented reality data to the first user equipment, so that the first user equipment superimposes and presents the at least one piece of augmented reality presentation information on the first video stream according to the first augmented reality data and the first real-time pose information. In some embodiments, the first user device may send the anchor point data information corresponding to the target anchor point to the second user device; the second user device, in response to an adding operation performed by the second user on the first video stream, performs a positioning operation on the first video stream according to the anchor point data information, determines the first position information of the at least one piece of augmented reality presentation information added by the second user relative to the target anchor point, and generates corresponding first augmented reality data, where the first augmented reality data includes the object data information corresponding to the at least one piece of augmented reality presentation information, and the object data information corresponding to each piece of augmented reality presentation information includes the first position information of that augmented reality presentation information relative to the target anchor point; the second user device then provides the first augmented reality data to the first user device.
In some embodiments, the method further comprises: the second user equipment provides the first augmented reality data to a third user equipment, wherein the third user is not in the multi-person video scene, the target anchor point is identified in a camera live-action picture of the third user equipment according to the first augmented reality data, and the at least one piece of augmented reality presentation information is superimposed and presented on the camera live-action picture. In some embodiments, the third user is not in the multi-person video scene, i.e., the third user does not participate in the video call or video conference in which the first user and the second user are located. In some embodiments, there may be one or more third users. In some embodiments, the second user device may send the generated first augmented reality data to the third user device corresponding to the third user. In some embodiments, a camera on the third user device is opened, and the third user device photographs a live-action picture through the opened camera; then, based on the obtained first augmented reality data provided by the second user device, the target anchor point is identified in the real environment (for example, in the camera live-action picture) according to the anchor point data information in the first augmented reality data (such as the anchor point resource and/or other information in the anchor point data information). If the target anchor point is identified, the position of the at least one piece of augmented reality presentation information in the camera live-action picture is determined based on the first position information in the object data information corresponding to the at least one piece of augmented reality presentation information added by the second user in the first augmented reality data and the real-time pose information of the third user device relative to the target anchor point, and the at least one piece of augmented reality presentation information is superimposed and presented at that position. For example, the live-action picture photographed by the camera is displayed by a display device of the third user device; if the target anchor point is identified, the at least one piece of augmented reality presentation information is also displayed by the display device at the determined position, so that it appears superimposed on the live-action picture, and if the target anchor point is not identified, only the live-action picture is displayed.
In some embodiments, the first user, the second user, and/or the third user may perform an update operation on the first augmented reality data and synchronize the updated first augmented reality data or the incremental data of the first augmented reality data to other devices, where the update operation and the incremental data are described in detail above and are not described herein.
Fig. 6 shows a structure of a first user equipment for presenting augmented reality data according to an embodiment of the application, the equipment comprising a first module 11 and a second module 12. The first module 11 is configured to perform a positioning operation on a first video stream corresponding to a first user based on a target anchor point in a multi-person video scene of the first user and a second user, determine first real-time pose information of the first user equipment relative to the target anchor point in each current video frame of the first video stream, and provide the first real-time pose information to the second user equipment. The second module 12 is configured to provide first augmented reality data to the second user device, so that the second user device superimposes and presents the at least one piece of augmented reality presentation information on the first video stream according to the first augmented reality data and the first real-time pose information, where the first augmented reality data includes scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to at least one piece of augmented reality presentation information located within the at least one scene.
The first module 11 is configured to perform a positioning operation on a first video stream corresponding to a first user based on a target anchor point in a multi-person video scene of the first user and a second user, determine first real-time pose information of the first user equipment relative to the target anchor point in each current video frame of the first video stream, and provide the first real-time pose information to the second user equipment. In some embodiments, the multi-person video scene includes, but is not limited to, any of a multi-person video conference and a multi-person video call. In some embodiments, there may be one or more second users, i.e., the multi-person video scene is a video conference or video call between the first user and the one or more second users. In some embodiments, the first user device is a device with a camera and a display screen; for example, the first user device may be a mobile phone or a tablet computer, or the first user device may be a wearable AR device such as AR glasses or an AR helmet. In some embodiments, the first user device captures, through the camera, a first video stream corresponding to the live-action environment where the first user is currently located, and sends the first video stream to each second user in real time. In some embodiments, there may be a single target anchor point or a plurality of target anchor points, where an anchor point is used to express an association between physical space and digital space. In some embodiments, the target anchor point is determined before the first video stream is captured; for example, the first user device photographs the current environment through the camera, performs feature recognition on an object in the photographed picture, creates an anchor point, and determines the anchor point as the target anchor point; for another example, if one or more created anchor points already exist in the photographed picture, the first user selects the target anchor point among the one or more anchor points. The first video stream corresponding to the current live-action environment is then obtained and sent to each second user in real time. In other embodiments, the target anchor point is determined after the first video stream starts to be captured; for example, one or more created anchor points already exist in the first video stream and the first user selects the target anchor point among them, or the target anchor point is created by feature recognition of an object in the first video stream. In some embodiments, by identifying the target anchor point, a positioning operation may be performed on each current video frame of the first video stream, first real-time pose information of the camera of the first user device relative to the target anchor point in each current video frame may be determined, and the first real-time pose information may be transmitted to each second user; the positioning operation may include, but is not limited to, one or more of simultaneous localization and mapping (SLAM), 2D recognition, Wi-Fi/Bluetooth recognition, GPS positioning, large-space positioning, and the like.
The second module 12 is configured to provide first augmented reality data to the second user device, so that the second user device superimposes and presents the at least one piece of augmented reality presentation information on the first video stream according to the first augmented reality data and the first real-time pose information, where the first augmented reality data includes scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to at least one piece of augmented reality presentation information located within the at least one scene. In some embodiments, the first user may create at least one scene associated with the target anchor point and determine corresponding scene data information, where the scene data information includes, but is not limited to, one or more of uuid (scene identification information, a globally unique number), name, hrsObjectids (a uuid array of contained objects), translation (three-dimensional position information), rotation (rotation angle), scale (scaling), follow (follow strategy), faceCamera (whether facing the camera), hidden (whether hidden), visualRange (visible distance), a sub-scene array representing the set of sub-scenes, and the like. The follow strategy determines how an entity (anchor points, scenes, objects, actions, events and the like in the augmented reality data may all be referred to as entities) is displayed at a spatial position; the follow strategy includes, but is not limited to, one or more of a follow strategy type, the uuid of the followed entity, a screen position offset, an alignment mode when following the screen, a screen level, and the like, and the follow strategy type includes at least one of the following: follow space (the position relative to the space coordinate system), follow screen (the entity is always displayed on the screen), follow camera (the position relative to the camera, which can be represented by a transformation), follow object (the position relative to a certain object, which can be represented by a transformation), and follow link/anchor point (the position relative to a certain anchor point, which can be represented by translation, rotation and scale). When faceCamera is true, the entity adjusts its angle to follow the camera so as to keep its front face always facing the camera; when hidden is true, the entity is not rendered; visualRange indicates the distance within which the entity can be seen, i.e., the entity is visible only while the distance from the camera to the entity is less than visualRange; the sub-scenes constitute a subset of the AR material objects to be rendered, and one sub-scene exists in only one scene.
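By way of example and not limitation, the display-control fields described above could gate rendering roughly as follows; the function itself is an illustrative sketch under the assumptions stated in the comments, not a prescribed implementation.

import numpy as np

def should_render(entity: dict, camera_pos: np.ndarray, entity_pos: np.ndarray) -> bool:
    if entity.get("hidden", False):
        return False                 # when hidden is true, the entity is not rendered
    visual_range = entity.get("visualRange")
    if visual_range is not None:
        # the entity is visible only while the camera is within visualRange of it
        return float(np.linalg.norm(camera_pos - entity_pos)) <= visual_range
    return True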
In some embodiments, the types of augmented reality presentation information (i.e., AR material objects) include, but are not limited to, one or more of 3D models, text, pictures, audio, video, web pages, PDF documents, applications, points, lines, polygons, ellipses, freehand brush strokes, and the like. In some embodiments, the first user may set the augmented reality presentation information located within each of the at least one scene and determine corresponding object data information, which has an inclusion relationship with the scene data information of the scene, i.e., the scene data information of the scene may include some or all of the object data information, such as the object identification information. In some embodiments, the augmented reality presentation information is the smallest unit to be rendered in the scene, and the object data information defines a single entity with a specific function and its position, angle and scaling. In some embodiments, the object data information includes, but is not limited to, one or more of uuid (object identification information, a globally unique number), name, transform (three-dimensional position information), rotation (rotation angle), scale (scaling), follow (follow strategy), faceCamera (whether facing the camera), hidden (whether hidden), type (type of the object), uri (resource address of the object), visualRange (visible distance), an attribute set of the object, and the like, where the type of the object determines the rendering effect, specific attributes and specific actions of the object.
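By way of example and not limitation, the scene data information and object data information enumerated above can be sketched as JSON-like structures; all concrete values below are placeholders and the exact field set is as listed in the text, not a normative schema.

scene_data = {
    "uuid": "scene-0001",              # scene identification information (globally unique number)
    "name": "repair-guidance",
    "hrsObjectids": ["obj-0001"],      # uuid array of contained objects
    "translation": [0.0, 0.0, 0.0],    # three-dimensional position information
    "rotation": [0.0, 0.0, 0.0, 1.0],  # rotation angle
    "scale": [1.0, 1.0, 1.0],
    "follow": {"type": "follow_anchor"},
    "faceCamera": False,
    "hidden": False,
    "visualRange": 10.0,               # visible distance (unit assumed)
}

object_data = {
    "uuid": "obj-0001",                # object identification information (globally unique number)
    "name": "arrow-hint",
    "type": "3d_model",                # determines rendering effect, attributes and actions
    "uri": "https://example.com/assets/arrow.glb",   # placeholder resource address of the object
    "transform": [0.1, 0.0, 0.2],      # first position information relative to the target anchor point
    "rotation": [0.0, 0.0, 0.0, 1.0],  # posture information (rotation angle)
    "scale": [1.0, 1.0, 1.0],
    "follow": {"type": "follow_anchor"},
    "faceCamera": True,
    "hidden": False,
    "visualRange": 5.0,
}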
In some embodiments, the object data information corresponding to each piece of augmented reality presentation information includes first position information of the augmented reality presentation information relative to the target anchor point. The first position information may include only a three-dimensional position in space; preferably, it may further include other information besides the three-dimensional position, including but not limited to one or more of posture information (rotation angle), scale information, and the like. For example, the posture information of the augmented reality presentation information may be determined through the positioning operation, e.g., the angle of the augmented reality presentation information is determined according to the gravitational acceleration, so as to ensure that the augmented reality presentation information is placed upright in the physical space. For example, the transform (three-dimensional position information) field in the object data information is used to store the spatial three-dimensional position in the first position information, and the rotation (rotation angle) field is used to store the posture information in the first position information. In some embodiments, the first position information of the augmented reality presentation information relative to the target anchor point may be set directly by the first user, or the first user may set the presentation position/adding position (a 2D point position) of the augmented reality presentation information on a photographed picture, where the photographed picture may be a picture that the first user device photographs of the current environment through the camera before the multi-person video scene (i.e., before the first video stream is captured), or a picture of the first video stream being captured; that is, the augmented reality presentation information may be set before or after the first video stream starts to be captured. The first user device can determine the first position information of the augmented reality presentation information relative to the target anchor point according to the presentation position/adding position. For example, after the SLAM algorithm is initialized, a 3D point cloud of the environment is computed in real time; when the first user adds the augmented reality presentation information at a 2D point position in the photographed picture through a third operation, a plane in the world coordinate system is fitted using the 3D point cloud of the current scene, so as to obtain a plane expression.
Meanwhile, a ray in the camera coordinate system is constructed from the camera optical center through the coordinates of the augmented reality presentation information on the image plane, and the ray is then transformed into the world coordinate system; the intersection point of the ray and the plane is calculated from the ray expression and the plane expression in the world coordinate system, and this intersection point is the 3D space point (first position information) corresponding to the 2D point in the photographed picture. As another example, the first user device is equipped with two different sensors, an RGB camera and a depth camera, which acquire a 2D image and a depth image simultaneously. When the coordinates of the 2D point of the augmented reality presentation information in the photographed picture are acquired, the depth image recorded at the same time as the 2D image is used to calculate the pixel coordinates in the depth image corresponding to the image coordinates of the 2D point, and the depth information is then obtained from those pixel coordinates. Having obtained the depth information of the augmented reality presentation information in this way, the 3D space point (first position information) in the world coordinate system can be calculated. As another example, the 2D point P2d of the augmented reality presentation information in the photographed picture is mapped to a straight line L3dC in the camera coordinate system, the 3D point cloud obtained by SLAM is mapped into the camera coordinate system to obtain 3D points in the camera coordinate system, the point P3d'C in the 3D point cloud closest to the straight line L3dC is found, a point P3dC is truncated on L3dC at the depth of P3d'C, and P3dC is transformed into the world coordinate system to obtain the 3D space point P3d (first position information) corresponding to the 2D point. Alternatively, the 3D point cloud in the world coordinate system is mapped to the pixel coordinate system to obtain a plurality of 2D points, and a region is drawn around the point P2d; within the region there are a plurality of 2D points mapped from the 3D point cloud, denoted 2Ds, which lie at different distances from the point P2d. 1) In a weighted-average approach, weights are assigned according to the distance between each 2Ds point and the point P2d: nearby points receive larger weights and distant points smaller weights. Each 2Ds point is mapped back to a 3D point in the camera coordinate system, whose z-value is its depth value, and a weighted average of these depth values is computed. According to the resulting depth value, the estimated point P3dC is truncated on the straight line in the camera coordinate system and then transformed into the world coordinate system, so as to obtain P3d (first position information). 2) Alternatively, the depth value of the 3D point in the camera coordinate system corresponding to a 2Ds point is taken directly as the truncated depth value to obtain the estimated point P3dC in the camera coordinate system, which is then transformed into the world coordinate system to obtain P3d (first position information).
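By way of example and not limitation, the ray-plane variant described above can be sketched as follows, assuming the fitted plane is given as n·x + d = 0 in world coordinates, K is the camera intrinsic matrix, and T_world_camera is the camera pose obtained from SLAM; the function name is illustrative only.

import numpy as np

def point_from_2d(p2d, plane_n, plane_d, K, T_world_camera):
    # Ray direction through the 2D point, expressed in the camera coordinate system.
    dir_cam = np.linalg.inv(K) @ np.array([p2d[0], p2d[1], 1.0])
    # Transform the ray origin (optical center) and direction into world coordinates.
    R, t = T_world_camera[:3, :3], T_world_camera[:3, 3]
    origin, direction = t, R @ dir_cam
    # Intersect origin + s * direction with the plane n.x + d = 0.
    s = -(plane_n @ origin + plane_d) / (plane_n @ direction)
    return origin + s * direction   # 3D space point (first position information)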
It will be appreciated by those skilled in the art that the above-described methods are merely exemplary, and that other methods of determining the first position information of the augmented reality presentation information relative to the target anchor point, whether existing now or appearing in the future, are also within the scope of the present application if applicable thereto and are incorporated herein by reference.
In some embodiments, the scene data information and the object data information are packaged together according to a predetermined data format (e.g., JSON) to generate augmented reality data. In some embodiments, the augmented reality data is a data format generated based on a predetermined AR description standard, and the standard is not only a file format, but also a delivery format of data content when the API is called, so that an efficient, extensible and interoperable format can be provided for content transmission and loading required by the AR, differences between different rendering engines at different ends are bridged, effective utilization of resources is facilitated, and repeated development is avoided.
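By way of example and not limitation, a JSON package bundling the anchor point data information, scene data information, object data information and link data information could look as follows; the top-level layout and field names are illustrative assumptions and are not the predetermined AR description standard itself.

import json

first_augmented_reality_data = {
    "anchors": [{"uuid": "anchor-0001", "type": "picture",
                 "uri": "https://example.com/anchors/poster.jpg"}],
    "scenes":  [{"uuid": "scene-0001", "name": "repair-guidance",
                 "hrsObjectids": ["obj-0001"]}],
    "objects": [{"uuid": "obj-0001", "type": "3d_model",
                 "transform": [0.1, 0.0, 0.2]}],
    "links":   [{"anchorUuid": "anchor-0001", "sceneUuid": "scene-0001"}],
}

payload = json.dumps(first_augmented_reality_data)   # delivery format when calling an API
restored = json.loads(payload)                       # loaded again on the receiving device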
In some embodiments, during the multi-person video scene, the first user device sends the first video stream corresponding to the first user to the second user device corresponding to each second user in real time. After the first user device generates the first augmented reality data, the first augmented reality data is sent directly to each second user device, or sent to each second user device via a network device. After the second user device receives the first augmented reality data, the presentation position information of each piece of augmented reality presentation information on the current video frame can be calculated according to the first position information of that augmented reality presentation information in the first augmented reality data relative to the target anchor point and the first real-time pose information of the first user device relative to the target anchor point in each current video frame of the first video stream, and the augmented reality presentation information is superimposed and presented on the current video frame according to the presentation position information.
In some embodiments, there may be a plurality of first users in the multi-person video scene, that is, each first user may send the first augmented reality data and the first real-time pose information corresponding to the first user to other user devices corresponding to other users (other first users or second users) in the multi-person video scene, so that the other user devices superimpose and present at least one augmented reality presentation information in the first augmented reality data on the first video stream corresponding to the first user.
In some embodiments, the apparatus is further to: determining the target anchor point according to a first operation executed by the first user; determining scene data information corresponding to at least one scene associated with the target anchor point according to a second operation performed by the first user on the target anchor point; and determining object data information corresponding to at least one piece of augmented reality presentation information positioned in the at least one scene according to a third operation executed by the first user on the at least one scene. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, at least one of the first operation, the second operation, and the third operation is performed with respect to the first video stream. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, at least one operation performed by the first user on the photographed picture exists in the first operation, the second operation, and the third operation. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, at least one target operation executed by the first user for the live-action space where the first user is currently located exists in the first operation, the second operation and the third operation; wherein the apparatus is further for: the target operation is identified in the photographed picture. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the providing the first augmented reality data to a second user device comprises: the first augmented reality data is transmitted to a network device, wherein the first augmented reality data is stored on the network device to cause the second user device to acquire the first augmented reality data from the network device. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the first augmented reality data is stored on the first user device and the second user device. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the apparatus is further to: generating first incremental data in response to an update operation performed by the first user on the first augmented reality data, and providing the first incremental data to the second user equipment, so that the second user equipment superimposes and presents at least one piece of augmented reality presentation information on the first video stream according to the first incremental data and the first real-time pose information. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the apparatus is further to: receiving second incremental data sent by the second user equipment, wherein the second incremental data is generated by the second user equipment in response to an updating operation performed by the second user on the first augmented reality data; and superposing and presenting at least one piece of augmented reality presentation information on the first video stream according to the second increment data and the first real-time pose information. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
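By way of illustration only, the following Python sketch shows one way incremental data could be merged into locally cached augmented reality data; the dictionary layout and the deletion convention (a null entry removes an object) are assumptions, not part of the application:

```python
def apply_incremental_data(ar_data, incremental_data):
    """Merge incremental updates into locally cached augmented reality data.

    Both arguments are assumed to be dicts of the form
    {"objects": {object_id: object_data_info, ...}}.  An entry whose value is
    None is treated as a deletion; other entries add or replace objects.
    """
    objects = dict(ar_data.get("objects", {}))
    for object_id, object_info in incremental_data.get("objects", {}).items():
        if object_info is None:
            objects.pop(object_id, None)      # the update removed this object
        else:
            objects[object_id] = object_info  # added or modified object
    return {**ar_data, "objects": objects}
```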
In some embodiments, the first augmented reality data further includes anchor point data information corresponding to the target anchor point; the anchor point data information includes an anchor point type, and further includes anchor point resources or anchor point resource address information; wherein the anchor point type includes any one of the following: a picture; picture feature points; a point cloud; a point cloud map; a two-dimensional code; a cylinder; a cube; a geographic location; a face; a bone; a wireless signal. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
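By way of illustration only, anchor point data information of the kind listed above might look like the following; all field names are illustrative assumptions:

```python
# Illustrative anchor point data information (field names are assumptions,
# not a schema required by the application).
anchor_data_info = {
    "anchor_id": "anchor-001",
    "anchor_type": "picture",            # e.g. picture, point cloud, two-dimensional code ...
    "anchor_resource_url": "https://example.com/anchors/poster.jpg",
    # Alternatively the anchor resource itself could be embedded instead of an address:
    # "anchor_resource": "<base64-encoded image, feature points, or point cloud>",
}
```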
In some embodiments, the apparatus is further to: and providing the first augmented reality data to third user equipment, wherein the third user is not in the multi-person video scene, identifying the target anchor point in a camera live-action picture of the third user equipment according to the first augmented reality data, and overlaying and presenting the at least one augmented reality presentation information on the camera live-action picture. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the apparatus is further to: generating first incremental data in response to an update operation of the first user on the first augmented reality data, and providing the first incremental data for the third user equipment, so that the third user equipment identifies a target anchor point corresponding to the first incremental data in the camera live-action picture according to the first incremental data, and superimposes and presents at least one piece of augmented reality presentation information on the camera live-action picture. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the apparatus is further to: receiving third incremental data sent by the third user equipment, wherein the third incremental data is generated by the third user equipment in response to an updating operation performed by the third user on the first augmented reality data; and superimposing and presenting at least one piece of augmented reality presentation information on the first video stream according to the third incremental data and the first real-time pose information. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the scene data information corresponding to each scene includes object identification information of object data information included in the scene. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the object data information includes object type information, and the object data information further includes object resources or object resource address information; wherein the object type information includes any one of the following: a 3D model; text; a picture; audio; video; a web page; a PDF document; an application program; a point; a line; a polygon; an ellipse; a freehand brush stroke. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the data format of the first augmented reality data is JSON type. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
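By way of illustration only, and consistent with the JSON format mentioned above, the first augmented reality data might be laid out as follows; the field names, the nesting, and the link entries describing the anchor-scene association are illustrative assumptions rather than a mandated schema:

```python
import json

# One possible JSON layout for the first augmented reality data (illustrative only).
first_augmented_reality_data = {
    "anchors": [{"anchor_id": "anchor-001", "anchor_type": "picture",
                 "anchor_resource_url": "https://example.com/anchors/poster.jpg"}],
    "scenes": [{"scene_id": "scene-001",
                "object_ids": ["obj-001"]}],                     # object identification information
    "links": [{"anchor_id": "anchor-001", "scene_id": "scene-001"}],  # anchor-scene association
    "objects": [{"object_id": "obj-001",
                 "object_type": "3d_model",
                 "object_resource_url": "https://example.com/models/arrow.glb",
                 "transform": [0.2, 0.0, 1.5],                   # position relative to the anchor
                 "rotation": [0.0, 90.0, 0.0],                   # pose (rotation angles)
                 "scale": [1.0, 1.0, 1.0]}],
}
print(json.dumps(first_augmented_reality_data, indent=2))
```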
Fig. 7 shows a structure of a second user equipment for presenting augmented reality data according to an embodiment of the application, the equipment comprising a two-one module 21, a two-two module 22 and a two-three module 23. The two-one module 21 is configured to obtain, in a multi-person video scene of a first user and a second user, first real-time pose information, provided by a first user equipment corresponding to the first user, of the first user equipment relative to a target anchor point in each current video frame of a first video stream corresponding to the first user; the two-two module 22 is configured to determine, in response to obtaining first augmented reality data provided by the first user equipment, presentation position information of at least one piece of augmented reality presentation information on a current video frame of the first video stream according to the first augmented reality data and the first real-time pose information, where the first augmented reality data includes scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to at least one piece of augmented reality presentation information located in the at least one scene; and the two-three module 23 is configured to superimpose and present the at least one piece of augmented reality presentation information on the current video frame of the first video stream according to the presentation position information.
The two-one module 21 is configured to obtain, in a multi-person video scene of the first user and the second user, first real-time pose information, provided by the first user equipment, of the first user equipment relative to the target anchor point in each current video frame of the first video stream corresponding to the first user. In some embodiments, the related content is described in detail above, and will not be described herein.
The two-two module 22 is configured to determine, in response to acquiring the first augmented reality data provided by the first user device, presentation position information of at least one piece of augmented reality presentation information on a current video frame of the first video stream according to the first augmented reality data and the first real-time pose information, where the first augmented reality data includes scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to at least one piece of augmented reality presentation information located in the at least one scene. In some embodiments, in the process of the multi-person video scene, the first user equipment sends the first video stream corresponding to the first user to the second user equipment corresponding to the second user in real time; while presenting the first video stream, the second user equipment determines presentation position information of each piece of augmented reality presentation information in the first augmented reality data on each current video frame of the first video stream according to the first augmented reality data provided by the first user equipment and the first real-time pose information of the first user equipment relative to the target anchor point in each current video frame, for example, according to the first position information of each piece of augmented reality presentation information relative to the target anchor point and the first real-time pose information.
The two-three module 23 is configured to superimpose and present the at least one piece of augmented reality presentation information on the current video frame of the first video stream according to the presentation position information. In some embodiments, for each piece of augmented reality presentation information in the first augmented reality data, the augmented reality presentation information is superimposed and presented on the current video frame according to its presentation position information on each current video frame of the first video stream.
In some embodiments, the apparatus is further to: receiving first incremental data sent by the first user equipment, wherein the first incremental data is generated by the first user equipment in response to an updating operation executed by the first user on the first augmented reality data; and superposing and presenting at least one piece of augmented reality presentation information on the first video stream according to the first increment data and the first real-time pose information. The related operations are the same as or similar to those of the embodiment shown in fig. 2, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the apparatus is further to: generating second incremental data in response to an update operation performed by the second user on the first augmented reality data; and sending the second incremental data to the first user equipment, so that the first user equipment superimposes and presents at least one piece of augmented reality presentation information on the first video stream according to the second incremental data and the first real-time pose information. The related operations are the same as or similar to those of the embodiment shown in fig. 2, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the apparatus is further to: and sending the second incremental data to third user equipment, wherein the third user equipment identifies a target anchor point corresponding to the second incremental data in a camera live-action picture of the third user equipment according to the second incremental data when the third user is not in the multi-person video scene, and superimposes and presents at least one piece of augmented reality presentation information on the camera live-action picture. The related operations are the same as or similar to those of the embodiment shown in fig. 2, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the apparatus is further to: receiving third incremental data sent by third user equipment, wherein the third incremental data is generated by the third user equipment in response to an updating operation performed by the third user on the first augmented reality data, wherein the third user is not in the multi-person video scene, and the third user equipment identifies the target anchor point in a camera live-action picture of the third user equipment according to the first augmented reality data provided by the first user equipment, and superimposes and presents the at least one piece of augmented reality presentation information on the camera live-action picture; and superimposing and presenting at least one piece of augmented reality presentation information on the first video stream according to the third incremental data and the first real-time pose information. The related operations are the same as or similar to those of the embodiment shown in fig. 2, and thus are not described in detail herein, and are incorporated by reference.
Fig. 8 shows a structure diagram of a third user device for presenting augmented reality data according to one embodiment of the application, the device comprising a three-one module 31 and a three-two module 32. The three-one module 31 is configured to obtain first augmented reality data provided by a first user device in a multi-person video scene of a first user and a second user, where the third user is not in the multi-person video scene, and where the first augmented reality data includes anchor point data information corresponding to a target anchor point, scene data information corresponding to at least one scene associated with the target anchor point, and object data information corresponding to at least one piece of augmented reality presentation information located in the at least one scene; and the three-two module 32 is configured to identify the target anchor point in the camera live-action picture according to the anchor point data information, and superimpose and present the at least one piece of augmented reality presentation information on the camera live-action picture based on the real-time pose information of the third user equipment relative to the target anchor point and the first augmented reality data.
The three-one module 31 is configured to obtain, in a multi-person video scene of a first user and a second user, first augmented reality data provided by a first user device, where the third user is not in the multi-person video scene, and where the first augmented reality data includes anchor point data information corresponding to a target anchor point, scene data information corresponding to at least one scene associated with the target anchor point, and object data information corresponding to at least one piece of augmented reality presentation information located in the at least one scene. In some embodiments, the first augmented reality data is described in detail above, and will not be described herein. In some embodiments, the third user is not in the multi-person video scene, i.e., the third user does not participate in the video call or video conference in which the first user and the second user are located. In some embodiments, the first user device may send the first augmented reality data directly to a third user device corresponding to the third user, or send the first augmented reality data to the third user device via the network device; alternatively, the first user device may first send the first augmented reality data to the network device, which stores it locally, and the third user device then obtains the first augmented reality data from the network device.
The three-two module 32 is configured to identify the target anchor point in the camera live-action picture according to the anchor point data information, and to superimpose and present the at least one piece of augmented reality presentation information on the camera live-action picture based on the real-time pose information of the third user equipment relative to the target anchor point and the first augmented reality data. In some embodiments, a camera on the third user device is opened and captures a live-action picture. Based on the obtained first augmented reality data provided by the first user device, the third user device identifies the target anchor point in the real environment (for example, in the camera live-action picture) according to the anchor point data information in the first augmented reality data (such as the anchor point resources and/or other information in the anchor point data information). If the target anchor point is identified, the third user device determines the position of the at least one piece of augmented reality presentation information in the camera live-action picture based on the first position information in the object data information corresponding to the at least one piece of augmented reality presentation information contained in the scene associated with the target anchor point and the real-time pose information of the third user device relative to the target anchor point, and presents the at least one piece of augmented reality presentation information on the camera live-action picture according to that position. For example, a display device of the third user device displays the live-action picture shot by the camera, and the at least one piece of augmented reality presentation information contained in the scene associated with the target anchor point is superimposed on the live-action picture displayed on the display device, so that the third user, although not in the multi-person video scene, can still view the at least one piece of augmented reality presentation information in the live-action picture of the third user device.
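By way of illustration only, the following Python sketch outlines the per-frame flow on the third user device: detect the target anchor point in the camera live-action picture, estimate the device pose relative to it, and superimpose each piece of presentation information; the detection, pose-estimation, projection, and drawing helpers are placeholders to be supplied by whatever AR runtime is used:

```python
def render_on_live_view(camera_frames, ar_data, detect_anchor, estimate_pose,
                        project_to_frame, draw):
    """Per-frame loop on the third user device (all helper callables are
    placeholders for a concrete AR runtime)."""
    anchor_info = ar_data["anchors"][0]               # target anchor point data information
    for frame in camera_frames:                       # live-action camera pictures
        detection = detect_anchor(frame, anchor_info)
        if detection is None:
            draw(frame, [])                           # anchor not found: show plain picture
            continue
        T_anchor_to_camera, K = estimate_pose(detection)   # real-time pose vs. the anchor
        overlays = []
        for obj in ar_data["objects"]:
            uv = project_to_frame(obj["transform"], T_anchor_to_camera, K)
            if uv is not None:
                overlays.append((obj, uv))            # where this object lands in the frame
        draw(frame, overlays)                         # superimpose on the live-action picture
```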
In some embodiments, the apparatus is further to: receiving first incremental data sent by the first user equipment, wherein the first incremental data is generated by the first user equipment in response to an updating operation executed by the first user on the first augmented reality data; and identifying a target anchor point corresponding to the first incremental data in the camera live-action picture according to the first incremental data, and superposing and presenting at least one piece of augmented reality presentation information on the camera live-action picture. The related operations are the same as or similar to those of the embodiment shown in fig. 3, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the apparatus is further to: receiving second incremental data sent by second user equipment, wherein the second incremental data is generated by the second user equipment in response to an updating operation executed by the second user for the first augmented reality data; and identifying a target anchor point corresponding to the second incremental data in the camera live-action picture according to the second incremental data, and superposing and presenting at least one piece of augmented reality presentation information on the camera live-action picture. The related operations are the same as or similar to those of the embodiment shown in fig. 3, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the apparatus is further to: generating third incremental data in response to an update operation performed by the third user on the first augmented reality data; and sending the third incremental data to second user equipment, so that the second user equipment superimposes and presents at least one piece of augmented reality presentation information on the first video stream according to the third incremental data and the first real-time pose information. The related operations are the same as or similar to those of the embodiment shown in fig. 3, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the apparatus is further to: and sending the third incremental data to first user equipment, so that the first user equipment superimposes and presents at least one piece of augmented reality presentation information on the first video stream according to the third incremental data and the first real-time pose information. The related operations are the same as or similar to those of the embodiment shown in fig. 3, and thus are not described in detail herein, and are incorporated by reference.
Fig. 9 shows a structure diagram of a first user equipment for presenting augmented reality data according to an embodiment of the application, the device comprising a four-one module 41, a four-two module 42 and a four-three module 43. The four-one module 41 is configured to perform a positioning operation on a first video stream corresponding to a first user based on a target anchor point in a multi-person video scene of the first user and a second user, determine first real-time pose information of the first user equipment relative to the target anchor point in each current video frame of the first video stream, and provide the first real-time pose information to a second user equipment, so that the second user can add at least one piece of augmented reality presentation information on the first video stream and the second user equipment can superimpose and present the at least one piece of augmented reality presentation information on the first video stream according to the first real-time pose information; the four-two module 42 is configured to obtain first augmented reality data provided by the second user device, where the first augmented reality data includes object data information corresponding to the at least one piece of augmented reality presentation information; and the four-three module 43 is configured to superimpose and present the at least one piece of augmented reality presentation information on the first video stream according to the first augmented reality data and the first real-time pose information.
The four-one module 41 is configured to perform a positioning operation on a first video stream corresponding to a first user based on a target anchor point in a multi-person video scene of the first user and a second user, determine first real-time pose information of the first user equipment relative to the target anchor point in each current video frame of the first video stream, and provide the first real-time pose information to the second user equipment, so that the second user can add at least one piece of augmented reality presentation information on the first video stream and the second user equipment can superimpose and present the at least one piece of augmented reality presentation information on the first video stream according to the first real-time pose information. In some embodiments, the multi-person video scene includes, but is not limited to, any of a multi-person video conference or a multi-person video call. In some embodiments, there may be one or more second users, i.e., the multi-person video scene is a video conference or video call between the first user and the one or more second users. In some embodiments, the first user device is a device with a camera and a display screen; for example, the first user device may be a mobile phone or a tablet computer, or the first user device may be a wearable AR device such as AR glasses or an AR helmet. In some embodiments, the first user obtains, through the camera, a first video stream corresponding to the live-action environment where the first user is currently located, and sends the first video stream to each second user in real time. In some embodiments, there may be one or more target anchor points, where an anchor point is used to express an association between physical space and digital space. In some embodiments, the target anchor point is determined before the first video stream is captured; for example, the first user device captures the current environment through the camera, performs feature recognition on an object in the captured picture, creates an anchor point, and determines the anchor point as the target anchor point; for another example, if one or more created anchor points already exist in the captured picture, the first user selects the target anchor point among the one or more anchor points. The first video stream corresponding to the current live-action environment is then obtained and sent to each second user in real time. In other embodiments, the target anchor point is determined after the first video stream is captured; for example, one or more created anchor points already exist in the first video stream and the first user selects the target anchor point among them, or the target anchor point is created by feature recognition of an object in the first video stream. In some embodiments, by identifying the target anchor point, a positioning operation may be performed on each current video frame of the first video stream, first real-time pose information of the camera of the first user device relative to the target anchor point in each current video frame may be determined, and the first real-time pose information may be transmitted to each second user; for example, the positioning operation may include, but is not limited to, one or more of simultaneous localization and mapping (SLAM), 2D recognition, Wi-Fi/Bluetooth recognition, GPS positioning, large-space positioning, and the like.
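By way of illustration only, the following Python sketch shows the kind of loop the first user device might run to perform the positioning operation per frame and provide the first real-time pose information to each peer in real time; the capture, localization, and transport callables are placeholders, not part of the application:

```python
import time

def stream_with_pose(capture_frame, locate_against_anchor, send_to_peers):
    """Minimal sketch of the first user device loop: for every captured frame,
    run the positioning operation (e.g. SLAM or 2D recognition) against the
    target anchor and send the frame plus the real-time pose to each peer.
    All three callables are placeholders for real capture, tracking and
    transport layers."""
    while True:
        frame = capture_frame()
        pose = locate_against_anchor(frame)   # first real-time pose info, or None if tracking is lost
        send_to_peers({"frame": frame,
                       "pose": pose,
                       "timestamp": time.time()})
```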
In some embodiments, during the multi-person video scene, the first user device sends the first video stream corresponding to the first user to the second user device corresponding to each second user in real time, and the first user device provides the first real-time pose information to the second user device in real time. When the second user device presents the first video stream corresponding to the first user, the second user may add at least one piece of augmented reality presentation information on the first video stream, and the second user device generates corresponding first augmented reality data in response to the adding operation of the second user. The first augmented reality data includes object data information corresponding to the at least one piece of augmented reality presentation information; alternatively, the first augmented reality data includes scene data information corresponding to at least one scene and object data information corresponding to the at least one piece of augmented reality presentation information added by the second user in the at least one scene, where the at least one scene may be added by the second user on the first video stream, with the second user device generating the corresponding scene data information, or may be a scene previously added by the first user for the target anchor point; the first augmented reality data is then sent to the first user device. In some embodiments, the target anchor point corresponds to anchor point data information, an association relationship between the anchor point data information and the scene data information is established in response to an association operation performed by the first user or the second user, and at least one piece of link data information for describing the association relationship is generated, where the augmented reality presentation information, the object data information, the target anchor point, the scene data information, and the link data information are described in detail above and are not described herein again.
In some embodiments, the object data information corresponding to each piece of augmented reality presentation information includes first position information of the augmented reality presentation information relative to the target anchor point, where the first position information may include only a three-dimensional position in space, preferably, the first position information may include other information besides the three-dimensional position in space, and the other information includes, but is not limited to, one or more items of gesture information (rotation angle), scale information, and the like, for example, gesture information of the augmented reality presentation information may be determined through a positioning operation, for example, an angle of the augmented reality presentation information is determined according to a gravitational acceleration, so as to ensure that the augmented reality presentation information is forward placed in a physical space. For example, a transform (three-dimensional position information) in the object data information is used to store a spatial three-dimensional position in the first position information, and, for example, a rotation (rotation angle) in the object data information is used to store posture information in the first position information. In some embodiments, the first user device may send anchor point data information corresponding to the target anchor point to the second user device, the second user device performs positioning operation on the first video stream according to the anchor point data information based on the target anchor point, determines first position information of at least one augmented reality presentation information added by the second user relative to the target anchor point, or the second user device may send the first augmented reality data and the added position information of the at least one augmented reality presentation information in the first video stream to the first user device, the first user device performs positioning operation on the first video stream based on the target anchor point, determines first position information of at least one augmented reality presentation information added by the second user relative to the target anchor point, and updates the first augmented reality data, so that the first position information of the augmented reality presentation information relative to the target anchor point is included in the object data information corresponding to each augmented reality presentation information, and then the first user device synchronizes the updated first augmented reality data to the second user device, where the anchor point data information and the positioning operation are not described in detail.
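By way of illustration only, the gravity-based orientation mentioned above could be computed roughly as follows; the function name and the choice of reference axis are illustrative assumptions:

```python
import numpy as np

def upright_rotation_from_gravity(gravity_in_anchor):
    """Build a rotation matrix whose 'up' axis opposes the measured gravitational
    acceleration, so the presentation information is placed upright (forward) in
    physical space; a sketch of the gravity-based orientation described above."""
    up = -np.asarray(gravity_in_anchor, dtype=float)
    up /= np.linalg.norm(up)
    # Pick any axis not parallel to 'up' as a reference to derive a horizontal direction.
    ref = np.array([1.0, 0.0, 0.0]) if abs(up[0]) < 0.9 else np.array([0.0, 0.0, 1.0])
    right = np.cross(ref, up)
    right /= np.linalg.norm(right)
    forward = np.cross(right, up)                     # completes a right-handed frame
    return np.column_stack([right, up, forward])      # columns: x (right), y (up), z (forward)
```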
In some embodiments, when the second user equipment presents the first video stream, according to the first position information of each piece of augmented reality presentation information in the first augmented reality data relative to the target anchor point and the first real-time pose information of the first user equipment relative to the target anchor point in each current video frame of the first video stream, the presentation position information of the augmented reality presentation information on the current video frame can be calculated, and according to the presentation position information, the augmented reality presentation information is presented on the current video frame in a superposition mode.
The four-two module 42 is configured to obtain first augmented reality data provided by the second user device, where the first augmented reality data includes object data information corresponding to the at least one piece of augmented reality presentation information. In some embodiments, the object data information corresponding to the at least one piece of augmented reality presentation information included in the first augmented reality data provided by the second user equipment includes first position information of the at least one piece of augmented reality presentation information relative to the target anchor point. In other embodiments, the first user equipment acquires the first augmented reality data provided by the second user equipment and the adding position information of the at least one piece of augmented reality presentation information in the first video stream, where the first augmented reality data includes the object data information corresponding to the at least one piece of augmented reality presentation information; according to the adding position information, the first user equipment performs a positioning operation on the first video stream based on the target anchor point, determines first position information of the at least one piece of augmented reality presentation information relative to the target anchor point, and updates the first augmented reality data so that the first position information is included in the object data information.
The four-three module 43 is configured to superimpose and present the at least one piece of augmented reality presentation information on the first video stream according to the first augmented reality data and the first real-time pose information. In some embodiments, when the first user equipment presents the first video stream, the presentation position information of each piece of augmented reality presentation information on the current video frame can be calculated according to the first position information of that presentation information in the first augmented reality data relative to the target anchor point and the first real-time pose information of the first user equipment relative to the target anchor point in each current video frame of the first video stream, and the augmented reality presentation information is superimposed and presented on the current video frame according to the presentation position information.
In some embodiments, the four-two module 42 is configured to: acquiring first augmented reality data provided by the second user equipment and adding position information of the at least one piece of augmented reality presentation information in a first video stream; according to the added position information, positioning the first video stream based on the target anchor point, determining first position information of the at least one augmented reality presentation information relative to the target anchor point, and updating the first augmented reality data so that the first position information is included in the object data information; wherein the apparatus is further for: and providing the updated first augmented reality data to the second user equipment, so that the second user equipment superimposes and presents the at least one piece of augmented reality presentation information on the first video stream according to the first real-time pose information and the updated first augmented reality data. The related operations are the same as or similar to those of the embodiment shown in fig. 4, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the apparatus is further to: and providing the updated first augmented reality data to third user equipment, wherein the third user is not in the multi-user video scene, the target anchor point is identified in the camera live-action picture according to the updated first augmented reality data in the camera live-action picture of the third user equipment, and the at least one augmented reality presentation information is presented in a superposition mode on the camera live-action picture. The related operations are the same as or similar to those of the embodiment shown in fig. 4, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the apparatus is further to: providing the anchor point data information corresponding to the target anchor point to the second user equipment, so that the second user equipment performs positioning operation on the first video stream according to the anchor point data information, and determines first position information of the at least one augmented reality presentation information relative to the target anchor point; and receiving first augmented reality data provided by the second user equipment, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information, and the object data information comprises the first position information. The related operations are the same as or similar to those of the embodiment shown in fig. 4, and thus are not described in detail herein, and are incorporated by reference.
Fig. 10 shows a structure diagram of a second user device for presenting augmented reality data according to one embodiment of the application, the device comprising a five-one module 51, a five-two module 52, a five-three module 53 and a five-four module 54. The five-one module 51 is configured to obtain, in a multi-person video scene of a first user and a second user, first real-time pose information, provided by a first user device, of the first user device relative to a target anchor point in each current video frame of a first video stream corresponding to the first user; the five-two module 52 is configured to obtain first augmented reality data corresponding to at least one piece of augmented reality presentation information added by the second user on the first video stream, where the first augmented reality data includes object data information corresponding to the at least one piece of augmented reality presentation information, and the object data information includes first position information of the at least one piece of augmented reality presentation information relative to the target anchor point; the five-three module 53 is configured to determine presentation position information of the at least one piece of augmented reality presentation information on a current video frame of the first video stream according to the first augmented reality data and the first real-time pose information; and the five-four module 54 is configured to superimpose and present the at least one piece of augmented reality presentation information on the current video frame of the first video stream according to the presentation position information.
The five-one module 51 is configured to obtain, in a multi-person video scene of a first user and a second user, first real-time pose information, provided by a first user device, of the first user device relative to a target anchor point in each current video frame of a first video stream corresponding to the first user. In some embodiments, the multi-person video scene includes, but is not limited to, any of a multi-person video conference or a multi-person video call. In some embodiments, there may be one or more second users, i.e., the multi-person video scene is a video conference or video call between the first user and the one or more second users. In some embodiments, the first user device is a device with a camera and a display screen; for example, the first user device may be a mobile phone or a tablet computer, or the first user device may be a wearable AR device such as AR glasses or an AR helmet. In some embodiments, the first user obtains, through the camera, a first video stream corresponding to the live-action environment where the first user is currently located, and sends the first video stream to each second user in real time. In some embodiments, there may be one or more target anchor points, where an anchor point is used to express an association between physical space and digital space. In some embodiments, the target anchor point is determined before the first video stream is captured; for example, the first user device captures the current environment through the camera, performs feature recognition on an object in the captured picture, creates an anchor point, and determines the anchor point as the target anchor point; for another example, if one or more created anchor points already exist in the captured picture, the first user selects the target anchor point among the one or more anchor points. The first video stream corresponding to the current live-action environment is then obtained and sent to each second user in real time. In other embodiments, the target anchor point is determined after the first video stream is captured; for example, one or more created anchor points already exist in the first video stream and the first user selects the target anchor point among them, or the target anchor point is created by feature recognition of an object in the first video stream. In some embodiments, by identifying the target anchor point, a positioning operation may be performed on each current video frame of the first video stream, first real-time pose information of the camera of the first user device relative to the target anchor point in each current video frame may be determined, and the first real-time pose information may be transmitted to each second user; for example, the positioning operation may include, but is not limited to, one or more of simultaneous localization and mapping (SLAM), 2D recognition, Wi-Fi/Bluetooth recognition, GPS positioning, large-space positioning, and the like.
In some embodiments, during the multi-person video scene, the first user device sends the first video stream corresponding to the first user to the second user device corresponding to each second user in real time, and the first user device provides the first real-time pose information to the second user device in real time. When the second user device presents the first video stream corresponding to the first user, the second user may add at least one piece of augmented reality presentation information on the first video stream, and the second user device generates corresponding first augmented reality data in response to the adding operation of the second user. The first augmented reality data includes object data information corresponding to the at least one piece of augmented reality presentation information; alternatively, the first augmented reality data includes scene data information corresponding to at least one scene and object data information corresponding to the at least one piece of augmented reality presentation information added by the second user in the at least one scene, where the at least one scene may be added by the second user on the first video stream, with the second user device generating the corresponding scene data information, or may be a scene previously added by the first user for the target anchor point; the first augmented reality data is then sent to the first user device. In some embodiments, the target anchor point corresponds to anchor point data information, an association relationship between the anchor point data information and the scene data information is established in response to an association operation performed by the first user or the second user, and at least one piece of link data information for describing the association relationship is generated, where the augmented reality presentation information, the object data information, the target anchor point, the scene data information, and the link data information are described in detail above and are not described herein again.
The five-two module 52 is configured to obtain first augmented reality data corresponding to at least one piece of augmented reality presentation information added by the second user on the first video stream, where the first augmented reality data includes object data information corresponding to the at least one piece of augmented reality presentation information, and the object data information includes first position information of the at least one piece of augmented reality presentation information relative to the target anchor point. In some embodiments, the object data information corresponding to each piece of augmented reality presentation information includes first position information of the augmented reality presentation information relative to the target anchor point, where the first position information may include only a three-dimensional position in space; preferably, the first position information may include other information besides the three-dimensional position in space, including but not limited to one or more items of gesture information (rotation angle), scale information, and the like. For example, the gesture information of the augmented reality presentation information may be determined through a positioning operation, such as determining the angle of the augmented reality presentation information according to the gravitational acceleration, so as to ensure that the augmented reality presentation information is placed upright in physical space. For example, a transform (three-dimensional position information) field in the object data information is used to store the spatial three-dimensional position in the first position information, and a rotation (rotation angle) field in the object data information is used to store the gesture information in the first position information. In some embodiments, the first user device may send anchor point data information corresponding to the target anchor point to the second user device, and the second user device performs a positioning operation on the first video stream according to the anchor point data information based on the target anchor point and determines first position information of the at least one piece of augmented reality presentation information added by the second user relative to the target anchor point; alternatively, the second user device may send the first augmented reality data and the adding position information of the at least one piece of augmented reality presentation information in the first video stream to the first user device, the first user device performs a positioning operation on the first video stream based on the target anchor point, determines first position information of the at least one piece of augmented reality presentation information added by the second user relative to the target anchor point, and updates the first augmented reality data so that the object data information corresponding to each piece of augmented reality presentation information includes the first position information of that presentation information relative to the target anchor point; the first user device then synchronizes the updated first augmented reality data to the second user device. The anchor point data information and the positioning operation are not described in detail herein.
In some embodiments, the first user equipment obtains first augmented reality data provided by the second user equipment and adding position information of the at least one piece of augmented reality presentation information in a first video stream, where the first augmented reality data includes object data information corresponding to the at least one piece of augmented reality presentation information; and according to the added position information, positioning the first video stream based on the target anchor point, determining first position information of the at least one piece of augmented reality presentation information relative to the target anchor point, and updating the first augmented reality data so that the first position information is included in the object data information. In some embodiments, the second user device receives updated first augmented reality data provided by the first user device, where the updated first augmented reality data includes object data information corresponding to the at least one augmented reality presentation information, and each of the object data information corresponding to the augmented reality presentation information includes first location information of the augmented reality presentation information relative to the target anchor point.
The five-three module 53 is configured to determine presentation position information of the at least one piece of augmented reality presentation information on a current video frame of the first video stream according to the first augmented reality data and the first real-time pose information. In some embodiments, while presenting the first video stream, the second user device determines presentation position information of each piece of augmented reality presentation information in the first augmented reality data on the current video frame according to the currently received first augmented reality data provided by the first user device and the first real-time pose information of the first user device relative to the target anchor point in each current video frame of the first video stream, for example, according to the first position information of each piece of augmented reality presentation information in the first augmented reality data relative to the target anchor point and the first real-time pose information.
The five-four module 54 is configured to superimpose and present the at least one piece of augmented reality presentation information on a current video frame of the first video stream according to the presentation position information. In some embodiments, for each piece of augmented reality presentation information in the first augmented reality data, the augmented reality presentation information is superimposed and presented on the current video frame according to its presentation position information on each current video frame of the first video stream.
In some embodiments, the five-two module 52 is configured to: determining first augmented reality data corresponding to at least one piece of augmented reality presentation information and adding position information of the at least one piece of augmented reality presentation information in a first video stream in response to an adding operation performed by the second user on the first video stream, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information; providing the first augmented reality data and the added position information to the first user equipment, so that the first user equipment performs positioning operation on the first video stream based on the target anchor point according to the added position information, determines first position information of the at least one augmented reality presentation information relative to the target anchor point, and updates the first augmented reality data so that the first position information is included in the object data information; and receiving updated first augmented reality data provided by the first user equipment. The related operations are the same as or similar to those of the embodiment shown in fig. 5, and thus are not described in detail herein, and are incorporated by reference.
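By way of illustration only, the following Python sketch outlines the exchange described for the five-two module: the second user device ships the added object and its on-video add position to the first user device, which resolves the anchor-relative first position information and returns the updated data; both transport callables and the field names are placeholders, not part of the application:

```python
def second_user_add_flow(added_object, add_position_px, send_to_first_device,
                         receive_from_first_device):
    """Sketch of the second-device side of the add-and-sync exchange."""
    first_ar_data = {"objects": [added_object]}                # no anchor-relative position yet
    send_to_first_device({"augmented_reality_data": first_ar_data,
                          "add_position": add_position_px})   # pixel position in the first video stream
    updated = receive_from_first_device()                      # now contains e.g. object "transform"
    return updated                                             # updated first augmented reality data
```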
In some embodiments, the apparatus is further to: acquiring anchor point data information corresponding to the target anchor point provided by the first user equipment; wherein, the five-two module 52 is configured to: determining first augmented reality data corresponding to at least one piece of augmented reality presentation information in response to an adding operation performed by the second user on the first video stream, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information; positioning the first video stream according to the anchor point data information, and determining first position information of the at least one augmented reality presentation information relative to the target anchor point so that the object data information comprises the first position information; and providing the first augmented reality data to the first user equipment, so that the first user equipment superimposes and presents the at least one piece of augmented reality presentation information on the first video stream according to the first augmented reality data and the first real-time pose information. The related operations are the same as or similar to those of the embodiment shown in fig. 5, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the apparatus is further to: and providing the first augmented reality data to third user equipment, wherein the third user is not in the multi-person video scene, identifying the target anchor point in a camera live-action picture of the third user equipment according to the first augmented reality data, and overlaying and presenting the at least one augmented reality presentation information on the camera live-action picture. The related operations are the same as or similar to those of the embodiment shown in fig. 5, and thus are not described in detail herein, and are incorporated by reference.
In addition to the methods and apparatus described in the above embodiments, the present application also provides a computer-readable storage medium storing computer code which, when executed, performs a method as described in any one of the preceding claims.
The application also provides a computer program product which, when executed by a computer device, performs a method as claimed in any preceding claim.
The present application also provides a computer device comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
FIG. 11 illustrates an exemplary system that may be used to implement various embodiments described in the present disclosure.
In some embodiments, as shown in FIG. 11, the system 300 can function as any of the devices of the various described embodiments. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement the modules to perform the actions described in the present application.
For one embodiment, the system control module 310 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 305 and/or any suitable device or component in communication with the system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
The system memory 315 may be used, for example, to load and store data and/or instructions for the system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as, for example, a suitable DRAM. In some embodiments, the system memory 315 may comprise double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable nonvolatile memory (e.g., flash memory) and/or may include any suitable nonvolatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or which may be accessed by the device without being part of the device. For example, NVM/storage 320 may be accessed over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. The system 300 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic of one or more controllers (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic of one or more controllers of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die as logic of one or more controllers of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic of one or more controllers of the system control module 310 to form a system on chip (SoC).
In various embodiments, the system 300 may be, but is not limited to being: a server, workstation, desktop computing device, or mobile computing device (e.g., laptop computing device, handheld computing device, tablet, netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, keyboards, Liquid Crystal Display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, Application Specific Integrated Circuits (ASICs), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, using application-specific integrated circuits (ASICs), a general-purpose computer, or any other similar hardware device. In one embodiment, the software program of the present application may be executed by a processor to perform the steps or functions described above. Likewise, the software programs of the present application (including associated data structures) may be stored on a computer-readable recording medium, such as a RAM, a magnetic or optical drive, or a diskette. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform the various steps or functions.
Furthermore, portions of the present application may be implemented as a computer program product, such as computer program instructions which, when executed by a computer, may invoke or provide methods and/or technical solutions in accordance with the present application through the operation of the computer. Those skilled in the art will appreciate that computer program instructions may be present in a computer-readable medium in forms including, but not limited to, source files, executable files, and installation package files; accordingly, the manner in which a computer executes the instructions includes, but is not limited to: the computer executes the instructions directly; the computer compiles the instructions and then executes the corresponding compiled program; the computer reads and executes the instructions; or the computer reads and installs the instructions and then executes the corresponding installed program. Herein, the computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
Communication media include media by which a communication signal containing, for example, computer-readable instructions, data structures, program modules, or other data is transferred from one system to another. Communication media may include guided (wired) transmission media, such as cables and wires (e.g., optical fiber and coaxial cable), and wireless (unguided) media capable of propagating energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared media. Computer-readable instructions, data structures, program modules, or other data may be embodied, for example, as a modulated data signal in a wireless medium such as a carrier wave or a similar mechanism, for instance as part of spread-spectrum technology. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital, or a hybrid modulation technique.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); nonvolatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferroelectric/ferromagnetic memory (MRAM, FeRAM); magnetic and optical storage devices (hard disk, tape, CD, DVD); and any other medium, now known or later developed, capable of storing computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform the methods and/or technical solutions according to the embodiments of the application described above.
It will be evident to those skilled in the art that the application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the apparatus claims can also be implemented by means of one unit or means in software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.
Claims (35)
1. A method for presenting augmented reality data, applied to a first user device, wherein the method comprises:
in a multi-user video scene of a first user and a second user, positioning a first video stream corresponding to the first user based on a target anchor point, determining first real-time pose information of the first user equipment in each current video frame of the first video stream relative to the target anchor point, and providing the first real-time pose information to the second user equipment;
providing first augmented reality data to a second user device, so that the at least one piece of augmented reality presentation information is presented on the first video stream in a superposition mode on the second user device according to the first augmented reality data and the first real-time pose information, wherein the first augmented reality data comprises scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to at least one piece of augmented reality presentation information located in the at least one scene.
2. The method of claim 1, wherein the method further comprises:
determining the target anchor point according to a first operation executed by the first user;
determining scene data information corresponding to at least one scene associated with the target anchor point according to a second operation performed by the first user on the target anchor point;
and determining object data information corresponding to at least one piece of augmented reality presentation information positioned in the at least one scene according to a third operation executed by the first user on the at least one scene.
3. The method of claim 2, wherein at least one of the first operation, the second operation, and the third operation is performed with respect to the first video stream.
4. The method of claim 1, wherein the providing the first augmented reality data to a second user device comprises:
the first augmented reality data is transmitted to a network device, wherein the first augmented reality data is stored on the network device to cause the second user device to acquire the first augmented reality data from the network device.
5. The method of claim 1 or 4, wherein the first augmented reality data is stored on the first user device and the second user device.
6. The method of claim 5, wherein the method further comprises:
generating first incremental data in response to an update operation performed by the first user on the first augmented reality data, and providing the first incremental data to the second user equipment, so that the second user equipment superimposes and presents at least one piece of augmented reality presentation information on the first video stream according to the first incremental data and the first real-time pose information.
7. The method of claim 5, wherein the method further comprises:
receiving second incremental data sent by the second user equipment, wherein the second incremental data is generated by the second user equipment in response to an updating operation performed by the second user on the first augmented reality data;
and superposing and presenting at least one piece of augmented reality presentation information on the first video stream according to the second incremental data and the first real-time pose information.
8. The method of claim 1, wherein the first augmented reality data further comprises anchor point data information corresponding to the target anchor point;
the anchor point data information comprises an anchor point type, and also comprises anchor point resources or anchor point resource address information;
Wherein the anchor point type includes any one of the following:
a picture;
picture feature points;
a point cloud;
a point cloud map;
a two-dimensional code;
a two-dimensional code;
a cylinder;
a cube;
a geographic location;
a face;
a bone;
a wireless signal.
9. The method of claim 1, wherein the method further comprises:
and providing the first augmented reality data to third user equipment, wherein the third user is not in the multi-person video scene, identifying the target anchor point in a camera live-action picture of the third user equipment according to the first augmented reality data, and overlaying and presenting the at least one augmented reality presentation information on the camera live-action picture.
10. The method of claim 9, wherein the method further comprises:
generating first incremental data in response to an update operation of the first user on the first augmented reality data, and providing the first incremental data for the third user equipment, so that the third user equipment identifies a target anchor point corresponding to the first incremental data in the camera live-action picture according to the first incremental data, and superimposes and presents at least one piece of augmented reality presentation information on the camera live-action picture.
11. The method according to claim 9 or 10, wherein the method further comprises:
receiving third incremental data sent by the third user equipment, wherein the third incremental data is generated by the third user equipment in response to an updating operation performed by the third user on the first augmented reality data;
and superposing and presenting at least one piece of augmented reality presentation information on the first video stream according to the third incremental data and the first real-time pose information.
12. The method according to claim 1, wherein the scene data information corresponding to each scene includes object identification information of object data information included in the scene.
13. The method of claim 1, wherein the object data information includes object type information, and the object data information further includes object resource or object resource address information;
wherein the object type information includes any one of the following:
a 3D model;
characters;
a picture;
audio frequency;
video;
a web page;
PDF documents;
an application program;
a dot;
a wire;
a polygon;
an ellipse;
a free brush.
14. The method of claim 1, wherein the data format of the first augmented reality data is the JSON type.
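Claim 14 fixes only JSON as the data format; the concrete schema is not specified in this text. Purely as an illustration of how the anchor point data (claim 8), scene data (claim 12) and object data (claim 13) enumerated above could be laid out, here is a hypothetical example expressed with Python's json module; every field name and URL in it is an assumption made for this sketch, not a format prescribed by the application.

```python
import json

# Hypothetical layout of the first augmented reality data; all field names are
# illustrative assumptions and are not prescribed by this application.
first_ar_data = {
    "anchor": {
        "type": "picture",                     # one of the anchor types listed in claim 8
        "resourceUrl": "https://example.com/anchors/poster.png",
    },
    "scenes": [
        {
            "sceneId": "scene-1",
            "objectIds": ["obj-1"],            # object identification information (claim 12)
        }
    ],
    "objects": [
        {
            "id": "obj-1",
            "type": "3d_model",                # one of the object types listed in claim 13
            "resourceUrl": "https://example.com/models/arrow.glb",
            "position": [0.2, 0.0, 0.5],       # placement relative to the target anchor
            "rotation": [0.0, 0.0, 0.0, 1.0],  # quaternion, identity orientation
            "scale": [1.0, 1.0, 1.0],
        }
    ],
}

print(json.dumps(first_ar_data, indent=2))
```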
15. A method for presenting augmented reality data, applied to a second user device, wherein the method comprises:
in a multi-user video scene of a first user and a second user, acquiring first real-time pose information, provided by first user equipment, of the first user equipment relative to a target anchor point in each current video frame of a first video stream corresponding to the first user;
in response to obtaining first augmented reality data provided by the first user equipment, determining presentation position information of at least one piece of augmented reality presentation information on a current video frame of the first video stream according to the first augmented reality data and the first real-time pose information, wherein the first augmented reality data comprises scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to at least one piece of augmented reality presentation information located in the at least one scene;
and superposing and presenting the at least one augmented reality presentation information on the current video frame of the first video stream according to the presentation position information.
16. The method of claim 15, wherein the method further comprises:
receiving first incremental data sent by the first user equipment, wherein the first incremental data is generated by the first user equipment in response to an updating operation executed by the first user on the first augmented reality data;
and superposing and presenting at least one piece of augmented reality presentation information on the first video stream according to the first incremental data and the first real-time pose information.
17. The method of claim 15, wherein the method further comprises:
generating second incremental data in response to an update operation performed by the second user on the first augmented reality data;
and sending the second incremental data to the first user equipment, so that the first user equipment superimposes and presents at least one piece of augmented reality presentation information on the first video stream according to the second incremental data and the first real-time pose information.
18. The method of claim 17, wherein the method further comprises:
and sending the second incremental data to third user equipment, wherein the third user equipment identifies a target anchor point corresponding to the second incremental data in a camera live-action picture of the third user equipment according to the second incremental data when the third user is not in the multi-person video scene, and superimposes and presents at least one piece of augmented reality presentation information on the camera live-action picture.
19. The method of claim 15, wherein the method further comprises:
receiving third incremental data sent by third user equipment, wherein the third incremental data is generated by the third user equipment in response to an updating operation performed by the third user on the first augmented reality data, wherein the third user equipment is not in the multi-user video scene, and identifies the target anchor point in a camera live-action picture of the third user equipment according to the first augmented reality data provided by the first user equipment, and superimposes and presents the at least one augmented reality presentation information on the camera live-action picture;
and superposing and presenting at least one piece of augmented reality presentation information on the first video stream according to the third incremental data and the first real-time pose information.
20. A method for presenting augmented reality data, applied to a third user device, wherein the method comprises:
acquiring first augmented reality data provided by a first user device in a multi-person video scene of a first user and a second user, wherein a third user is not in the multi-person video scene, and the first augmented reality data comprises anchor point data information corresponding to a target anchor point, scene data information corresponding to at least one scene associated with the target anchor point and object data information corresponding to at least one augmented reality presentation information located in the at least one scene;
and identifying the target anchor point in the camera live-action picture according to the anchor point data information, and superposing and presenting the at least one piece of augmented reality presentation information on the camera live-action picture based on the real-time pose information of the third user equipment relative to the target anchor point and the first augmented reality data.
21. The method of claim 20, wherein the method further comprises:
receiving first incremental data sent by the first user equipment, wherein the first incremental data is generated by the first user equipment in response to an updating operation executed by the first user on the first augmented reality data;
and identifying a target anchor point corresponding to the first incremental data in the camera live-action picture according to the first incremental data, and superposing and presenting at least one piece of augmented reality presentation information on the camera live-action picture.
22. The method of claim 20, wherein the method further comprises:
receiving second incremental data sent by second user equipment, wherein the second incremental data is generated by the second user equipment in response to an updating operation executed by the second user for the first augmented reality data;
and identifying a target anchor point corresponding to the second incremental data in the camera live-action picture according to the second incremental data, and superposing and presenting at least one piece of augmented reality presentation information on the camera live-action picture.
23. The method of claim 20, wherein the method further comprises:
generating third incremental data in response to an update operation performed by the third user on the first augmented reality data;
and sending the third incremental data to second user equipment, so that the second user equipment superimposes and presents at least one piece of augmented reality presentation information on the first video stream according to the third incremental data and the first real-time pose information.
24. The method of claim 23, wherein the method further comprises:
and sending the third incremental data to first user equipment, so that the first user equipment superimposes and presents at least one piece of augmented reality presentation information on the first video stream according to the third incremental data and the first real-time pose information.
25. A method for presenting augmented reality data, applied to a first user device, wherein the method comprises:
In a multi-user video scene of a first user and a second user, positioning a first video stream corresponding to the first user based on a target anchor point, determining first real-time pose information of the first user equipment relative to the target anchor point in each current video frame of the first video stream, and providing the first real-time pose information to the second user equipment so that the second user adds at least one piece of augmented reality presentation information on the first video stream, and enabling the second user equipment to superimpose and present the at least one piece of augmented reality presentation information on the first video stream according to the first real-time pose information;
acquiring first augmented reality data provided by the second user equipment, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information;
and superposing and presenting the at least one piece of augmented reality presentation information on the first video stream according to the first augmented reality data and the first real-time pose information.
26. The method of claim 25, wherein the obtaining the first augmented reality data provided by the second user device comprises:
acquiring first augmented reality data provided by the second user equipment and added position information of the at least one piece of augmented reality presentation information in the first video stream;
according to the added position information, positioning the first video stream based on the target anchor point, determining first position information of the at least one augmented reality presentation information relative to the target anchor point, and updating the first augmented reality data so that the first position information is included in the object data information;
wherein the method further comprises:
and providing the updated first augmented reality data to the second user equipment, so that the second user equipment superimposes and presents the at least one piece of augmented reality presentation information on the first video stream according to the first real-time pose information and the updated first augmented reality data.
27. The method of claim 26, wherein the method further comprises:
and providing the updated first augmented reality data to third user equipment, wherein the third user is not in the multi-user video scene, the target anchor point is identified in a camera live-action picture of the third user equipment according to the updated first augmented reality data, and the at least one piece of augmented reality presentation information is superimposed and presented on the camera live-action picture.
28. The method of claim 25, wherein the method further comprises:
providing the anchor point data information corresponding to the target anchor point to the second user equipment, so that the second user equipment performs positioning operation on the first video stream according to the anchor point data information, and determines first position information of the at least one augmented reality presentation information relative to the target anchor point;
and receiving first augmented reality data provided by the second user equipment, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information, and the object data information comprises the first position information.
29. A method for presenting augmented reality data, applied to a second user device, wherein the method comprises:
in a multi-user video scene of a first user and a second user, acquiring first real-time pose information, provided by first user equipment, of the first user equipment relative to a target anchor point in each current video frame of a first video stream corresponding to the first user;
obtaining first augmented reality data corresponding to at least one piece of augmented reality presentation information added by the second user on the first video stream, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information, and the object data information comprises first position information of the at least one piece of augmented reality presentation information relative to the target anchor point;
determining presentation position information of the at least one piece of augmented reality presentation information on a current video frame of the first video stream according to the first augmented reality data and the first real-time pose information;
and superposing and presenting the at least one augmented reality presentation information on the current video frame of the first video stream according to the presentation position information.
30. The method of claim 29, wherein the obtaining first augmented reality data corresponding to at least one augmented reality presentation information added by the second user on the first video stream comprises:
determining first augmented reality data corresponding to at least one piece of augmented reality presentation information and adding position information of the at least one piece of augmented reality presentation information in a first video stream in response to an adding operation performed by the second user on the first video stream, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information;
providing the first augmented reality data and the added position information to the first user equipment, so that the first user equipment performs positioning operation on the first video stream based on the target anchor point according to the added position information, determines first position information of the at least one augmented reality presentation information relative to the target anchor point, and updates the first augmented reality data so that the first position information is included in the object data information;
and receiving updated first augmented reality data provided by the first user equipment.
31. The method of claim 29, wherein the method further comprises:
acquiring anchor point data information corresponding to the target anchor point provided by the first user equipment;
wherein the obtaining the first augmented reality data corresponding to the at least one augmented reality presentation information added by the second user on the first video stream includes:
determining first augmented reality data corresponding to at least one piece of augmented reality presentation information in response to an adding operation performed by the second user on the first video stream, wherein the first augmented reality data comprises object data information corresponding to the at least one piece of augmented reality presentation information;
positioning the first video stream according to the anchor point data information, and determining first position information of the at least one augmented reality presentation information relative to the target anchor point so that the object data information comprises the first position information;
and providing the first augmented reality data to the first user equipment, so that the first user equipment superimposes and presents the at least one piece of augmented reality presentation information on the first video stream according to the first augmented reality data and the first real-time pose information.
32. The method of claim 31, wherein the method further comprises:
and providing the first augmented reality data to third user equipment, wherein the third user is not in the multi-person video scene, identifying the target anchor point in a camera live-action picture of the third user equipment according to the first augmented reality data, and overlaying and presenting the at least one augmented reality presentation information on the camera live-action picture.
33. A computer device for presenting augmented reality data, comprising a memory, a processor and a computer program stored on the memory, characterized in that the processor executes the computer program to implement the steps of the method of any one of claims 1 to 32.
34. A computer readable storage medium having stored thereon a computer program/instructions which, when executed by a processor, implement the steps of the method according to any one of claims 1 to 32.
35. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 32.
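Several of the claims above (for example claims 6, 7, 16, 17 and 21 to 24) have one device generate incremental data from an update operation while the other devices apply that delta to their stored copies of the first augmented reality data before re-rendering. The Python sketch below shows a minimal merge of such a delta; the delta layout with added/updated/removed lists is an assumption made here for illustration, not a format defined by the application.

```python
import copy

def apply_incremental_data(first_ar_data, incremental_data):
    """Merge a delta produced by another participant's update operation into a
    locally stored copy of the first augmented reality data. The delta layout
    (lists of added / updated / removed objects keyed by id) is an illustrative
    assumption and is not prescribed by this application."""
    merged = copy.deepcopy(first_ar_data)
    objects = {obj["id"]: obj for obj in merged.get("objects", [])}

    for obj in incremental_data.get("added", []):
        objects[obj["id"]] = obj                        # newly added presentation information
    for obj in incremental_data.get("updated", []):
        objects.setdefault(obj["id"], {}).update(obj)   # e.g. a moved or re-scaled object
    for obj_id in incremental_data.get("removed", []):
        objects.pop(obj_id, None)                       # presentation information deleted by the peer

    merged["objects"] = list(objects.values())
    return merged
```

Each receiving device would apply the delta to its stored copy and then re-run the pose-based overlay step, so first, second and third incremental data can all be handled by the same code path.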
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310674233.9A CN116664806A (en) | 2023-06-07 | 2023-06-07 | Method, device and medium for presenting augmented reality data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310674233.9A CN116664806A (en) | 2023-06-07 | 2023-06-07 | Method, device and medium for presenting augmented reality data |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116664806A true CN116664806A (en) | 2023-08-29 |
Family
ID=87723990
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310674233.9A Pending CN116664806A (en) | 2023-06-07 | 2023-06-07 | Method, device and medium for presenting augmented reality data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116664806A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||