CN114047824A - Method for interaction of multiple terminal users in virtual space - Google Patents

Method for interaction of multiple terminal users in virtual space

Info

Publication number
CN114047824A
CN114047824A (application CN202210034231.9A)
Authority
CN
China
Prior art keywords
user
virtual space
virtual
login
equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210034231.9A
Other languages
Chinese (zh)
Inventor
蒋丽娟
郑思遥
刘怀洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Seengene Technology Co ltd
Original Assignee
Beijing Seengene Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Seengene Technology Co ltd
Priority to CN202210034231.9A
Publication of CN114047824A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 - Indexing scheme relating to G06F3/01
    • G06F 2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The application relates to a method and a device for multi-end user interaction in a virtual space. The method comprises: after a user logs into a pre-constructed virtual space, creating an avatar of the user in the virtual space, and then determining the user's login mode. If the user logs in through a local login device, the virtual space is displayed in augmented reality form on that device, and the displayed content at least comprises the virtual scene elements and the avatars of users who logged into the virtual space through remote login devices. If the user logs in through a remote login device, the virtual space is displayed in virtual reality form on that device, and the displayed content at least comprises the virtual scene elements and the avatars of all users logged into the virtual space. An interaction instruction issued by the user to an interactive object in the virtual space is received and executed. The method and device thereby enable interaction between remotely logged-in users in the virtual scene and locally logged-in users in the real scene.

Description

Method for interaction of multiple terminal users in virtual space
Technical Field
The application relates to the technical field of virtual reality, and in particular to a method for multi-end user interaction in a virtual space.
Background
Augmented Reality (AR) technology superimposes virtual information on real space, so that a user can see corresponding information at specific positions in the real world through an AR device (e.g., an AR-capable mobile phone, a tablet, or AR glasses). For example, in a museum, 3D information about an exhibit can be viewed on the exhibit itself through an AR device; at a scenic spot, 3D introductions to the attractions can be viewed through an AR device; in an industrial environment, 3D readouts of equipment can be inspected through an AR device, giving an intuitive view of its internal operating state; in the construction industry, by combining BIM information with sensor big data, structural information inside a building can be viewed through an AR device. Virtual Reality (VR) technology, by contrast, constructs a purely virtual scene with which a user can interact through a VR device, but it cannot be associated with the real world. In summary, in the prior art, a user in a virtual scene cannot interact with a user in a real scene.
Disclosure of Invention
To overcome, at least to some extent, the problem in the related art that a user in a virtual scene cannot interact with a user in a real scene, the application provides a method for multi-end user interaction in a virtual space.
The solution of the application is as follows:
According to a first aspect of the embodiments of the present application, a method for multi-end user interaction in a virtual space is provided, comprising:
after a user logs into a pre-constructed virtual space, creating an avatar of the user in the virtual space; the virtual space is constructed according to the real scene where the local login device is located;
determining the login mode of the user;
if the user logs in through the local login device, displaying the virtual space in augmented reality form on the local login device, the displayed content at least comprising: virtual scene elements and the avatars of users who logged into the virtual space through a remote login device;
if the user logs in through the remote login device, displaying the virtual space in virtual reality form on the remote login device, the displayed content at least comprising: virtual scene elements and the avatars of all users logged into the virtual space;
receiving an interaction instruction issued by the user to an interactive object in the virtual space; the interactive object at least comprises: virtual scene elements and the avatars of other users logged into the virtual space;
and executing the interaction instruction.
Preferably, in one implementation of the present application, the method further comprises:
acquiring image data of the real scene where the local login device is located;
generating a virtual map from the image data of the real scene where the local login device is located;
receiving an operation instruction input by a developer in the virtual map;
and constructing the virtual space on the basis of the virtual map according to the operation instruction.
Preferably, in one implementation of the present application, the method further comprises:
acquiring image data of the user through the user's login device;
visually positioning the user's login device according to the user's image data and the virtual map;
and aligning the coordinate system of every user with the coordinate system of the virtual space.
Preferably, in one implementation of the present application, the method further comprises:
driving the user's avatar in the virtual space synchronously, according to the user's image data, to perform the corresponding actions.
Preferably, in one implementation of the present application, the interaction instructions at least comprise:
voice interaction instructions and action interaction instructions.
Preferably, in one implementation of the present application, the method further comprises:
if the user logs in through the remote login device, displaying, through the remote login device and based on the perspective mode selected by the user, the first-person or third-person avatar corresponding to the user.
According to a second aspect of the embodiments of the present application, an apparatus for multi-end user interaction in a virtual space is provided, comprising:
a processor and a memory;
the processor and the memory are connected through a communication bus;
the processor is configured to call and execute a program stored in the memory;
and the memory is configured to store a program at least for performing the method for multi-end user interaction in a virtual space described above.
The technical solution provided by the application can have the following beneficial effects. The method for multi-end user interaction in a virtual space comprises: after a user logs into the pre-constructed virtual space, an avatar of the user is created in the virtual space. The user's login mode is then determined. If the user logs in through a local login device, the virtual space is displayed in augmented reality form on that device, the displayed content at least comprising the virtual scene elements and the avatars of users who logged into the virtual space through remote login devices; if the user logs in through a remote login device, the virtual space is displayed in virtual reality form on that device, the displayed content at least comprising the virtual scene elements and the avatars of all users logged into the virtual space. Because the virtual space is constructed according to the scene where the local login device is located, a locally logged-in user can see not only the real appearance of other locally logged-in users but also the avatars of remotely logged-in users in the virtual space, while a remotely logged-in user can see the avatars of all users logged into the virtual space. The application also receives an interaction instruction issued by a user to an interactive object in the virtual space and executes it. The interactive object at least comprises the virtual scene elements in the virtual space and the avatars of other users logged into the virtual space; that is, users can interact both with the virtual scene elements and with the other users logged into the virtual space.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flowchart of a method for multi-end user interaction in a virtual space according to an embodiment of the present application;
FIG. 2 is a diagram illustrating an example of the method for multi-end user interaction in a virtual space according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an apparatus for multi-end user interaction in a virtual space according to an embodiment of the present application.
Reference numerals: a processor-21; a memory-22.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Example one
A method for multi-end user interaction in a virtual space, referring to fig. 1, comprises:
S11: after a user logs into a pre-constructed virtual space, create an avatar of the user in the virtual space; the virtual space is constructed according to the scene where the local login device is located;
the virtual space in this embodiment is constructed based on a pure visual three-dimensional reconstruction process, which includes the following specific processes:
acquiring image data of a real scene where local login equipment is located;
generating a virtual map according to image data of a real scene where the local login equipment is located;
receiving an operation instruction input in the virtual map by a developer;
and constructing a virtual space on the basis of the virtual map according to the operation instruction.
Preferably, the image data of the scene where the local login device is located consists of panoramic images of that scene, and the virtual map is generated from this image data as follows:
extracting local features and global features from the panoramic images;
matching the panoramic images using the global features;
performing incremental structure-from-motion three-dimensional reconstruction (sparse reconstruction) on the panoramic images;
performing dense reconstruction on top of the sparse reconstruction to obtain a dense point cloud;
performing mesh reconstruction on the dense point cloud to obtain a mesh;
and texturing the mesh using the original images and the reconstructed camera poses to obtain a textured mesh model.
Mapping yields two products: a sparse point cloud map and a textured mesh model. The sparse point cloud map is used for positioning; it contains the 3D points describing the space, the keyframes, and the keyframes' feature points, and a visual positioning algorithm can use it for high-precision six-degree-of-freedom localization. The textured mesh model serves as the developer's reference to the corresponding real-world positions. The coordinate systems of the two maps are unified, so the coordinates at which a developer places virtual information against the mesh model also correspond to positions in real-world space. A hedged sketch of one possible realization of this mapping pipeline follows.
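For concreteness only: the sketch below shows one way the pipeline above could be realized with off-the-shelf tools, assuming the COLMAP Python bindings (pycolmap) for feature extraction, matching, and sparse/dense reconstruction, and Open3D for meshing. The patent does not name any library; all paths and parameters here are illustrative assumptions.

```python
# Hedged sketch of the mapping pipeline above. Library choice (pycolmap,
# open3d), paths, and parameters are illustrative assumptions; the patent
# does not prescribe an implementation. Dense stereo requires a CUDA build.
from pathlib import Path

import open3d as o3d
import pycolmap

image_dir = Path("panoramas")          # panoramic images of the real scene
work_dir = Path("reconstruction")
work_dir.mkdir(exist_ok=True)
database = work_dir / "database.db"
mvs_dir = work_dir / "mvs"

# Extract local features and match images. (In the described pipeline a
# global descriptor shortlists candidate pairs; exhaustive matching stands
# in for that step here.)
pycolmap.extract_features(database, image_dir)
pycolmap.match_exhaustive(database)

# Incremental structure from motion: produces the sparse point cloud map
# (3D points, keyframes, and their feature points) later used for visual
# positioning.
maps = pycolmap.incremental_mapping(database, image_dir, work_dir)
maps[0].write(work_dir)

# Dense reconstruction on top of the sparse model.
pycolmap.undistort_images(mvs_dir, work_dir, image_dir)
pycolmap.patch_match_stereo(mvs_dir)
pycolmap.stereo_fusion(mvs_dir / "fused.ply", mvs_dir)

# Mesh the dense point cloud; texturing with the original images and the
# reconstructed camera poses then yields the textured mesh model.
pcd = o3d.io.read_point_cloud(str(mvs_dir / "fused.ply"))
pcd.estimate_normals()
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh(str(work_dir / "scene_mesh.ply"), mesh)
```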
When creating content, a developer can arrange virtual information in the coordinate system of the mesh model, using the mesh model map as a reference, and carry out 3D spatial interaction development through the same process as ordinary 3D application development.
S12: determine the user's login mode;
The user's login modes comprise at least two types: login through a local login device and login through a remote login device.
The local login device is located in the real scene, and the virtual scene is pre-constructed based on the real scene where the local login device is located. The user experiences the virtual space in the real scene with a handheld or worn local login device.
The remote login device can be any device capable of remotely accessing the virtual scene, such as a mobile phone, a computer, or a VR or Mixed Reality (MR) head-mounted display. The user may experience the virtual space, for example at home, with a handheld or worn remote login device.
S13: if the user logs in through the local login device, display the virtual space in augmented reality form on the local login device, the displayed content at least comprising: the virtual scene elements and the avatars of users who logged into the virtual space through remote login devices;
S14: if the user logs in through the remote login device, display the virtual space in virtual reality form on the remote login device, the displayed content at least comprising: the virtual scene elements and the avatars of all users logged into the virtual space;
That is, in this embodiment, if the user logs in through a local login device, the virtual space is displayed to the user in AR form on that device, and the user can see his or her own real appearance, the current real scene, the real appearance of other users in that scene, the virtual scene elements in the virtual space, and the avatars of users who logged into the virtual space through remote login devices. If the user logs in through a remote login device, the virtual space is displayed to the user in VR form on that device, and the user can see his or her own avatar, the virtual scene elements in the virtual space, and the avatars of all users logged into the virtual space.
The virtual scene elements are the virtual information content arranged and placed by developers in the coordinate system of the mesh model, such as virtual boxes, virtual plants, and virtual animals placed in the virtual space.
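The S13/S14 branching can be made concrete with a short sketch; the class and function names below are assumptions introduced for illustration, not identifiers from the patent.

```python
# Minimal sketch of the S13/S14 display branching. All names are
# illustrative assumptions.
from dataclasses import dataclass
from enum import Enum, auto

class LoginMode(Enum):
    LOCAL = auto()   # device is physically in the mapped real scene -> AR
    REMOTE = auto()  # device accesses the virtual space over the network -> VR

@dataclass
class User:
    uid: str
    mode: LoginMode

def visible_avatars(viewer: User, all_users: list) -> list:
    """Return the users whose avatars the viewer's device must render."""
    if viewer.mode is LoginMode.LOCAL:
        # AR: the real scene and co-located users are seen directly through
        # the camera/optics, so only remote users' avatars are drawn.
        return [u for u in all_users if u.mode is LoginMode.REMOTE]
    # VR: the whole space is rendered, including every logged-in user's avatar.
    return [u for u in all_users if u.uid != viewer.uid]

users = [User("U1", LoginMode.LOCAL), User("U4", LoginMode.REMOTE),
         User("U5", LoginMode.REMOTE)]
print([u.uid for u in visible_avatars(users[0], users)])  # ['U4', 'U5']
```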
S15: receive an interaction instruction issued by the user to an interactive object in the virtual space; the interactive object at least comprises: the virtual scene elements in the virtual space and the avatars of other users logged into the virtual space;
The interaction instructions at least comprise: voice interaction instructions and action interaction instructions.
In this embodiment, users logged in through local login devices and users logged in through remote login devices can see each other, and can interact through screen taps, touch gestures, MR headset controller rays, gesture recognition, and the like.
Supported interactions with other users in the virtual space include, but are not limited to:
voice, text, and video chat;
interactive actions performed together in the virtual space, such as handshakes, grabs, and hugs.
Supported interactions with virtual scene elements in the virtual space include, but are not limited to:
clicking a button in the scene to trigger an event;
clicking a video in the scene to control its playback;
and clicking a model in the scene to control a change of the model's state.
S16: execute the interaction instruction.
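To illustrate S15 and S16 together, the sketch below dispatches received interaction instructions to handlers before executing them. The instruction schema and handler names are assumptions made for illustration; the patent does not define a message format.

```python
# Hedged sketch of interaction-instruction dispatch (S15/S16). The schema
# {"type", "target", "payload"} is an assumption, not defined by the patent.
from typing import Any, Callable, Dict

def handle_voice(target: str, payload: Any) -> None:
    print(f"relay voice/text/video to {target}: {payload}")

def handle_action(target: str, payload: Any) -> None:
    print(f"play interactive action '{payload}' with {target}")  # handshake, hug, ...

def handle_click(target: str, payload: Any) -> None:
    print(f"trigger event on scene element {target}")  # button, video, model

HANDLERS: Dict[str, Callable[[str, Any], None]] = {
    "voice": handle_voice,
    "action": handle_action,
    "click": handle_click,
}

def execute_instruction(instruction: Dict[str, Any]) -> None:
    """Look up the instruction type and execute it on the interactive object."""
    HANDLERS[instruction["type"]](instruction["target"], instruction.get("payload"))

# Example: a remote user clicks the virtual box shown in fig. 2.
execute_instruction({"type": "click", "target": "virtual_box"})
```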
The following description uses fig. 2 as a concrete example:
The left side of fig. 2 shows the offline scene accessed by users logged in through local login devices, with coordinate system xyz; the right side shows the online scene accessed by users logged in through remote login devices, with coordinate system x'y'z'; the two coordinate systems correspond to each other. U1, U2, and U3 are users logged in through local login devices, hereinafter called local users; U4 and U5 are users logged in through remote login devices, hereinafter called remote users.
In this example, local users U1, U2, and U3 log into the virtual space through local login devices and experience it on site. Remote users U4 and U5 log into the virtual space from elsewhere through remote login devices and control their avatars U4' and U5' to roam the scene. The square box in the scene is a virtual scene element superimposed on the scene. On the left side of fig. 2, the local users U1, U2, and U3 can see, through their local login devices, the avatars U4' and U5' of the remote users U4 and U5 in the scene, as well as the square box. The remote user U4 can see the avatars U1', U2', and U3' of the local users U1, U2, and U3 at the corresponding positions in the virtual scene on the right side of fig. 2, as well as the avatar of the other remote user U5 and the square box.
The method for multi-end user interaction in a virtual space in this embodiment comprises: after a user logs into the pre-constructed virtual space, an avatar of the user is created in the virtual space. The user's login mode is then determined. If the user logs in through a local login device, the virtual space is displayed in augmented reality form on that device, the displayed content at least comprising the virtual scene elements and the avatars of users who logged into the virtual space through remote login devices; if the user logs in through a remote login device, the virtual space is displayed in virtual reality form on that device, the displayed content at least comprising the virtual scene elements and the avatars of all users logged into the virtual space. Because the virtual space is constructed according to the scene where the local login device is located, a locally logged-in user can see not only the real appearance of other locally logged-in users but also the avatars of remotely logged-in users in the virtual space, while a remotely logged-in user can see the avatars of all users logged into the virtual space. In this embodiment, an interaction instruction issued by a user to an interactive object in the virtual space is also received and executed. The interactive object at least comprises the virtual scene elements in the virtual space and the avatars of other users logged into the virtual space; that is, users can interact with the virtual scene elements and with the other users logged into the virtual space.
Example two
In some embodiments, the method for multi-end user interaction in a virtual space further comprises:
acquiring image data of the user through the user's login device;
visually positioning the user's login device according to the user's image data and the virtual map;
and aligning the coordinate system of every user with the coordinate system of the virtual space.
Preferably, in this embodiment, the user's image data is acquired in real time by the user's login device.
The user's login device must therefore be equipped with an image acquisition component, such as a camera, to capture the user's image information.
In this embodiment, after a user logs into the virtual space, the user's image data is acquired by the login device. The server can visually position the login device against the user's image data and the virtual map used to construct the virtual space. Once the login device obtains a high-precision position, its coordinate system can be aligned with the coordinate systems of the other users and of the virtual space; after alignment, local users and remote users share the same virtual space and can interact in it.
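As a worked illustration of the alignment step, the sketch below assumes (the patent does not fix a pose representation) that poses are exchanged as 4x4 homogeneous matrices: one pose from visual positioning against the sparse point cloud map, one from the device's on-board tracking.

```python
# Hedged sketch of coordinate-system alignment after visual positioning.
# The 4x4 homogeneous-matrix pose representation is an assumption.
import numpy as np

def alignment_transform(map_T_device: np.ndarray,
                        local_T_device: np.ndarray) -> np.ndarray:
    """Return map_T_local, which re-expresses a device's local tracking
    frame in the shared virtual-space (map) frame."""
    return map_T_device @ np.linalg.inv(local_T_device)

def to_map_frame(map_T_local: np.ndarray, p_local: np.ndarray) -> np.ndarray:
    """Transform a 3D point from a user's local frame into the shared frame."""
    p = np.append(p_local, 1.0)      # homogeneous coordinates
    return (map_T_local @ p)[:3]

# Once map_T_local is known for every user, all avatars and scene elements
# live in one common coordinate system, so local and remote users meet in
# the same virtual space.
```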
Furthermore, in this implementation, the user's avatar in the virtual space is driven synchronously, according to the user's image data, to perform the corresponding actions, so the user can drive the avatar through body movements and roam freely in the virtual space.
EXAMPLE III
In some embodiments, the method for multi-end user interaction in a virtual space further comprises:
if the user logs in through a remote login device, displaying, through the remote login device and based on the perspective mode selected by the user, the first-person or third-person avatar corresponding to the user.
In this embodiment, a user who logs in through a local login device appears in the virtual space in his or her physical form, while a user who logs in through a remote login device appears as an avatar and can then select a viewing perspective. The selectable perspectives comprise a first-person perspective and a third-person perspective. Based on the perspective mode selected by the user, the remote login device displays the corresponding first-person or third-person avatar to the user.
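A small sketch of how a remote client might place its render camera for the two perspective modes; the offsets and names are illustrative assumptions, not part of the patent.

```python
# Hedged sketch of first-/third-person camera placement. Offsets are
# illustrative assumptions.
import numpy as np

EYE_OFFSET = np.array([0.0, 1.6, 0.0])   # assumed avatar eye height (m)
FOLLOW_DISTANCE = 3.0                     # assumed third-person follow distance
FOLLOW_HEIGHT = 1.5                       # assumed third-person height offset

def camera_position(avatar_pos: np.ndarray, forward: np.ndarray,
                    mode: str) -> np.ndarray:
    """Place the render camera for the selected perspective mode."""
    if mode == "first_person":
        # Camera sits at the avatar's eyes; the avatar body itself is hidden.
        return avatar_pos + EYE_OFFSET
    if mode == "third_person":
        # Camera trails behind and above the avatar so the avatar stays in view.
        return avatar_pos - FOLLOW_DISTANCE * forward + np.array([0.0, FOLLOW_HEIGHT, 0.0])
    raise ValueError(f"unknown perspective mode: {mode}")

print(camera_position(np.zeros(3), np.array([0.0, 0.0, 1.0]), "third_person"))
```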
Example four
An apparatus for multi-end user interaction in a virtual space, referring to fig. 3, comprises:
a processor 21 and a memory 22;
the processor 21 is connected to the memory 22 through a communication bus;
the processor 21 is configured to call and execute a program stored in the memory 22;
and the memory 22 is configured to store a program at least for performing the method of multi-end user interaction in a virtual space of any of the above embodiments.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present application, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be implemented by program instructions executed on the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (7)

1. A method for multi-end user interaction in a virtual space, characterized by comprising:
after a user logs into a pre-constructed virtual space, creating an avatar of the user in the virtual space; the virtual space is constructed according to the real scene where a local login device is located;
determining the login mode of the user;
if the user logs in through the local login device, displaying the virtual space in augmented reality form on the local login device, the displayed content at least comprising: virtual scene elements and the avatars of users who logged into the virtual space through a remote login device;
if the user logs in through the remote login device, displaying the virtual space in virtual reality form on the remote login device, the displayed content at least comprising: virtual scene elements and the avatars of all users logged into the virtual space;
receiving an interaction instruction issued by the user to an interactive object in the virtual space; the interactive object at least comprises: virtual scene elements and the avatars of other users logged into the virtual space;
and executing the interaction instruction.
2. The method of claim 1, further comprising:
acquiring image data of the real scene where the local login device is located;
generating a virtual map from the image data of the real scene where the local login device is located;
receiving an operation instruction input by a developer in the virtual map;
and constructing the virtual space on the basis of the virtual map according to the operation instruction.
3. The method of claim 2, further comprising:
acquiring image data of the user through the user's login device;
visually positioning the user's login device according to the user's image data and the virtual map;
and aligning the coordinate system of every user with the coordinate system of the virtual space.
4. The method of claim 3, further comprising:
driving the user's avatar in the virtual space synchronously, according to the user's image data, to perform the corresponding actions.
5. The method according to claim 1, characterized in that the interaction instructions at least comprise:
voice interaction instructions and action interaction instructions.
6. The method of claim 1, further comprising:
if the user logs in through the remote login device, displaying, through the remote login device and based on the perspective mode selected by the user, the first-person or third-person avatar corresponding to the user.
7. An apparatus for multi-end user interaction in a virtual space, comprising:
a processor and a memory;
the processor and the memory are connected through a communication bus;
the processor is configured to call and execute a program stored in the memory;
and the memory is configured to store a program at least for performing the method for multi-end user interaction in a virtual space of any one of claims 1 to 6.
CN202210034231.9A 2022-01-13 2022-01-13 Method for interaction of multiple terminal users in virtual space Pending CN114047824A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210034231.9A CN114047824A (en) 2022-01-13 2022-01-13 Method for interaction of multiple terminal users in virtual space

Publications (1)

Publication Number Publication Date
CN114047824A (en) 2022-02-15

Family

ID=80196414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210034231.9A Pending CN114047824A (en) 2022-01-13 2022-01-13 Method for interaction of multiple terminal users in virtual space

Country Status (1)

Country Link
CN (1) CN114047824A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008106196A1 (en) * 2007-03-01 2008-09-04 Sony Computer Entertainment America Inc. Virtual world avatar control, interactivity and communication interactive messaging
CN105915849A (en) * 2016-05-09 2016-08-31 惠州Tcl移动通信有限公司 Virtual reality sports event play method and system
US20190099678A1 (en) * 2017-09-29 2019-04-04 Sony Interactive Entertainment America Llc Virtual Reality Presentation of Real World Space
CN108200010A (en) * 2017-12-11 2018-06-22 机械工业第六设计研究院有限公司 The data interactive method of virtual scene and real scene, device, terminal and system
CN111526118A (en) * 2019-10-29 2020-08-11 南京翱翔信息物理融合创新研究院有限公司 Remote operation guiding system and method based on mixed reality
CN112492231A (en) * 2020-11-02 2021-03-12 重庆创通联智物联网有限公司 Remote interaction method, device, electronic equipment and computer readable storage medium
CN113099204A (en) * 2021-04-13 2021-07-09 北京航空航天大学青岛研究院 Remote live-action augmented reality method based on VR head-mounted display equipment
CN113313840A (en) * 2021-06-15 2021-08-27 周永奇 Real-time virtual system and real-time virtual interaction method

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114650265A (en) * 2022-02-16 2022-06-21 浙江毫微米科技有限公司 Information processing method, information processing device, electronic equipment and storage medium
CN114650265B (en) * 2022-02-16 2024-02-09 浙江毫微米科技有限公司 Information processing method, information processing device, electronic equipment and storage medium
WO2023155394A1 (en) * 2022-02-18 2023-08-24 深圳市慧鲤科技有限公司 Virtual space fusion method and related apparatus, electronic device, medium, and program
CN114627270A (en) * 2022-03-16 2022-06-14 深圳市博乐信息技术有限公司 Virtual space sharing method and system based on AR/VR technology
CN114385934A (en) * 2022-03-23 2022-04-22 北京悉见科技有限公司 System for jointly inquiring multiple AR maps
CN114926614A (en) * 2022-07-14 2022-08-19 北京奇岱松科技有限公司 Information interaction system based on virtual world and real world
CN114926614B (en) * 2022-07-14 2022-10-25 北京奇岱松科技有限公司 Information interaction system based on virtual world and real world

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20220215)