CN117170504B - Method, system and storage medium for viewing with a person in a virtual character interaction scene - Google Patents

Method, system and storage medium for viewing with a person in a virtual character interaction scene

Info

Publication number
CN117170504B
CN117170504B
Authority
CN
China
Prior art keywords
user
client
coordinates
following
guided viewing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311439200.2A
Other languages
Chinese (zh)
Other versions
CN117170504A (en)
Inventor
Lin Hong (林红)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Weisaike Network Technology Co., Ltd.
Original Assignee
Nanjing Weisaike Network Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Weisaike Network Technology Co., Ltd.
Priority to CN202311439200.2A
Publication of CN117170504A
Application granted
Publication of CN117170504B
Legal status: Active

Abstract

The invention discloses a method, a system and a storage medium for viewing with a person in a virtual character interaction scene, belonging to the technical field of virtual reality. The method comprises the following steps: S1, a first client obtains a following instruction from user A and sends a following message to a second client, whereupon user B enters the following state; S2, the first client sends user A's position coordinates to the second client in real time, and user B automatically moves toward those coordinates; S3, the distance between user B and user A is calculated in real time, and user B exits the following state when the distance falls below a preset shortest distance; S4, the first client receives a guided-viewing instruction from user A and sends a guided-viewing message to the second client, whereupon user B enters the guided-viewing state; and S5, the coordinates of user A's view target are calculated, and user B's camera is turned to face the view target. By synchronizing movement and viewing angle between user A and user B, the invention makes the interaction between users more realistic.

Description

Method, system and storage medium for viewing with a person in a virtual character interaction scene
Technical Field
The invention relates to the technical field of virtual reality, and in particular to a method, a system and a storage medium for viewing with a person in a virtual character interaction scene.
Background
Virtual reality is a practical technology that emerged in the 20th century. A VR virtual digital person is a character model presented realistically in a computer: an avatar generated by one or more computers using artificial intelligence, virtual reality and related technologies, integrating the data and features of a human figure with a comprehensive presentation of human activity and information. A virtual digital person lets people communicate with a digital image as naturally as with a real person, completes interaction between the avatar and the real world through interactive means, and is more approachable.
At present, virtual reality technology is applied in many fields, and interaction requirements between users differ across scenes. For example, in scenes that require a guide, the guide must move continuously between exhibition-stand positions while explaining the exhibits. In the conventional interaction form, users follow the guide freely to view the exhibits, but a user may fail to follow in time, so that the user only hears the explanation without seeing the exhibit; or, when there are many exhibits, the viewing angle of a guided user may fail to locate the exhibit currently being introduced. The conventional interaction mode in which users actively follow therefore has a poor interaction effect and is not realistic enough.
Disclosure of Invention
The invention aims to solve the problem of poor interaction between a tour guide and other users in a virtual scene where the guide moves about while explaining exhibits, and provides a method, a system and a storage medium for viewing with a person in a virtual character interaction scene.
In a first aspect, the invention achieves the above object with a method for viewing with a person in a virtual character interaction scene, in which user A of a first client is defined as the followed person and user B of a second client as the follower, the method comprising the following steps:
step S1, the first client obtains a following instruction from user A and sends a following message to the second client, the following message being used by the second client to control user B to enter the following state;
step S2, the first client sends the position coordinates of user A to the second client in real time, and the second client controls user B to automatically move toward the position coordinates;
step S3, the second client calculates the distance between user B and user A in real time; when the distance is smaller than a preset shortest distance, the second client controls user B to exit the following state, and when the distance is larger than the preset shortest distance, steps S2 and S3 are executed repeatedly;
step S4, the first client receives a guided-viewing instruction from user A and sends a guided-viewing message to the second client, the guided-viewing message being used by the second client to control user B to enter the guided-viewing state;
and step S5, calculating the coordinates of user A's view target and sending the coordinates to the second client, the coordinates being used by the second client to control user B's camera to face the view target.
Preferably, the method by which the second client controls user B to automatically move toward the position coordinates comprises:
the second client receives the following message and bakes a NavMesh on the ground of the current virtual scene, the NavMesh being used to control user B to automatically move along the shortest path between user B and a target point, the target point being set to the position coordinates of user A.
Preferably, the preset shortest distance lies in the range 0.1-1 m, the length unit being the unit of length defined in the virtual scene.
Preferably, the method for calculating the coordinates of user A's view target comprises:
calculating the coordinates of the intersection of the diagonals of the virtual scene display interface;
emitting a ray toward those coordinates with user A's camera as the origin;
obtaining, through collision detection, the object hit by the ray as the view target;
and acquiring the coordinates of the point where the ray meets the object's surface as the coordinates of the view target.
Preferably, the method further comprises disabling user B's operation function for receiving movement instructions when user B enters the following state.
Preferably, the method further comprises disabling user B's operation function for receiving view-rotation instructions when user B enters the guided-viewing state.
In a second aspect, the invention achieves the above object with a system for viewing with a person in a virtual character interaction scene, the system comprising:
a following start unit, used for the first client to obtain a following instruction from user A and send a following message to the second client, the following message being used by the second client to control user B to enter the following state;
a movement control unit, used for the first client to send the position coordinates of user A to the second client in real time, the second client controlling user B to automatically move toward the position coordinates;
a following end unit, used for the second client to calculate the distance between user B and user A in real time, the second client controlling user B to exit the following state when the distance is smaller than the preset shortest distance, and the movement control unit and the following end unit being executed repeatedly when the distance is larger than the preset shortest distance;
a guided-viewing start unit, used for the first client to receive a guided-viewing instruction from user A and send a guided-viewing message to the second client, the guided-viewing message being used by the second client to control user B to enter the guided-viewing state;
a view synchronization unit, used for calculating the coordinates of user A's view target and sending them to the second client, the coordinates being used by the second client to control the camera of user B, who has exited the following state, to face the view target.
Preferably, the method by which the second client in the movement control unit controls user B to automatically move toward the position coordinates comprises:
the second client receives the following message and bakes a NavMesh on the ground of the current virtual scene, the NavMesh being used to control user B to automatically move along the shortest path between user B and a target point, the target point being set to the position coordinates of user A.
Preferably, the method for calculating the coordinates of user A's view target in the view synchronization unit comprises:
calculating the coordinates of the intersection of the diagonals of the virtual scene display interface;
emitting a ray toward those coordinates with user A's camera as the origin;
obtaining, through collision detection, the object hit by the ray as the view target;
and acquiring the coordinates of the point where the ray meets the object's surface as the coordinates of the view target.
In a third aspect, the invention achieves the above object with a storage medium on which a computer program is stored; when executed by a processor, the program implements the method for viewing with a person in a virtual character interaction scene described in the first aspect.
Compared with the prior art, the invention has the following beneficial effects: user B moves along with user A in the following state, enters the guided-viewing state when close to user A, and in that state synchronizes user A's viewing angle, so that what user B sees is consistent with user A's view. Synchronized movement and synchronized viewing angles make the interaction between users more realistic.
Drawings
Fig. 1 is a flow chart of the method of the present invention for viewing with a person in a virtual character interaction scene.
Fig. 2 is a schematic diagram of the components of the system of the present invention for viewing with a person in a virtual character interaction scene.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art on the basis of these embodiments without inventive effort fall within the scope of the invention.
Example 1
As shown in Fig. 1, a method for viewing with a person in a virtual character interaction scene defines user A of a first client as the followed person and user B of a second client as the follower. The method comprises the following steps:
Step S1: the first client obtains a following instruction from user A and sends a following message to the second client; the following message is used by the second client to control user B to enter the following state. The following instruction is issued by user A by clicking a follow button on the scene display interface. As shown in Fig. 2, the display interface shows user B's name, the following state and several buttons controlling the following. When user A clicks the follow button during interaction, user B receives a message inviting him to follow and enters the following state; after user A clicks the end-control button, user B receives an end message and leaves the following state. Here, entering and leaving the following state are controlled manually by user A; there is also a case in which leaving the following state is triggered automatically on user B's side, described in detail below. In the following state, user B automatically follows user A's movement, as described in step S2.
Step S2: the first client sends the position coordinates of user A to the second client in real time, and the second client controls user B to automatically move toward those coordinates. As noted in step S1, user B moves along with user A after entering the following state. Specifically, user A's position coordinates are synchronized to user B in real time and user B moves toward them in real time, which visually produces the effect of user B following user A. The method by which the second client controls user B to move automatically toward the position coordinates comprises:
the second client receives the following message and bakes a NavMesh on the ground of the current virtual scene; the NavMesh is used to control user B to move automatically along the shortest path between user B and a target point, and the target point is set to the position coordinates of user A. The NavMesh is the automatic pathfinding facility in Unity3D and computes an optimal path for navigation.
Step S3: the second client calculates the distance between user B and user A in real time; when the distance is smaller than the preset shortest distance, the second client controls user B to exit the following state, and when the distance is larger than the preset shortest distance, steps S2 and S3 are executed repeatedly. To prevent user B from overlapping user A's character model, which would look jarring from the user's viewing angle, the distance at which user B follows user A is limited by the preset shortest distance. When user B comes close to user A, user B exits the following state; if user A then moves, user B does not follow, and user B can freely rotate the viewing angle and move within a small range. Once user B's movement takes the distance beyond the shortest distance again, user B's client executes steps S2 and S3 once more, restoring the following state so that user B continues to follow user A's movement.
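A sketch of this per-frame distance check, continuing the hypothetical FollowController above; userAPosition holds the latest synchronized coordinate and minFollowDistance the preset shortest distance (both names assumed):

```csharp
// Fields assumed on the same hypothetical component:
// Vector3 userAPosition;   latest coordinate received from the first client
// float minFollowDistance; preset shortest distance, 0.1-1 in scene length units

void Update()
{
    float distance = Vector3.Distance(transform.position, userAPosition);

    if (following && distance < minFollowDistance)
    {
        following = false;   // close to user A: exit the following state (step S3)
        agent.ResetPath();   // stop the automatic movement
    }
    else if (!following && distance > minFollowDistance)
    {
        following = true;    // user B wandered off again: resume steps S2 and S3
        agent.SetDestination(userAPosition);
    }
}
```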
Step S4: the first client receives a guided-viewing instruction from user A and sends a guided-viewing message to the second client; the guided-viewing message is used by the second client to control user B to enter the guided-viewing state. Only when the distance between user B and user A is smaller than the shortest distance can the first client accept user A's guided-viewing instruction and send the guided-viewing message to the second client. After the second client receives the message, user B enters the guided-viewing state; in this state user B cannot control or move his own viewing angle, and what he sees is the picture or scene seen by user A.
Step S5: the coordinates of user A's view target are calculated and sent to the second client, which uses them to control user B's camera to face the view target. As established in step S4, once user B enters the guided-viewing state, user A's viewing angle is synchronized to user B; the synchronization works by sending the view-target coordinates from user A to user B, and after user B's camera receives the coordinates it rotates toward that position, so that the picture user B sees is consistent with the picture user A sees. There is more than one way to synchronize the view: for example, user A's camera rotation angle could be synchronized to user B, who would then rotate by the same angle. But that approach places too high a demand on where user B stands, because once user B is too far from user A, the content the two see differs even when the rotation angles of the two cameras are identical; and, as mentioned in step S3, user B can move freely after leaving the following state, so user B's position is not well constrained. Rotation-angle synchronization can therefore also roughly synchronize the viewing angle, but its accuracy is lower than that of the coordinate method of step S5, so it is generally not used.
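To make the contrast concrete, here is a hedged Unity C# sketch of the receiver side; the component and the network hook OnViewTarget are hypothetical, and the commented-out alternative shows the rejected rotation-angle synchronization:

```csharp
using UnityEngine;

// Runs on the second client (hypothetical component and hook names).
public class ViewSync : MonoBehaviour
{
    public Camera userBCamera; // user B's camera, assigned in the Inspector

    // Coordinate synchronization (step S5): works wherever user B stands.
    public void OnViewTarget(Vector3 viewTarget)
    {
        userBCamera.transform.LookAt(viewTarget); // rotate the camera toward the target point
    }

    // Rejected alternative: copying user A's camera rotation. With any positional
    // offset between A and B, the same rotation frames different content.
    // public void OnViewRotation(Quaternion userARotation)
    // {
    //     userBCamera.transform.rotation = userARotation;
    // }
}
```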
The shortest distance is preset and its range can be configured as required; the preset range is 0.1-1 m, where the unit of length is the unit defined in the virtual scene. If the exhibition stand has a large area and can accommodate many users at once, the shortest distance can be set to the maximum of 1 m to keep the many character models from overlapping; conversely, if the exhibition stand is small, the shortest distance can be set to the minimum of 0.1 m to keep excessive distance between users from causing deviation in the synchronized view targets.
As shown in step S5, synchronizing the view-target coordinates keeps the deviation between user B's viewing angle and user A's viewing angle small. To minimize this error, the method for calculating the coordinates of user A's view target comprises:
calculating the coordinates of the intersection of the diagonals of the virtual scene display interface, i.e. the center point of the picture seen from user A's viewing angle;
emitting a ray toward those coordinates with user A's camera as the origin, the ray being used for collision detection;
obtaining, through collision detection, the object hit by the ray as the view target, the hit object being the object the user is currently looking at;
and acquiring the coordinates of the point where the ray meets the object's surface as the coordinates of the view target. With this point at the center of user A's picture serving as the view target, even if there is some distance between the positions of user B and user A, the center of the object they see is the same, the sizes of their viewing-angle pictures are consistent, and the content they see is similar, so the view-target error obtained this way is minimal.
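As a sketch of this four-step calculation on the first client (Unity C#; the class and method names are hypothetical, and how the returned coordinate is transmitted to the second client is left out):

```csharp
using UnityEngine;

// Runs on the first client (hypothetical component).
public class ViewTargetProbe : MonoBehaviour
{
    public Camera userACamera; // user A's camera, assigned in the Inspector

    // Returns the view-target coordinate, or null if the ray hits nothing.
    public Vector3? ComputeViewTarget()
    {
        // Step 1: the intersection of the display interface's diagonals is the screen center.
        Vector3 screenCenter = new Vector3(Screen.width / 2f, Screen.height / 2f, 0f);

        // Step 2: emit a ray from user A's camera through that point.
        Ray ray = userACamera.ScreenPointToRay(screenCenter);

        // Steps 3 and 4: collision detection; the first object hit is the view target,
        // and the point where the ray meets its surface is the target coordinate.
        if (Physics.Raycast(ray, out RaycastHit hit))
            return hit.point;
        return null;
    }
}
```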
As noted in step S3, to prevent user B from interfering with the automatic movement while following user A, user B must not be able to move under manual control. The method therefore further comprises disabling user B's operation function for receiving movement instructions when user B enters the following state; with this function disabled, even if the user presses the on-screen movement buttons, user B still moves along the automatic path and the automatic-movement effect is preserved. Likewise, as noted in step S4, the view seen by user B in the guided-viewing state is user A's view; to keep the user from disturbing the view synchronization by moving the mouse, the method further comprises disabling user B's operation function for receiving view-rotation instructions when user B enters the guided-viewing state. Even if the user operates the on-screen view-rotation button, the picture user B sees remains user A's viewing-angle picture, and the view-synchronization effect is preserved.
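One simple way to realize this gating, again an assumed sketch rather than the patent's implementation, is to have the second client's input handlers check the current state before acting (flag and method names hypothetical):

```csharp
using UnityEngine;

// Input gating on the second client (hypothetical component).
public class InputGate : MonoBehaviour
{
    public bool following;      // set while user B is in the following state
    public bool guidedViewing;  // set while user B is in the guided-viewing state

    // Bound to the on-screen movement buttons.
    public void HandleMoveButton(Vector3 direction)
    {
        if (following) return; // movement instructions are ignored while following
        transform.Translate(direction * Time.deltaTime);
    }

    // Bound to the on-screen view-rotation control.
    public void HandleRotateView(float yawDelta)
    {
        if (guidedViewing) return; // view-rotation instructions are ignored while guided
        Camera.main.transform.Rotate(0f, yawDelta, 0f);
    }
}
```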
Example 2
As shown in Fig. 2, a system for viewing with a person in a virtual character interaction scene comprises:
A following start unit, used for the first client to obtain a following instruction from user A and send a following message to the second client, the following message being used by the second client to control user B to enter the following state; when user B enters the following state, user B's operation function for receiving movement instructions is disabled.
A movement control unit, used for the first client to send the position coordinates of user A to the second client in real time, the second client controlling user B to automatically move toward the position coordinates.
A following end unit, used for the second client to calculate the distance between user B and user A in real time; the second client controls user B to exit the following state when the distance is smaller than the preset shortest distance, and the movement control unit and the following end unit are executed repeatedly when the distance is larger than the preset shortest distance. The method by which the second client in the movement control unit controls user B to automatically move toward the position coordinates comprises:
the second client receives the following message and bakes a NavMesh on the ground of the current virtual scene, the NavMesh being used to control user B to automatically move along the shortest path between user B and a target point, the target point being set to the position coordinates of user A.
A guided-viewing start unit, used for the first client to receive a guided-viewing instruction from user A and send a guided-viewing message to the second client, the guided-viewing message being used by the second client to control user B to enter the guided-viewing state; when user B enters the guided-viewing state, user B's operation function for receiving view-rotation instructions is disabled.
A view synchronization unit, used for calculating the coordinates of user A's view target and sending them to the second client, which controls the camera of user B, who has exited the following state, to face the view target. The method for calculating the coordinates of user A's view target in the view synchronization unit comprises:
calculating the coordinates of the intersection of the diagonals of the virtual scene display interface;
emitting a ray toward those coordinates with user A's camera as the origin;
obtaining, through collision detection, the object hit by the ray as the view target;
and acquiring the coordinates of the point where the ray meets the object's surface as the coordinates of the view target.
Example 2 is essentially the same as Example 1, so the operating principle of each unit module is not described again.
Example 3
This embodiment provides a storage medium comprising a program storage area and a data storage area. The program storage area can store an operating system and the programs required to run functions such as instant messaging; the data storage area can store instant-messaging information, sets of operation instructions, and the like. A computer program is stored in the program storage area; when executed by a processor, it implements the method for viewing with a person in a virtual character interaction scene described in Example 1. The processor may comprise one or more central processing units (CPUs), a digital processing unit, or the like.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments and may be embodied in other specific forms without departing from its spirit or essential characteristics. The embodiments are therefore to be considered in all respects illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description; all changes falling within the meaning and range of equivalency of the claims are intended to be embraced therein.
Furthermore, although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution. This manner of description is adopted merely for clarity; the specification should be taken as a whole, and the technical solutions in the embodiments may be combined as appropriate to form other implementations understandable to those skilled in the art.

Claims (8)

1. A method for viewing with a person in a virtual character interaction scene, wherein user A of a first client is defined as the followed person and user B of a second client as the follower, the method comprising the following steps:
step S1, the first client obtains a following instruction from user A and sends a following message to the second client, the following message being used by the second client to control user B to enter the following state;
step S2, the first client sends the position coordinates of user A to the second client in real time, and the second client controls user B to automatically move toward the position coordinates;
step S3, the second client calculates the distance between user B and user A in real time; when the distance is smaller than a preset shortest distance, the second client controls user B to exit the following state, and when the distance is larger than the preset shortest distance, steps S2 and S3 are executed repeatedly;
step S4, the first client receives a guided-viewing instruction from user A and sends a guided-viewing message to the second client, the guided-viewing message being used by the second client to control user B to enter the guided-viewing state;
step S5, calculating the coordinates of user A's view target and sending the coordinates to the second client, the coordinates being used by the second client to control user B's camera to face the view target, wherein the method for calculating the coordinates of user A's view target comprises:
calculating the coordinates of the intersection of the diagonals of the virtual scene display interface;
emitting a ray toward those coordinates with user A's camera as the origin;
obtaining, through collision detection, the object hit by the ray as the view target;
and acquiring the coordinates of the point where the ray meets the object's surface as the coordinates of the view target.
2. The method for viewing with a person in a virtual character interaction scene of claim 1, wherein the method by which the second client controls user B to automatically move toward the position coordinates comprises:
the second client receives the following message and bakes a NavMesh on the ground of the current virtual scene, the NavMesh being used to control user B to automatically move along the shortest path between user B and a target point, the target point being set to the position coordinates of user A.
3. The method for viewing with a person in a virtual character interaction scene of claim 1, wherein the preset shortest distance lies in the range 0.1-1 m, the length unit being the unit of length defined in the virtual scene.
4. The method for viewing with a person in a virtual character interaction scene of claim 1, further comprising disabling user B's operation function for receiving movement instructions when user B enters the following state.
5. The method for viewing with a person in a virtual character interaction scene of claim 1, further comprising disabling user B's operation function for receiving view-rotation instructions when user B enters the guided-viewing state.
6. A system for viewing with a person in a virtual character interaction scene, the system comprising:
a following start unit, used for the first client to obtain a following instruction from user A and send a following message to the second client, the following message being used by the second client to control user B to enter the following state;
a movement control unit, used for the first client to send the position coordinates of user A to the second client in real time, the second client controlling user B to automatically move toward the position coordinates;
a following end unit, used for the second client to calculate the distance between user B and user A in real time, the second client controlling user B to exit the following state when the distance is smaller than the preset shortest distance, and the movement control unit and the following end unit being executed repeatedly when the distance is larger than the preset shortest distance;
a guided-viewing start unit, used for the first client to receive a guided-viewing instruction from user A and send a guided-viewing message to the second client, the guided-viewing message being used by the second client to control user B to enter the guided-viewing state;
a view synchronization unit, used for calculating the coordinates of user A's view target and sending them to the second client, the coordinates being used by the second client to control the camera of user B, who has exited the following state, to face the view target, wherein the method for calculating the coordinates of user A's view target in the view synchronization unit comprises:
calculating the coordinates of the intersection of the diagonals of the virtual scene display interface;
emitting a ray toward those coordinates with user A's camera as the origin;
obtaining, through collision detection, the object hit by the ray as the view target;
and acquiring the coordinates of the point where the ray meets the object's surface as the coordinates of the view target.
7. The system for viewing with a person in a virtual character interaction scene of claim 6, wherein the method by which the second client in the movement control unit controls user B to automatically move toward the position coordinates comprises:
the second client receives the following message and bakes a NavMesh on the ground of the current virtual scene, the NavMesh being used to control user B to automatically move along the shortest path between user B and a target point, the target point being set to the position coordinates of user A.
8. A storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method for viewing with a person in a virtual character interaction scene of any one of claims 1 to 5.
CN202311439200.2A 2023-11-01 2023-11-01 Method, system and storage medium for viewing with a person in a virtual character interaction scene Active CN117170504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311439200.2A CN117170504B (en) Method, system and storage medium for viewing with a person in a virtual character interaction scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311439200.2A CN117170504B (en) Method, system and storage medium for viewing with a person in a virtual character interaction scene

Publications (2)

Publication Number Publication Date
CN117170504A (en) 2023-12-05
CN117170504B (en) 2024-01-19

Family

ID=88937829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311439200.2A Active CN117170504B (en) Method, system and storage medium for viewing with a person in a virtual character interaction scene

Country Status (1)

Country Link
CN (1) CN117170504B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117679745B * 2024-02-01 2024-04-12 Nanjing Weisaike Network Technology Co., Ltd. Method, system and medium for controlling virtual character orientation through multi-angle dynamic detection

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109213834A (en) * 2017-06-29 2019-01-15 深圳市掌网科技股份有限公司 A kind of guidance method and system based on augmented reality
CN110689623A (en) * 2019-08-20 2020-01-14 重庆特斯联智慧科技股份有限公司 Tourist guide system and method based on augmented reality display
CN110732135A (en) * 2019-10-18 2020-01-31 腾讯科技(深圳)有限公司 Virtual scene display method and device, electronic equipment and storage medium
WO2020139409A1 (en) * 2018-12-27 2020-07-02 Facebook Technologies, Llc Virtual spaces, mixed reality spaces, and combined mixed reality spaces for improved interaction and collaboration
CN111881861A (en) * 2020-07-31 2020-11-03 北京市商汤科技开发有限公司 Display method, device, equipment and storage medium
CN111984114A (en) * 2020-07-20 2020-11-24 深圳盈天下视觉科技有限公司 Multi-person interaction system based on virtual space and multi-person interaction method thereof
CN112817453A (en) * 2021-01-29 2021-05-18 聚好看科技股份有限公司 Virtual reality equipment and sight following method of object in virtual reality scene
CN113181650A (en) * 2021-05-31 2021-07-30 腾讯科技(深圳)有限公司 Control method, device, equipment and storage medium for calling object in virtual scene
CN113608613A (en) * 2021-07-30 2021-11-05 建信金融科技有限责任公司 Virtual reality interaction method and device, electronic equipment and computer readable medium
CN115639976A (en) * 2022-10-28 2023-01-24 深圳市数聚能源科技有限公司 Multi-mode and multi-angle synchronous display method and system for virtual reality content
CN116051044A (en) * 2023-02-03 2023-05-02 南京维赛客网络科技有限公司 Online management method, system and storage medium for personnel in virtual scene

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9818225B2 (en) * 2014-09-30 2017-11-14 Sony Interactive Entertainment Inc. Synchronizing multiple head-mounted displays to a unified space and correlating movement of objects in the unified space
US20180059812A1 (en) * 2016-08-22 2018-03-01 Colopl, Inc. Method for providing virtual space, method for providing virtual experience, program and recording medium therefor
CN113168007B (en) * 2018-09-25 2023-09-15 奇跃公司 System and method for augmented reality
US11704874B2 (en) * 2019-08-07 2023-07-18 Magic Leap, Inc. Spatial instructions and guides in mixed reality

Also Published As

Publication number Publication date
CN117170504A (en) 2023-12-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant