CN112684883A - Method and system for multi-user object distinguishing processing - Google Patents

Method and system for multi-user object distinguishing processing

Info

Publication number
CN112684883A
CN112684883A
Authority
CN
China
Prior art keywords
virtual object
interactive virtual
users
interactive
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011510566.0A
Other languages
Chinese (zh)
Inventor
胡金鑫
孙立
刘晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Shadow Creator Information Technology Co Ltd
Original Assignee
Shanghai Shadow Creator Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Shadow Creator Information Technology Co Ltd filed Critical Shanghai Shadow Creator Information Technology Co Ltd
Priority to CN202011510566.0A priority Critical patent/CN112684883A/en
Publication of CN112684883A publication Critical patent/CN112684883A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a method and a system for multi-user object distinguishing processing, comprising the following steps: defining each virtual object in a virtual environment as either a non-interactive virtual object or an interactive virtual object, wherein the virtual environment is displayed simultaneously to a plurality of users through a plurality of VR glasses, and a user is a wearer of VR glasses; for a non-interactive virtual object, displaying images of the object from the same viewing angle to the plurality of users; and for an interactive virtual object, displaying to each of the plurality of users a different image of the object from that user's own viewing angle. Without affecting interaction or multi-user multi-view display, the invention greatly reduces the computing resources required for all users.

Description

Method and system for multi-user object distinguishing processing
Technical Field
The invention relates to the field of VR glasses, and in particular to a method and a system for multi-user object distinguishing processing.
Background
Patent document CN109819236A discloses a multi-user VR live-broadcast system based on the binocular video of an unmanned aerial vehicle, comprising a camera driver module, a video encoding module, a wireless image-transmission module, a streaming-media distribution server, a VR video playback module, and a VR control server. The camera driver module provides the data source for the video encoding module; the video encoding module sends the video data in its output buffer in real time over the RTP protocol; the wireless image-transmission module provides a point-to-point local-area-network environment for transmitting the video data; the streaming-media distribution server contains units for receiving and forwarding the video data; the VR video playback module mainly receives, decodes, and renders the video data; and the VR control server maintains a long-lived heartbeat connection with the server, detects abnormal network connections between server and device, pushes a disconnection alarm message, and clears the alarm once data packets are received again. The system is convenient, interactive, realistic, and supports multiple users.
However, the drawback of patent document CN109819236A is that all users share a single viewing angle, namely the images captured by the binocular camera of the unmanned aerial vehicle, so there is no difference between the content seen by the individual users.
Patent document CN108965858B discloses a VR-capable multi-view stereoscopic video multi-user access control method and device. After receiving a join-request message from a mobile user terminal, the server sends a join-response message. At each preset time interval, the server uses Stackelberg game theory to analyze all users, based on the satisfaction gain, field-of-view size, and upper limit on detail-coding complexity carried in each terminal's join-request message, and thereby determines which target users are admitted to the multi-view video service, the unit bandwidth price for each user, and the detail-coding complexity allocated to each target user. It then sends a bandwidth-price message to all mobile user terminals, and each target terminal receives the multi-view video service according to the computed result. That invention improves network transmission efficiency and maximizes the revenue of the overall network resources.
However, the drawback of patent document CN108965858B is that it requires non-technical parameters such as each user's unit bandwidth price. These parameters belong to the agreement between the user and the broadband operator and are generally not disclosed, so in practice it is difficult for VR glasses manufacturers to obtain such non-transparent data; moreover, performing a decision calculation for every user consumes substantial computing resources.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide a method and a system for multi-user object distinguishing processing.
The method for multi-user object distinguishing processing provided by the invention comprises the following steps:
an interactivity defining step: defining a virtual object in a virtual environment as either a non-interactive virtual object or an interactive virtual object, wherein the virtual environment is displayed simultaneously to a plurality of users through a plurality of VR glasses, and a user is a wearer of VR glasses;
a non-interactive virtual object processing step: for a non-interactive virtual object, displaying images of the object from the same viewing angle to the plurality of users;
an interactive virtual object processing step: for an interactive virtual object, displaying to each of the plurality of users a different image of the object from that user's own viewing angle.
Preferably, in the non-interactive virtual object processing step, images of a non-interactive virtual object from the same viewing angle and at the same viewing distance are displayed to the plurality of users.
Preferably, a virtual object whose observation distance is greater than or equal to a distance threshold is defined as a non-interactive virtual object, and a virtual object whose observation distance is smaller than the distance threshold is defined as an interactive virtual object.
Preferably, all virtual objects are initially defined as non-interactive virtual objects, and a non-interactive virtual object is redefined as an interactive virtual object once it is interacted with by any one of the plurality of users.
Preferably, if an interactive virtual object goes without interaction from any of the plurality of users for a time greater than or equal to a time threshold, it is redefined as a non-interactive virtual object until any user interacts with it again.
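The three steps above can be sketched as follows. This is an illustrative sketch only; the class and function names are assumptions for exposition and do not come from the patent.

```python
from dataclasses import dataclass


@dataclass
class VirtualObject:
    name: str
    interactive: bool  # set in the interactivity defining step


def render_for_users(obj: VirtualObject, user_views: list) -> list:
    """Return one image descriptor per user.

    A non-interactive object is rendered once from a shared viewing angle
    and that single image is shown to every user; an interactive object is
    rendered separately from each user's own viewing angle.
    """
    if not obj.interactive:
        shared_image = f"{obj.name}@shared-view"  # rendered once, reused
        return [shared_image] * len(user_views)
    return [f"{obj.name}@{view}" for view in user_views]


views = ["cam-A", "cam-B", "cam-C"]
# All users receive the same image of the distant mountain:
print(render_for_users(VirtualObject("mountain", False), views))
# Each user receives a per-view image of the nearby dinner plate:
print(render_for_users(VirtualObject("dinner-plate", True), views))
```

Under this sketch, the non-interactive branch does the rendering work once regardless of the number of users, which is the source of the resource saving claimed by the method.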
The system for multi-user object distinguishing processing provided by the invention comprises:
an interactivity definition module: defining a virtual object in a virtual environment as either a non-interactive virtual object or an interactive virtual object, wherein the virtual environment is displayed simultaneously to a plurality of users through a plurality of VR glasses, and a user is a wearer of VR glasses;
a non-interactive virtual object processing module: for a non-interactive virtual object, displaying images of the object from the same viewing angle to the plurality of users;
an interactive virtual object processing module: for an interactive virtual object, displaying to each of the plurality of users a different image of the object from that user's own viewing angle.
Preferably, in the non-interactive virtual object processing module, images of a non-interactive virtual object from the same viewing angle and at the same viewing distance are displayed to the plurality of users.
Preferably, a virtual object whose observation distance is greater than or equal to a distance threshold is defined as a non-interactive virtual object, and a virtual object whose observation distance is smaller than the distance threshold is defined as an interactive virtual object.
Preferably, all virtual objects are initially defined as non-interactive virtual objects, and a non-interactive virtual object is redefined as an interactive virtual object once it is interacted with by any one of the plurality of users.
Preferably, if an interactive virtual object goes without interaction from any of the plurality of users for a time greater than or equal to a time threshold, it is redefined as a non-interactive virtual object until any user interacts with it again.
Compared with the prior art, the invention has the following beneficial effects:
the invention divides the virtual object into interactive virtual object and non-interactive virtual object, and fixes the visual angle and distance of non-interactive virtual object, which saves a lot of computing resource for drawing virtual object in different visual angles, and at the same time, the invention can present images in different visual angles and distances to each user, and greatly reduces the needed computer computing resource without influencing interaction and multi-user multi-visual angle.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flow chart of the steps of the method provided by the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that those skilled in the art can make various changes and modifications without departing from the spirit of the invention, all of which fall within the scope of the present invention.
The method for multi-user object distinguishing processing provided by the invention comprises the following steps:
an interactivity defining step: defining a virtual object in a virtual environment as either a non-interactive virtual object or an interactive virtual object, wherein the virtual environment is displayed simultaneously to a plurality of users through a plurality of VR glasses, and a user is a wearer of VR glasses. Specifically, the virtual environment is an interactive scene in which each of the plurality of users has a corresponding virtual camera, so the users' viewing angles differ; under the control of its user, each virtual camera can move to different positions and orientation angles, producing different viewing angles. In the interactive scene, the virtual objects can be captured by all the virtual cameras. For example, in an interactive scene of an outdoor picnic, each user can observe virtual objects such as picnic mats, dinner plates, food, and nearby flowers, as well as distant mountains, forests, and herds of deer; after one user takes a virtual bite of the food, a bite mark appears on the virtual food, and that mark is visible to the other users.
a non-interactive virtual object processing step: for a non-interactive virtual object, displaying images of the object from the same viewing angle to the plurality of users. In a preferred example, images of a non-interactive virtual object from the same viewing angle and at the same viewing distance are displayed to the plurality of users. Specifically, for all users the virtual objects are divided into interactive virtual objects and non-interactive virtual objects, and the observable viewing angle and distance of each non-interactive virtual object are fixed, saving a large amount of the computing resources otherwise spent drawing the object from different viewing angles. A virtual object whose observation distance is greater than or equal to a distance threshold is defined as a non-interactive virtual object, and a virtual object whose observation distance is smaller than the distance threshold is defined as an interactive virtual object. For example, a virtual object whose observation distance in the virtual environment exceeds 100 meters, such as a distant mountain, a distant forest, a distant herd of deer, or the distant sea, is defined as a non-interactive object, and every user is shown an image from only one viewing angle and distance. If there are N users in the interactive scene, the computing resources of N-1 user viewing angles are saved; the more users there are, the more resources the invention saves, and in a large interactive scene, such as a virtual football stadium with hundreds or even thousands of users, a large amount of computing resources can be saved.
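The distance-threshold rule and the render-count saving it yields can be sketched as follows. The 100-meter threshold is the example value from the text; the function names are assumptions for illustration.

```python
DISTANCE_THRESHOLD_M = 100.0  # example threshold from the text


def classify(observation_distance_m: float) -> str:
    """Objects at or beyond the threshold are non-interactive; nearer
    objects are interactive."""
    if observation_distance_m >= DISTANCE_THRESHOLD_M:
        return "non-interactive"
    return "interactive"


def renders_needed(n_users: int, classification: str) -> int:
    """An interactive object needs one render per user viewing angle; a
    non-interactive object needs only one shared render, saving
    n_users - 1 renders."""
    return n_users if classification == "interactive" else 1


# A distant mountain vs. a nearby dinner plate, viewed by 1000 users:
print(classify(2500.0), renders_needed(1000, classify(2500.0)))  # non-interactive 1
print(classify(1.5), renders_needed(1000, classify(1.5)))        # interactive 1000
```

For N = 1000 users, the distant mountain costs one render instead of 1000, which matches the text's claim of saving the resources of N-1 user viewing angles.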
an interactive virtual object processing step: for an interactive virtual object, displaying to each of the plurality of users a different image of the object from that user's own viewing angle. Specifically, an interactive virtual object is still presented to each user from a different viewing angle and distance, so the non-interactive virtual objects greatly reduce the required computing resources without affecting interaction or multi-user multi-view display.
Further, all virtual objects are initially defined as non-interactive virtual objects, and a non-interactive virtual object is redefined as an interactive virtual object once it is interacted with by any one of the plurality of users. If an interactive virtual object goes without interaction from any of the plurality of users for a time greater than or equal to a time threshold, it is redefined as a non-interactive virtual object until any user interacts with it again. The interactivity definition of each virtual object is thus adjusted dynamically and adaptively.
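This dynamic, self-adaptive redefinition rule can be sketched as a small state machine: every object starts non-interactive, interaction promotes it to interactive, and going a full time threshold without interaction demotes it again. The 30-second threshold and all names here are illustrative assumptions, not values from the patent.

```python
class AdaptiveVirtualObject:
    """Tracks an object's interactivity definition over time."""

    TIME_THRESHOLD_S = 30.0  # illustrative time threshold

    def __init__(self) -> None:
        self.interactive = False        # initially non-interactive
        self.last_interaction_s = None  # time of the most recent interaction

    def on_interaction(self, now_s: float) -> None:
        """Any user interaction (re)defines the object as interactive."""
        self.interactive = True
        self.last_interaction_s = now_s

    def update(self, now_s: float) -> None:
        """Demote the object once it has gone TIME_THRESHOLD_S or longer
        without being interacted with by any user."""
        if (self.interactive
                and now_s - self.last_interaction_s >= self.TIME_THRESHOLD_S):
            self.interactive = False


obj = AdaptiveVirtualObject()
obj.on_interaction(now_s=0.0)
obj.update(now_s=29.0)   # still interactive
obj.update(now_s=31.0)   # demoted back to non-interactive
print(obj.interactive)   # False
```

A renderer would call `update` each frame and consult `interactive` to decide between the shared view and per-user views.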
The foregoing describes specific embodiments of the present invention. It is to be understood that the invention is not limited to the specific embodiments described above, and that those skilled in the art may make various changes or modifications within the scope of the appended claims without departing from the spirit of the invention. In the absence of conflict, the embodiments of the present application and the features of the embodiments may be combined with one another arbitrarily.

Claims (10)

1. A method for multi-user object differentiation processing, comprising:
an interactivity defining step: defining a virtual object in a virtual environment as a non-interactive virtual object or an interactive virtual object; the virtual environment is displayed to a plurality of users by a plurality of VR glasses at the same time, and the users refer to VR glasses wearers;
non-interactive virtual object processing: for the non-interactive virtual object, displaying images of the non-interactive virtual object at the same view angle to the plurality of users;
an interactive virtual object processing step: for the interactive virtual object, respectively displaying to the plurality of users different images of the interactive virtual object from each user's viewing angle.
2. The method according to claim 1, wherein in the non-interactive virtual object processing step, images of the non-interactive virtual objects at a same viewing angle and a same viewing distance are displayed to the plurality of users for the non-interactive virtual objects.
3. The method of multi-user object differentiation processing according to claim 1, characterized in that virtual objects with an observation distance greater than or equal to a distance threshold are defined as non-interactive virtual objects; and defining the virtual object with the observation distance smaller than the distance threshold value as the interactive virtual object.
4. The method of multi-user object differentiation processing according to claim 1, characterized in that initially all virtual objects are defined as non-interactive virtual objects; a non-interactive virtual object is redefined as an interactive virtual object after it is interacted with by any one of the plurality of users.
5. The method according to claim 4, wherein if the time during which the interactive virtual object is not interacted with by any user of the plurality of users is greater than or equal to a time threshold, the interactive virtual object is redefined as a non-interactive virtual object until it is redefined as an interactive virtual object after any user interacts with it.
6. A system for multi-user object differentiation processing, comprising:
the interactivity definition module: defining a virtual object in a virtual environment as a non-interactive virtual object or an interactive virtual object; the virtual environment is displayed to a plurality of users by a plurality of VR glasses at the same time, and the users refer to VR glasses wearers;
a non-interactive virtual object processing module: for the non-interactive virtual object, displaying images of the non-interactive virtual object at the same view angle to the plurality of users;
an interactive virtual object processing module: for the interactive virtual object, respectively displaying to the plurality of users different images of the interactive virtual object from each user's viewing angle.
7. The system for multi-user object differentiation processing according to claim 6, wherein in said non-interactive virtual object processing module, for non-interactive virtual objects, images of non-interactive virtual objects at the same viewing angle and at the same viewing distance are displayed to said plurality of users.
8. The system for multi-user object differentiation processing according to claim 6, characterized in that virtual objects having an observation distance greater than or equal to a distance threshold are defined as non-interactive virtual objects; and defining the virtual object with the observation distance smaller than the distance threshold value as the interactive virtual object.
9. The system for multi-user object differentiation processing according to claim 6, characterized in that initially all virtual objects are defined as non-interactive virtual objects; a non-interactive virtual object is redefined as an interactive virtual object after it is interacted with by any one of the plurality of users.
10. The system for multi-user object differentiation processing according to claim 9, wherein if the time during which the interactive virtual object is not interacted with by any user of said plurality of users is greater than or equal to a time threshold, the interactive virtual object is defined as a non-interactive virtual object until it is redefined as an interactive virtual object after any user interacts with it.
CN202011510566.0A 2020-12-18 2020-12-18 Method and system for multi-user object distinguishing processing Pending CN112684883A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011510566.0A CN112684883A (en) 2020-12-18 2020-12-18 Method and system for multi-user object distinguishing processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011510566.0A CN112684883A (en) 2020-12-18 2020-12-18 Method and system for multi-user object distinguishing processing

Publications (1)

Publication Number Publication Date
CN112684883A true CN112684883A (en) 2021-04-20

Family

ID=75450008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011510566.0A Pending CN112684883A (en) 2020-12-18 2020-12-18 Method and system for multi-user object distinguishing processing

Country Status (1)

Country Link
CN (1) CN112684883A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140368534A1 (en) * 2013-06-18 2014-12-18 Tom G. Salter Concurrent optimal viewing of virtual objects
CN104603865A (en) * 2012-05-16 2015-05-06 丹尼尔·格瑞贝格 A system worn by a moving user for fully augmenting reality by anchoring virtual objects
CN105894567A (en) * 2011-01-07 2016-08-24 索尼互动娱乐美国有限责任公司 Scaling pixel depth values of user-controlled virtual object in three-dimensional scene
US20170038837A1 (en) * 2015-08-04 2017-02-09 Google Inc. Hover behavior for gaze interactions in virtual reality
CN107450721A (en) * 2017-06-28 2017-12-08 丝路视觉科技股份有限公司 A kind of VR interactive approaches and system
US20180342103A1 (en) * 2017-05-26 2018-11-29 Microsoft Technology Licensing, Llc Using tracking to simulate direct tablet interaction in mixed reality
CN108965858A (en) * 2018-08-31 2018-12-07 华中师范大学 A kind of multi-viewpoint three-dimensional video multiple access control method and device for supporting VR
CN109876438A (en) * 2019-02-20 2019-06-14 腾讯科技(深圳)有限公司 Method for displaying user interface, device, equipment and storage medium
CN110163943A (en) * 2018-11-21 2019-08-23 深圳市腾讯信息技术有限公司 The rendering method and device of image, storage medium, electronic device
CN110507990A (en) * 2019-09-19 2019-11-29 腾讯科技(深圳)有限公司 Interactive approach, device, terminal and storage medium based on virtual aircraft
CN111408133A (en) * 2020-03-17 2020-07-14 腾讯科技(深圳)有限公司 Interactive property display method, device, terminal and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YUKI KANETO ET AL.: "Space-sharing AR interaction on multiple mobile devices with a depth camera", IEEE Xplore, 7 July 2016 (2016-07-07) *
WANG YAN, WU WEI, ZHAO QINPING: "Research on a framework for Internet-based multi-user shared virtual environments", Journal of Computer Research and Development, no. 03, 15 March 2002 (2002-03-15) *
CHEN LING: "Research on the influence of multiple spatial viewpoints and mutual awareness on collaborative object manipulation", Wanfang Database, 5 December 2006 (2006-12-05) *

Similar Documents

Publication Publication Date Title
Bao et al. Motion-prediction-based multicast for 360-degree video transmissions
US11153615B2 (en) Method and apparatus for streaming panoramic video
US20020056120A1 (en) Method and system for distributing video using a virtual set
CN111970524B (en) Control method, device, system, equipment and medium for interactive live broadcast and microphone connection
CN105847718B (en) Live video barrage display methods based on scene Recognition and its display device
WO2021190221A1 (en) Method for providing and method for acquiring immersive media, apparatus, device, and storage medium
KR102564729B1 (en) Method and apparatus for transmitting information on 3D content including a plurality of viewpoints
CN106454388B (en) A kind of method and apparatus for determining live streaming setting information
CN108401163B (en) Method and device for realizing VR live broadcast and OTT service system
CN110662119A (en) Video splicing method and device
WO2019048733A1 (en) Transmission of video content based on feedback
CN113542896B (en) Video live broadcast method, equipment and medium of free view angle
CN114449303A (en) Live broadcast picture generation method and device, storage medium and electronic device
CN108141564B (en) System and method for video broadcasting
CN112684883A (en) Method and system for multi-user object distinguishing processing
CN105872480A (en) System and method for controlling playing on LED screen based on real-time camera shooting of mobile phone
US20140327781A1 (en) Method for video surveillance, a related system, a related surveillance server, and a related surveillance camera
CN111131749B (en) Video conference control method and device
US9325963B2 (en) Device and method for rendering and delivering 3-D content
CN113641247A (en) Sight angle adjusting method and device, electronic equipment and storage medium
CN113014814A (en) Video acquisition method, video acquisition terminal and video live broadcast system
CN103019912A (en) Processing monitoring data in a monitoring system
CN108965959A (en) Broadcasting, acquisition methods, mobile phone, PC equipment and the system of VR video
CN111246253B (en) Video streaming method and device
CN112291577B (en) Live video sending method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination