CN115576427A - XR-based multi-user online live broadcast method and system - Google Patents
- Publication number
- CN115576427A CN115576427A CN202211322357.2A CN202211322357A CN115576427A CN 115576427 A CN115576427 A CN 115576427A CN 202211322357 A CN202211322357 A CN 202211322357A CN 115576427 A CN115576427 A CN 115576427A
- Authority
- CN
- China
- Prior art keywords
- participant
- information
- space
- display
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1454—Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Geometry (AREA)
- Computer Hardware Design (AREA)
- Information Transfer Between Computers (AREA)
Abstract
An embodiment of this specification provides an XR-based multi-user online live broadcast method and system. The method includes: establishing communication connections with the terminals of at least two participants; creating a virtual space and creating in it a virtual character corresponding to each of the at least two participants; determining, through a preset 3D coordinate position algorithm and based on the acquired position data of each participant in real space, the position information of the corresponding virtual character in the virtual space; displaying each virtual character in the virtual space based on its position information; and acquiring shared data uploaded by the participants and presenting the shared data in the virtual space.
Description
Divisional Application Statement
This application is a divisional application of Chinese application No. 2022111915615, filed on September 28, 2022, entitled "An XR-Based Multi-Person Collaboration Method and System".
Technical Field
This specification relates to the field of communication technology, and in particular to an XR-based multi-user online live broadcast method and system.
Background
In current multi-person communication scenarios, time and travel costs often prevent people in different locations from attending important meetings. Existing multi-person remote communication solutions on the market can only use computers and mobile phones to display flat images for explanation and discussion.
Therefore, it is desirable to provide an XR-based multi-person collaboration method and system that offers a more direct and effective way for participants in different locations to communicate and collaborate.
Summary of the Invention
One embodiment of this specification provides an XR-based multi-user online live broadcast method, comprising: establishing communication connections with the terminals of at least two participants; creating a virtual space, and creating in the virtual space a virtual character corresponding to each of the at least two participants; determining, through a preset 3D coordinate position algorithm and based on the acquired position data of a participant in real space, the position information of the corresponding virtual character in the virtual space; displaying the virtual character in the virtual space based on its position information; and acquiring shared data uploaded by the participants and presenting the shared data in the virtual space.
One embodiment of this specification provides an XR-based multi-user online live broadcast system, comprising: a connection module configured to establish communication connections with the terminals of at least two participants; a positioning module configured to create a virtual space and to create in the virtual space a virtual character corresponding to each of the at least two participants, the positioning module being further configured to determine, through a preset 3D coordinate position algorithm and based on the acquired position data of a participant in real space, the position information of the corresponding virtual character in the virtual space, and to display the virtual character in the virtual space based on its position information; a download module configured to acquire shared data uploaded by the participants; and a display module configured to present the shared data in the virtual space.
One embodiment of this specification provides an XR-based multi-user online live broadcast apparatus, comprising: at least one storage medium storing computer instructions; and at least one processor executing the computer instructions to implement the XR-based multi-user online live broadcast method described above.
One embodiment of this specification provides a computer-readable storage medium storing computer instructions. When a computer reads the computer instructions, the computer executes the XR-based multi-user online live broadcast method described above.
Brief Description of the Drawings
This specification is further illustrated by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are non-limiting; in these embodiments, the same reference numeral denotes the same structure, wherein:
FIG. 1 is a schematic diagram of an application scenario of an XR-based multi-person collaboration system according to some embodiments of the present invention;
FIG. 2 is an exemplary block diagram of an XR-based multi-person collaboration system according to some embodiments of this specification;
FIG. 3 is an exemplary flowchart of an XR-based multi-person collaboration method according to some embodiments of this specification;
FIG. 4 is a flowchart of an exemplary method for determining the position information of a participant in a virtual space according to some embodiments of this specification;
FIG. 5 is an exemplary flowchart of an XR-based multi-user online live broadcast method according to some embodiments of this specification;
FIG. 6 is an exemplary flowchart of real-time updating of position information according to some embodiments of this specification;
FIG. 7 is an exemplary flowchart of determining the display priority of sub-action information according to some embodiments of this specification;
FIG. 8 is an exemplary flowchart of a data processing method for XR according to some embodiments of this specification;
FIG. 9 is an exemplary flowchart of displaying content to be marked according to some embodiments of this specification;
FIG. 10 is an exemplary flowchart of determining predicted display content according to some embodiments of this specification.
Detailed Description
To more clearly illustrate the technical solutions of the embodiments of this specification, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some examples or embodiments of this specification; those of ordinary skill in the art can also apply this specification to other similar scenarios based on these drawings without creative effort. Unless otherwise apparent from the context or otherwise indicated, the same reference numeral in the figures denotes the same structure or operation.
It should be understood that the terms "system", "device", "unit" and/or "module" as used herein are one way of distinguishing different components, elements, parts, portions, or assemblies at different levels. However, these terms may be replaced by other expressions that serve the same purpose.
As used in this specification and the claims, the terms "a", "an", "one" and/or "the" do not specifically denote the singular and may include the plural unless the context clearly indicates otherwise. Generally, the terms "comprise" and "include" merely indicate the inclusion of the explicitly identified steps and elements; these steps and elements do not constitute an exclusive list, and a method or device may also include other steps or elements.
Flowcharts are used in this specification to illustrate the operations performed by a system according to embodiments of this specification. It should be understood that the preceding or following operations are not necessarily performed in the exact order shown. Instead, the steps may be processed in reverse order or simultaneously. Other operations may also be added to these processes, or one or more steps may be removed from them.
FIG. 1 is a schematic diagram of an application scenario 100 of an XR-based multi-person collaboration system according to some embodiments of the present invention. XR (Extended Reality) is the collective term for various new immersive technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). XR combines the real and the virtual through computers to create a virtual space that supports human-computer interaction.
As shown in FIG. 1, the application scenario 100 of the XR-based multi-person collaboration system may include a processing device 110, a network 120, a storage device 130, a terminal 140, and a data acquisition device 150. The components in the application scenario 100 may be connected in one or more different ways. For example, the data acquisition device 150 may be connected to the processing device 110 through the network 120. As another example, as shown in FIG. 1, the data acquisition device 150 may be directly connected to the processing device 110.
In some embodiments, the application scenario 100 of the XR-based multi-person collaboration system may include scenarios in which multiple people who are not in the same physical space need to collaborate. For example, the application scenario 100 may include academic conferences, remote consultations, teaching and training, surgical guidance, live broadcasting, and the like. The XR-based multi-person collaboration system may create a virtual space, through which the application scenario 100 may be realized. For example, in a surgical guidance scenario, the medical staff participating in an operation may interact and communicate in the virtual space, share patient information recorded by medical equipment, conduct a live broadcast with experts in the virtual space, and receive remote operation guidance from the experts. Further, 3D models, such as a heart model, may be shared and disassembled in the virtual space, and surgery-related video materials may be presented in the virtual space; the surgical staff can share these materials by wearing terminal devices such as VR and AR equipment.
The data acquisition device 150 may be configured to acquire audio and video data related to a participant and the space in which the participant is located. The data acquisition device 150 may include a panoramic camera 151, an ordinary camera 152, a motion sensor (not shown), and the like.
The processing device 110 may process data and/or information obtained from the storage device 130, the terminal 140, and/or the data acquisition device 150. The processing device 110 may include a server data center. In some embodiments, the processing device 110 may host a simulated virtual world, or a metaverse domain, for the terminal 140. For example, the processing device 110 may generate a participant's position data based on images of the participant collected by the data acquisition device 150. As another example, the processing device 110 may generate a participant's position information in the virtual space based on the participant's position data.
In some embodiments, the processing device 110 may be a computer, a user console, a single server, a server group, or the like. The server group may be centralized or distributed. For example, a designated region of the metaverse domain may be simulated by a single server. In some embodiments, the processing device 110 may include multiple simulation servers dedicated to physics simulation to manage interactions and handle collisions between characters and objects in the metaverse.
In some embodiments, the processing device 110 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or a combination thereof.
In some embodiments, the processing device 110 may include a storage device dedicated to storing data related to objects and characters in the metaverse. The data stored therein may include object shapes, avatar shapes and appearances, audio clips, metaverse-related scripts, and other metaverse-related objects. In some embodiments, the processing device 110 may be implemented by a computing device having a processor, a memory, input/output (I/O), communication ports, and the like. In some embodiments, the processing device 110 may be implemented on a processing circuit (e.g., a processor, a CPU) of the terminal 140.
The terminal 140 may be a device that allows a user to participate in a virtual reality experience. In some embodiments, the terminal 140 may include a VR headset, VR glasses, a VR patch, a stereoscopic head-mounted display or the like, a personal computer (PC), a mobile phone, or any combination thereof. For example, the terminal 140 may include Google Glass™, Oculus Rift™, Gear VR™, etc. Specifically, the terminal 140 may include a display device 141 on which virtual content can be rendered and displayed. The user may view virtual content (e.g., content to be marked, marking information, etc.) through the display device 141.
In some embodiments, the user may interact with virtual content through the display device 141. For example, when the user wears the display device 141, the user's head movement and/or gaze direction can be tracked, so that virtual content is rendered in response to changes in the user's position and/or orientation, providing an immersive and convincing virtual reality experience that reflects changes in the user's perspective.
In some embodiments, the terminal 140 may further include an input component 142. The input component 142 enables user interaction with the virtual content displayed on the display device 141, where the virtual content may include data information uploaded by participants. For example, the input component 142 may include a touch sensor, a microphone, or the like configured to receive user input, which can be provided to the terminal 140 and control the virtual world by changing the visual content presented on the display device. In some embodiments, the user input received by the input component may include, for example, touch, voice input, and/or gesture input, and may be sensed by any suitable sensing technology (e.g., capacitive, resistive, acoustic, optical). In some embodiments, the input component 142 may include a handle, gloves, a stylus, a game console, and the like.
In some embodiments, the display device 141 (or the processing device 110) may track the input component 142 and render virtual elements based on the tracking. A virtual element may include a representation of the input component 142 (e.g., an image of the user's hand or fingers). The virtual element may be rendered at a 3D position in the virtual reality experience corresponding to the real position of the input component 142.
For example, one or more sensors may be used to track the input component 142. The display device 141 may receive, through a wired or wireless network, signals collected from the input component 142 by the one or more sensors. The signals may include any suitable information enabling tracking of the input component 142, such as the output of one or more inertial measurement units (e.g., accelerometers, gyroscopes, magnetometers) in the input component 142, a Global Positioning System (GPS) sensor in the input component 142, or the like, or a combination thereof.
The signals may indicate the position (e.g., in the form of three-dimensional coordinates) and/or orientation (e.g., in the form of three-dimensional rotational coordinates) of the input component 142. In some embodiments, the sensors may include one or more optical sensors for tracking the input component 142. For example, the sensors may use visible light and/or depth cameras to locate the input component 142.
In some embodiments, the input component 142 may include a haptic component that can provide haptic feedback to the user. For example, the haptic component may include multiple force sensors, motors, and/or actuators. The force sensors may measure the magnitude and direction of the force applied by the user and input these measurements to the processing device 110.
The processing device 110 may convert the input measurements into the motion of one or more virtual elements (e.g., a virtual finger, a virtual palm, etc.) that can be displayed on the display device 141. The processing device 110 may then calculate one or more interactions between the one or more virtual elements and at least a portion of the participants, and output these interactions as computer signals (i.e., signals representing feedback forces). A motor or actuator in the haptic component may apply a feedback force to the user according to the computer signals received from the processing device 110, so that the participant feels the real touch of an object during surgical guidance. In some embodiments, the magnitude of the feedback force may be preset according to default settings of the XR-based multi-person collaboration system, or by a user or operator through, for example, a terminal device (e.g., the terminal 140).
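The force-to-motion conversion and the resulting feedback force are only described abstractly above. A minimal sketch follows, assuming a linear input gain and a simple spring-contact model; `FORCE_GAIN`, `STIFFNESS`, and both function names are illustrative assumptions, not values from the specification:

```python
# Illustrative sketch only: the specification does not disclose the actual
# mapping, so the linear gain and spring model here are assumptions.

FORCE_GAIN = 0.01   # metres of virtual displacement per newton (assumed)
STIFFNESS = 50.0    # virtual contact stiffness in N/m (assumed)

def force_to_displacement(force_n):
    """Convert a measured input force (N) to a virtual-element displacement (m)."""
    return force_n * FORCE_GAIN

def compute_feedback(displacement_m, surface_distance_m):
    """Return the feedback force (N) once the virtual finger passes the surface."""
    overlap = max(0.0, displacement_m - surface_distance_m)
    return STIFFNESS * overlap

move = force_to_displacement(2.0)        # a 2 N push -> 0.02 m virtual motion
feedback = compute_feedback(move, 0.005)  # object surface 5 mm away
```

A motor in the haptic component would then render `feedback` back to the user's hand.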
In some embodiments, the application scenario 100 of the XR-based multi-person collaboration system may further include an audio device (not shown) configured to provide audio signals to the user. For example, an audio device (e.g., a speaker) may play the sounds made by participants. In some embodiments, the audio device may include an electromagnetic speaker (e.g., a moving-coil or moving-iron speaker), a piezoelectric speaker, an electrostatic speaker (e.g., a condenser speaker), or the like, or any combination thereof. In some embodiments, the audio device may be integrated into the terminal 140. In some embodiments, the terminal 140 may include two audio devices located on the left and right sides of the terminal 140, respectively, to provide audio signals to the user's left and right ears.
The storage device 130 may be used to store data and/or instructions. For example, the storage device 130 may store related information and/or data collected by the data acquisition device 150. The storage device 130 may obtain data and/or instructions from, for example, the processing device 110. In some embodiments, the storage device 130 may store data and/or instructions that the processing device 110 executes or uses to perform the exemplary methods described in this specification. In some embodiments, the storage device 130 may be integrated into the processing device 110.
The network 120 may provide a channel for exchanging information and/or data. In some embodiments, the processing device 110, the storage device 130, the terminal 140, and the data acquisition device 150 may exchange information through the network 120. For example, the terminal 140 may obtain data information sent by the processing device 110 through the network 120.
It should be noted that the above description of the application scenario 100 of the XR-based multi-person collaboration system is for illustrative purposes only and is not intended to limit the scope of the present disclosure. For example, the assembly and/or functions of the application scenario 100 may vary or change depending on the specific implementation scenario. In some embodiments, the application scenario 100 may include one or more additional components (e.g., a storage device, a network, etc.), and/or one or more of the components described above may be omitted. In addition, two or more components of the application scenario 100 may be integrated into one component, and one component of the application scenario 100 may be implemented on two or more sub-components.
FIG. 2 is an exemplary block diagram of an XR-based multi-person collaboration system 200 according to some embodiments of this specification. In some embodiments, the XR-based multi-person collaboration system 200 may include a connection module 210, a positioning module 220, a download module 230, a display module 240, and a generation module 250.
The connection module 210 may be configured to establish communication connections with the terminals of at least two participants.
The positioning module 220 may be configured to determine the position information of the at least two participants in the virtual space through a preset 3D coordinate position algorithm.
In some embodiments, the position information of the at least two participants in the virtual space is related to their position data in real space, and the position data of the at least two participants in real space is acquired through their terminals.
In some embodiments, the positioning module 220 may be further configured to: create a virtual space; create in the virtual space a virtual character corresponding to each of the at least two participants, wherein each virtual character has initial position information in the virtual space; acquire a participant's position data in real space and associate the position data with the position information of the corresponding virtual character in the virtual space; acquire the participant's movement data in real space based on the participant's position data; and update the initial position information based on the movement data through the preset 3D coordinate position algorithm to determine the updated position information.
In some embodiments, the positioning module 220 may be configured to create a virtual space and to create in the virtual space a virtual character corresponding to each of the at least two participants. In some embodiments, the positioning module 220 is further configured to determine, through a preset 3D coordinate position algorithm and based on the acquired position data of a participant in real space, the position information of the corresponding virtual character in the virtual space. In some embodiments, the positioning module 220 may be configured to display the virtual character in the virtual space based on its position information.
In some embodiments, the positioning module 220 may be further configured to: scan the real space in which a participant is located and spatially locate the participant; for a participant whose scan is complete, determine the participant's real-time position data in real space; determine the participant's first movement information in real space based on the real-time position data; determine the initial position information of the virtual character in the virtual space; acquire the participant's first action information in real space, the first action information including sub-action information of various parts of the participant's body; and synchronously update, through the preset 3D coordinate position algorithm, the second movement information and/or second action information of the virtual character based on the first movement information and/or first action information.
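The specification does not disclose the internals of the preset 3D coordinate position algorithm. The sketch below assumes the simplest possible form — the avatar's virtual-space position is updated by the participant's real-space movement delta under a uniform scale; `SCALE` and `update_avatar` are hypothetical names introduced for illustration:

```python
# Minimal sketch of the "preset 3D coordinate position algorithm" described
# above. The actual algorithm is not disclosed; a uniform real-to-virtual
# scale plus a movement delta is assumed here purely for illustration.

SCALE = 1.0  # assumed real-to-virtual scale factor

def update_avatar(initial_virtual_pos, real_start, real_now):
    """Apply the participant's real-space movement delta to the avatar's position."""
    return tuple(
        v + SCALE * (now - start)
        for v, start, now in zip(initial_virtual_pos, real_start, real_now)
    )

# A participant who walks 1 m along x in real space moves the avatar 1 m along x.
new_pos = update_avatar((0.0, 0.0, 0.0), (2.0, 3.0, 0.0), (3.0, 3.0, 0.0))
```

Running this per frame, with `real_now` refreshed from the terminal's real-time position data, keeps the avatar's movement synchronized with the participant's.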
In some embodiments, the positioning module 220 may be further configured to: determine at least one core body part of a participant based on the current scene; determine the display priority of the sub-action information of the various parts of the participant's body based on the at least one core body part; determine display parameters of the action information based on the display priority of the sub-action information, the display parameters including display frequency and display precision; and synchronize the second action information of the virtual character corresponding to the participant based on the display parameters.
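The scene-dependent core body parts and the concrete display parameters are not enumerated in the specification. The sketch below assumes example scenes and parameter values purely for illustration:

```python
# Hypothetical sketch: the scene-to-core-part mapping and the frequency /
# precision values are assumptions, not values from the specification.

CORE_PARTS_BY_SCENE = {
    "surgery_guidance": {"hands", "head"},
    "lecture": {"head", "mouth"},
}

def display_params(scene, body_part):
    """Core body parts get higher priority: higher update frequency, finer precision."""
    core = CORE_PARTS_BY_SCENE.get(scene, set())
    if body_part in core:
        return {"priority": "high", "frequency_hz": 60, "precision": "fine"}
    return {"priority": "low", "frequency_hz": 10, "precision": "coarse"}
```

Under this scheme, a surgeon's hand motions would be synchronized at full rate while, say, torso motion is updated coarsely, saving bandwidth where it matters least.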
The download module 230 may be configured to save the data information uploaded by the at least two participants and provide a data download service to the at least two participants, wherein the data download service includes at least one of creating a data download channel and providing download resources.
In some embodiments, the download module 230 may be configured to acquire shared data uploaded by participants.
The display module 240 may be configured to synchronously display data information on the terminals of the at least two participants.
In some embodiments, the terminal includes at least one of a VR display device, an AR display device, a mobile phone, and a PC.
In some embodiments, the display module 240 may be configured to display shared data in the virtual space.
In some embodiments, the display module 240 may be configured to create at least one second space and/or second window in the virtual space, wherein each of the at least one second space and/or second window corresponds to one participant, and to display the corresponding participant's shared data through the second space and/or second window.
In some embodiments, the display module 240 may be configured to: display content to be marked on a canvas, wherein the content to be marked is data that has already been annotated and/or raw data that has not been annotated; acquire marking information created on the canvas by a marking requester using a ray interaction system, wherein the marking information includes marking content and a marking path; and share the content to be marked and the marking information to the terminals of other participants for display.
In some embodiments, the content to be marked is content displayed in any window and at any position among the multiple windows on the marking requester's terminal.
In some embodiments, the display module 240 may be further configured to: acquire the marking requester's display settings, which include real-time display of marking and display after marking is complete; and, based on the display settings, share the content to be marked and its marking information to the terminals of other participants for display.
In some embodiments, the display module 240 may be further configured to: determine the viewing angle information of each of the other participants based on each participant's position information; and determine and display each participant's display content based on that participant's viewing angle information, the display content including the content to be marked and/or the marking information under that viewing angle.
The generation module 250 may be configured to create a canvas in the virtual space in response to a marking requester's request.
It should be noted that the above description of the system and its modules is for convenience of description only and does not limit this application to the scope of the illustrated embodiments. It can be understood that, after understanding the principle of the system, those skilled in the art may arbitrarily combine the various modules, or form a subsystem connected to other modules, without departing from this principle. For example, the connection module 210, the positioning module 220, the download module 230, and the display module 240 may be combined to form an XR-based multi-user online live broadcast system. As another example, the display module 240 and the generation module 250 may be combined to form a data processing system for XR. Such variations are all within the protection scope of this application.
FIG. 3 is an exemplary flowchart of a multi-person collaboration method according to some embodiments of this specification. As shown in FIG. 3, the process 300 may include the following steps.
Step 310: establish communication connections with the terminals of at least two participants. In some embodiments, step 310 may be performed by the connection module 210.
A participant refers to a person taking part in the collaboration. Participants may differ depending on the collaboration scenario. For example, in an operating room VR scenario, the participants may include operating personnel (e.g., doctors performing the operation), remote experts, and operations staff. The operations staff can manage user permissions and archive the guidance process for later review.
Multi-person collaboration can be used for academic conferences, remote consultations, medical training, live surgery broadcasts, medical device training, and the like. During multi-person collaboration, real-time multi-user live broadcasting, real-time annotation sharing, etc. can be realized. For details on real-time multi-user live broadcasting, refer to the descriptions elsewhere in this specification, e.g., FIGS. 5, 6, and 7. For details on real-time annotation sharing, refer to the descriptions elsewhere in this specification, e.g., FIGS. 8, 9, and 10.
In some embodiments, a server data center may be established, and participants realize communication connections by connecting their terminals to the server data center.
The server data center may be a platform that supports multi-person collaborative instant communication. The server data center may include at least one server whose performance meets the requirements of multi-person collaborative operation: it allows multiple participants to access the server data center from multiple terminals, guarantees stability and real-time performance when multiple terminals are connected, and ensures the security and integrity of server data.
In some embodiments, the communication connection may be used for instant audio communication, instant video communication, multi-person virtual space technical exchange, and the like. Instant audio communication may include the recording, transmission, and reception of audio information. Instant video communication may include the recording, decoding, transmission, and reception of video information. Multi-person collaboration can be realized through multi-person virtual space technical exchange. For example, during a live surgery broadcast, experts in other locations may be invited to provide remote assistance. As another example, an invited participant may view the live broadcast of the inviting participant, communicate with other participants, and offer help. As yet another example, a participant may use the marking function to annotate locally and display the marked content to other participants in real time.
A participant's terminal refers to the device a participant uses to take part in the collaboration. In some embodiments, a participant's terminal may include a device for connecting the participant to the server data center and a data acquisition device. The data acquisition device collects audio, video, and other data of the real space in which the participant is located, for example, a panoramic camera, an ordinary camera, AR glasses, a mobile phone, a motion sensor, a depth camera, etc. The participant terminal may also include a display device that can display data obtained from the server data center.
In some embodiments, the terminal includes at least one of a VR display device, an AR display device, a mobile phone, and a PC. For example, the device connecting a participant to the server data center may include AR glasses, a VR headset, a PC, a mobile phone, etc.
Step 320: determine the position information of the at least two participants in the virtual space through a preset 3D coordinate position algorithm. In some embodiments, step 320 may be performed by the positioning module 220.
The virtual space refers to a space in which virtual objects are displayed. The virtual space may be created based on information about the real space or based on preset virtual information; for details on creating the virtual space, refer to the descriptions elsewhere in this specification, e.g., FIG. 4.
In some embodiments, the virtual space may correspond to different scenarios, for example, academic conferences, teaching and training, case interrogations, operating room VR scenes, surgical procedure detail scenes, pathological data sharing scenes, surgical navigation information scenes, patient vital signs data scenes, and the like. The participants' position information and data information can be displayed in the virtual space. For details on position information and data information, refer to other parts of this specification, e.g., step 340 of FIG. 3.
Merely by way of example, in a surgical procedure detail scene, a surgeon may wear a terminal device connected to the server data center, and the real-space surgical view seen by the surgeon may be projected into the virtual space and broadcast live in real time to remote experts and scholars. By connecting to the server data center, the remote experts and scholars can watch and follow the close-up surgical details in real time, and the surgeon can communicate with the remote experts via audio and video and receive remote guidance.
As another example, in a teaching and training scenario, teachers and students may join the virtual space through participant terminals. The teacher may conduct a live training broadcast in the virtual space and import and share 3D models, images, text, and other materials into the virtual space; teachers and students may move around and interact in the virtual space, and may also edit and mark the shared materials.
In some embodiments, a spatial coordinate system may be set in the virtual space and used to represent the spatial position relationships of virtual objects in the virtual space. Multiple participants can communicate and interact in the same virtual space through their terminals.
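One way to realize such a shared spatial coordinate system is a registry that stores each virtual object's coordinates so spatial relations (e.g., the distance between two avatars) can be computed. The class and method names below are illustrative assumptions, not the patented implementation:

```python
import math

# Minimal sketch of a shared virtual-space coordinate system: a registry of
# virtual objects keyed by name, each with (x, y, z) coordinates. Illustrative only.

class VirtualSpace:
    def __init__(self):
        self.objects = {}  # name -> (x, y, z)

    def place(self, name, x, y, z):
        """Place (or move) a virtual object at the given coordinates."""
        self.objects[name] = (x, y, z)

    def distance(self, a, b):
        """Euclidean distance between two registered objects."""
        return math.dist(self.objects[a], self.objects[b])

space = VirtualSpace()
space.place("avatar_doctor", 0.0, 0.0, 0.0)
space.place("avatar_expert", 3.0, 4.0, 0.0)
```

Because every participant's terminal renders from the same registry, all of them see consistent spatial relationships between avatars, windows, and shared models.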
In some embodiments, the virtual objects may include a space background, virtual characters, virtual windows, a canvas, data information, and the like. In some embodiments, the virtual space may include a space background, which may be a real-time image of the real space or another preset image. In some embodiments, the virtual space may include a virtual character corresponding to each participant. For details on virtual characters, refer to other parts of this specification, e.g., FIG. 4.
In some embodiments, the virtual space may include multiple second windows and/or multiple second spaces. For details on the second window and/or second space, refer to other parts of this specification, e.g., FIG. 5. In some embodiments, the virtual space may include a canvas. For details on the canvas, refer to other parts of this specification, e.g., FIG. 8.
Position information refers to information related to a participant's position and/or actions in the virtual space. The position information may include the participant's initial position information and real-time position information in the virtual space. The initial position information refers to the initial position of each participant in the virtual space. For details on the initial position information, refer to other parts of this specification, e.g., FIG. 4.
In some embodiments, the position information may include the action information of the participant's corresponding virtual character in the virtual space. Action information refers to the body movement information generated by a participant in real space. For details on action information, refer to other parts of this specification, e.g., FIG. 6.
In some embodiments, the action information may also include the head movement information of the virtual character in the virtual space corresponding to the participant's actual movement, which can be used to determine the participant's viewing angle information in the virtual space. For details on viewing angle information, refer to descriptions elsewhere in this specification, e.g., FIG. 9.
In some embodiments, the position information of the at least two participants in the virtual space is related to their position data in real space, and the position data of the at least two participants in real space is acquired through their terminals.
Real space refers to the space in which a participant is actually located. For example, the real space may be the office, study, or outdoor place where the participant is.
Position data refers to data related to a participant's position and/or actions in real space. In some embodiments, the position data may include the participant's position and/or actions in real space, where the position may be represented by coordinates in real space. For example, the coordinates may be expressed as longitude and latitude, or as coordinate information based on another preset coordinate system. In some embodiments, the position data may include the participant's coordinate position, movement speed, acceleration, movements of body parts, the orientation of the participant's terminal (i.e., the direction the participant is facing), and the like. The position data may include real-time position data.
The participant's position data may be determined by positioning devices and by data acquisition devices in the physical space where the participant is located (e.g., cameras, sensors), and may be obtained by receiving the data sent by these devices. For example, the user's position may be determined based on data received from a positioning device. As another example, the participant's actions may be determined through cameras and sensors. The position data may be acquired by connecting to the positioning devices and the data acquisition devices in the physical space where the participant is located.
Exemplary positioning systems may include the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), the BeiDou Navigation Satellite System, the Galileo positioning system, the Quasi-Zenith Satellite System (QZSS), base-station positioning systems, and Wi-Fi positioning systems.
In some embodiments, the position information of a participant in the virtual space may be determined based on the participant's position data in the physical space. For example, a database may be preset in the server data center, in which the participant's position data corresponds to the position information of the avatar. The database may be established based on the correspondence between position data and position information in historical data; this correspondence may be determined by a 3D coordinate position algorithm.
In some embodiments, the position information may be updated based on the participant's position data in the physical space, so as to synchronize the participant's position between the physical space and the virtual space. For details on updating the position information, refer to other parts of this specification, for example, FIG. 4.
In some embodiments, the 3D coordinate position algorithm may convert position data in the physical space (e.g., 3D coordinates) into position information in the virtual space through a projection transformation matrix. For example, coordinates in the physical space may be converted into coordinates in the virtual-space coordinate system.
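The matrix conversion mentioned above can be sketched with a 4x4 homogeneous transform applied to a 3D point. This is only an illustration of the general technique: the specification does not disclose the actual matrix, which in practice would be calibrated from the spatial scan; the scale and translation values below are invented.

```python
def transform_point(matrix, point):
    """Apply a 4x4 homogeneous transform to a 3D point (row-major matrix)."""
    x, y, z = point
    vec = (x, y, z, 1.0)
    out = [sum(matrix[r][c] * vec[c] for c in range(4)) for r in range(4)]
    w = out[3]  # divide out the homogeneous coordinate
    return (out[0] / w, out[1] / w, out[2] / w)

# Example mapping: virtual space uses the same axes as the physical space,
# scaled by 0.5 and shifted by (10, 0, 0). These numbers are assumptions.
real_to_virtual = [
    [0.5, 0.0, 0.0, 10.0],
    [0.0, 0.5, 0.0,  0.0],
    [0.0, 0.0, 0.5,  0.0],
    [0.0, 0.0, 0.0,  1.0],
]

print(transform_point(real_to_virtual, (2.0, 4.0, 0.0)))  # (11.0, 2.0, 0.0)
```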
In some embodiments, determining the position information of the at least two participants in the virtual space through the preset 3D coordinate position algorithm includes: creating a virtual space; creating, in the virtual space, an avatar corresponding to each of the at least two participants, where the avatar has initial position information in the virtual space; acquiring the participant's position data in the physical space and associating the position data with the position information of the corresponding avatar in the virtual space; acquiring the participant's movement data in the physical space based on the participant's position data; and updating the initial position information based on the movement data through the preset 3D coordinate position algorithm to determine the updated position information. For details about determining the position information, refer to other parts of this specification, for example, FIG. 4.
In some embodiments, determining, through the preset 3D coordinate position algorithm and based on the acquired position data of the participant in the physical space, the position information of the avatar corresponding to the participant in the virtual space includes: scanning the physical space where the participant is located and spatially locating the participant; for a participant whose scan is complete, determining the participant's real-time position data in the physical space; determining the participant's first movement information in the physical space based on the real-time position data; determining the initial position information of the avatar in the virtual space; acquiring the participant's first action information in the physical space, where the first action information includes sub-action information of each part of the participant's body; and synchronously updating, through the preset 3D coordinate position algorithm, the second movement information and/or second action information of the avatar corresponding to the participant based on the first movement information and/or the first action information. Further, the position information is determined based on the second movement information and/or the second action information. For details about determining the position information, refer to other parts of this specification, for example, FIG. 6.
Step 330: save the data information uploaded by the at least two participants, and provide a data download service to the at least two participants, where the data download service includes at least one of creating a data download channel and providing download resources. In some embodiments, step 330 may be performed by the download module 230.
Data information may refer to information shared in the virtual space. For example, data information may include 3D models, videos, documents, operation manuals, etc. In some embodiments, the data information may include content to be marked and marking information. For details about the content to be marked and the marking information, refer to the descriptions elsewhere in this specification, for example, FIG. 8.
In some embodiments, the data information may be data uploaded by participants or data retrieved from other platforms (e.g., a network cloud platform). The data information may be saved in a storage device of the server data center. In response to a participant's data request, the server data center may connect to other platforms and retrieve the corresponding data information, retrieve the data information uploaded by participants to the server data center, or retrieve the data information saved in the storage device of the server data center.
The data download service may refer to a service that downloads information through a communication module (e.g., an LTE communication module) connected to a corresponding communication network (e.g., a 4G network). Participants can obtain data information through the data download service.
In some embodiments, download channels may be created between the server data center and the participant terminals. There may be multiple download channels, each corresponding to one participant. Participants can obtain the required data information through the data download channels.
In some embodiments, the data information may be stored in the storage device of the server data center by category (e.g., by data type), with each type of data information corresponding to one data download channel; in response to the type of a participant's data request, the corresponding data information can be obtained from the respective data download channel.
Step 340: synchronously display the data information on the terminals of the at least two participants. In some embodiments, step 340 may be performed by the display module 240.
In some embodiments, the data information may be displayed in the virtual space: the participant terminals may establish connections with the server data center to obtain the data information, and the data information may be displayed synchronously through the display devices of the participant terminals.
In some embodiments, different display modes may be determined according to different participant terminals. For example, a PC or a mobile phone may display the data information through its screen, while AR glasses and/or a VR headset may display the data information through the screen projected inside the AR glasses and/or VR headset.
By establishing a 3D virtual space and synchronously sharing data information within it, local and remote sites are kept in sync, which resolves the headcount and venue limits of offline multi-person meetings. Through the virtual space, participants can communicate face to face and interact more intuitively with the objects in the scene; more platforms are supported, so participants can join the discussion anytime and anywhere on different devices, greatly reducing time cost and allowing collaborative teams to form efficiently and quickly. Meanwhile, the virtual space can produce records, which later facilitate learning and reference by other personnel, experience summarization, and even investigation and evidence collection.
FIG. 4 is a flowchart of an exemplary method for determining the position information of a participant in the virtual space according to some embodiments of the present specification. In some embodiments, the process 400 may be performed by the positioning module 220.
Step 410: create a virtual space, and create in the virtual space an avatar corresponding to each of the at least two participants, where the avatar has initial position information in the virtual space.
In some embodiments, a coordinate system may be established in any physical space; model data of a physical-space model may be created based on this coordinate system and the physical-space scan data, and a physical-space coordinate system corresponding to the model may be established. Based on the model data of the physical-space model, a virtual-space model corresponding to the physical-space model may be established through mapping, together with a virtual-space coordinate system corresponding to the virtual-space model.
In some embodiments, the virtual space may be created from a design; for example, the virtual space may be a designed virtual operating room or the like.
An avatar may refer to the character image corresponding to a participant in the virtual space. When a participant connects to the server data center, a corresponding avatar may be assigned according to default settings, or the participant may be offered several pre-created candidate avatars and select one as their own. The avatar can synchronously display the position information of its corresponding participant. For example, if participant 1 selects avatar 1 and then moves left in the physical space, avatar 1 also moves left in the virtual space.
In some embodiments, the initial position information of an avatar may be determined according to preset rules. For example, each avatar may be preset with an initial position: once a participant selects an avatar, the corresponding initial position is determined. Alternatively, the participant may choose the initial position of the avatar in the virtual space themselves.
Step 420: acquire the participant's position data in the physical space, and associate the position data with the position information of the corresponding avatar in the virtual space.
In some embodiments, the participant's position data in the physical space may be acquired by connecting to positioning devices and data acquisition devices.
In some embodiments, a storage device may be set up in the server data center for each avatar. After a participant's position data is acquired, it can be stored in the storage device corresponding to that participant, and the server data center can convert the position data into the avatar's position information through the preset 3D coordinate position algorithm.
Step 430: acquire the participant's movement data in the physical space based on the participant's position data.
Movement data may refer to data related to the participant's movement in the physical space, such as the direction and distance of the movement.
In some embodiments, the participant's movement data may be determined from the direction of movement and the coordinate points before and after the movement, with the distance computed from those coordinates using a distance formula. For example, if the participant's position data indicates a move to the left, with coordinates (1, 2) before the move and (1, 3) after it, the moved distance computed from these coordinates is 1 meter, so the movement data is a 1-meter move to the left.
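The before/after-coordinate example above amounts to a displacement vector plus a Euclidean distance. A minimal Python sketch (function name is illustrative):

```python
import math

def movement(before, after):
    """Derive the displacement vector and moved distance from coordinates
    before and after a move (standard Euclidean distance formula)."""
    delta = tuple(a - b for a, b in zip(after, before))
    distance = math.sqrt(sum(d * d for d in delta))
    return delta, distance

# The example from the text: (1, 2) -> (1, 3) is a 1-meter move along +y.
delta, dist = movement((1.0, 2.0), (1.0, 3.0))
print(delta, dist)  # (0.0, 1.0) 1.0
```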
Step 440: update the initial position information based on the movement data through the preset 3D coordinate position algorithm, and determine the updated position information.
In some embodiments, specifically, the virtual space needs the spatial position information of every participant. A participant is at the initial position when entering the virtual space; when the participant undergoes a relative displacement, the movement data is uploaded to the server data center through the participant's terminal. Using the 3D coordinate position algorithm, the server data center converts the movement data into virtual-space movement information through a projection transformation matrix, updates the avatar's position, and then synchronizes the result to the other participants' terminals, so that the other participants can see the real-time movement of that participant's avatar in the virtual space.
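The upload-convert-update-broadcast cycle just described can be sketched as a small server-side handler. All class, method, and parameter names are invented for illustration, and the real-to-virtual conversion is reduced to a uniform scale standing in for the projection transformation matrix.

```python
class AvatarRegistry:
    """Hypothetical server-side store of avatar positions in the virtual space."""

    def __init__(self):
        self.positions = {}    # participant_id -> virtual-space (x, y, z)
        self.subscribers = []  # callbacks standing in for the other terminals

    def on_move(self, participant_id, real_delta, scale=1.0):
        """Apply a real-space displacement to the avatar and fan the update out.

        `scale` stands in for the 3D coordinate position algorithm's
        real-to-virtual conversion (assumed uniform here)."""
        vx, vy, vz = self.positions.get(participant_id, (0.0, 0.0, 0.0))
        dx, dy, dz = (d * scale for d in real_delta)
        updated = (vx + dx, vy + dy, vz + dz)
        self.positions[participant_id] = updated
        for notify in self.subscribers:  # synchronize to other participant terminals
            notify(participant_id, updated)
        return updated

# A participant starting at the origin moves 1 m along +y in the physical space.
registry = AvatarRegistry()
registry.on_move("participant_1", (0.0, 1.0, 0.0))
```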
Updating the position information of a participant's avatar in the virtual space based on the participant's position data in the physical space can provide participants with a more realistic, all-round, multi-level rendering effect, creating a sense of reality close to face-to-face communication and increasing the effectiveness of communication.
FIG. 5 is an exemplary flowchart of an XR-based multi-user online live broadcast method according to some embodiments of the present specification. As shown in FIG. 5, the process 500 may include the following steps.
Step 510: establish communication connections with the terminals of at least two participants. In some embodiments, step 510 may be performed by the connection module 210.
For the definitions and descriptions of participants and terminals, as well as the method for establishing a communication connection, refer to FIG. 3 and its related descriptions.
Step 520: create a virtual space, and create in the virtual space an avatar corresponding to each of the at least two participants. In some embodiments, step 520 may be performed by the positioning module 220.
For the definitions and descriptions of the virtual space and the avatars, as well as the method for creating them, refer to FIG. 4 and its related descriptions.
Step 530: determine, through the preset 3D coordinate position algorithm and based on the acquired position data of the participant in the physical space, the position information of the avatar corresponding to the participant in the virtual space. In some embodiments, step 530 may be performed by the positioning module 220.
For the definitions and descriptions of the 3D coordinate position algorithm, the position data, and the position information, refer to FIG. 3 and its related descriptions. For the method of determining and updating the position information in real time, refer to FIG. 4 and its related descriptions.
Step 540: display the avatar in the virtual space based on the avatar's position information. In some embodiments, step 540 may be performed by the positioning module 220.
In some embodiments, the created avatar may be displayed at the coordinates corresponding to its position information. When the position information changes, the display of the avatar changes in real time accordingly. For a detailed description of avatars, refer to FIG. 4 and its related descriptions.
Step 550: acquire the shared data uploaded by participants, and display the shared data in the virtual space. In some embodiments, step 550 may be performed by the download module 230 and the display module 240.
Shared data refers to the data uploaded by participants to the virtual space. Shared data comes in many forms, such as video, audio, images, and models, and may differ between application scenarios.
For example, when presenting an operating-room VR scene, the shared data may include panoramic data of the operating room (e.g., its spatial design, position and orientation, instrument placement). As another example, when presenting surgical details, the shared data may include close-up data of the procedure (e.g., the surgeon's hand operations, instrument operations, the patient's surgical site). As another example, when sharing pathological materials, the shared data may include the patient's 3D image models, pathological pictures, and videos. As another example, when presenting surgical navigation information, the shared data may include images from a surgical robot (e.g., a surgery-planning screen). As another example, when presenting the patient's vital-sign data, the shared data may include vital-sign monitoring data during the operation (e.g., blood pressure, heart rate, electrocardiogram, blood oxygen saturation). As another example, when presenting the video feed of a remote expert, the shared data may include the expert's video and audio data captured by a camera. As another example, in interactive scenarios, the shared data may include model manipulation, spatial annotation, group-chat message boards, private-chat dialog boxes, and the like.
In some embodiments, step 550 further includes creating at least one second space and/or second window in the virtual space, where each of the at least one second space and/or second window corresponds to one participant, and displaying the corresponding participant's shared data through the second space and/or second window.
A second space and/or second window refers to a space and/or window created in the virtual space for displaying shared data. In some embodiments, a second space and/or second window may be visible only to the corresponding participants, for example, a private chat window between two participants. In some embodiments, the layout of the second spaces and/or second windows may be preset by the system, or participants may drag them to reposition them.
In some embodiments, participants may create second spaces and/or second windows as needed; for example, a participant may choose to create one on the creation interface of the terminal. In some embodiments, a second space and/or second window may also be created by the system by default.
In some embodiments, different second spaces and/or second windows may correspond to different participants and display different shared data. In some embodiments, a one-to-one correspondence between second spaces and/or second windows and participants may be achieved through dynamic allocation; for example, when a participant enters the virtual space, the system automatically creates the corresponding second space and/or second window for that participant. As another example, a second space and/or second window created by a participant corresponds to that participant.
In some embodiments, a participant may upload the to-be-shared data received or stored by the terminal to the server of the system, and other participants may download the shared data from the server as needed. For details about the data-sharing method, refer to FIG. 3 and its related descriptions.
The XR-based multi-user online live broadcast method enables participants to perform immersive, first-person interactive operations in the virtual space, increases participants' learning interest and skill proficiency, and resolves problems such as physical venue and headcount limits preventing the best guidance effect. It can also visually display the data shared by participants in the virtual space, facilitating information synchronization among participants and thereby improving the efficiency and effect of discussion, guidance, and the like.
FIG. 6 is an exemplary flowchart of updating position information in real time according to some embodiments of the present specification. In some embodiments, the process 600 may be performed by the positioning module 220.
In some embodiments, a participant terminal can scan the physical space where the participant is located to spatially locate the participant; for a participant whose scan is complete, the participant terminal can determine the participant's real-time position data 610 in the physical space.
In some embodiments, the participant terminal can scan the physical space in multiple ways; for example, the participant can hold the terminal and scan the surroundings of the physical space with the terminal's depth camera.
As another example, the participant terminal can obtain anchor points in the physical space and perform multi-point spatial scanning of particular planes in the physical space; if the anchor points of the physical space remain successfully positioned, the spatial positioning succeeds, and if the scan fails, the terminal reminds the participant that the spatial scan is incomplete.
In some embodiments, the participant terminal can determine the participant's real-time position data 610 in the physical space in multiple ways. For example, the terminal can draw the spatial outline after the scan of the physical space is complete and determine the participant's real-time position data 610 from the participant's position relative to spatial reference objects. As another example, the terminal can obtain the participant's real-time position data 610 directly through positioning methods such as GPS.
In some embodiments, the participant terminal may determine the participant's first movement information 620 in the physical space based on the real-time position data 610.
The first movement information 620 refers to the information produced by the participant's movement in the physical space, and may include the direction, distance, height, and other aspects of the movement.
In some embodiments, the first movement information 620 can be obtained in multiple ways. For example, it can be determined from the participant's real-time position data in the physical space: when the real-time position data changes, the participant terminal can compute the corresponding first movement information from the changed data.
As another example, it can be determined through the participant's anchor positioning in the physical space, that is, from the movement information of the anchor points. For more content about acquiring the first movement information, refer to FIG. 4 and its related descriptions.
In some embodiments, the server may determine the avatar's initial position information 660 in the virtual space.
For example, after scanning the physical space where the participant is located, the participant terminal can determine the participant's current position data in the physical space, and the server can acquire this position data and map it to the avatar's initial position information in the virtual space. As another example, the initial position information may be preset by the server. For more description of determining the avatar's initial position information in the virtual space, refer to FIG. 4 and its related descriptions.
In some embodiments, the participant terminal can acquire the participant's first action information 630 in the physical space; the first action information 630 includes sub-action information of each part of the participant's body.
The first action information 630 refers to the body action information produced by the participant in the physical space. It may include limb action information (e.g., stretching the arms, shaking the body, walking, squatting), facial expression information (e.g., blinking, opening the mouth), and the like. In some embodiments, the first action information 630 includes sub-action information of each part of the participant's body.
Sub-action information refers to the specific action information of each part of the participant's body, for example, the leg action information and arm action information within a running action. In some embodiments, the participant terminal may divide the participant's action into sub-actions of multiple body parts and then acquire the sub-action information of each body part.
In some embodiments, when a participant performs an action, at least one body part participates in or constitutes that action. For example, when a participant runs, the feet, legs, arms, and other parts all produce corresponding movements, among which the feet and legs can serve as the core parts of the running action. The participant's core parts can differ across scenarios and actions. For more descriptions of scenarios and core parts, refer to FIG. 6 and its related descriptions.
In some embodiments, the first action information and the sub-action information can be obtained in multiple ways, for example, through cameras, wearable devices, sensors, and other devices. Specifically, a camera can capture the participant's real-time images, from which the changes of each body part can be derived to obtain the first action information and sub-action information; a wearable device can fix displacement sensors, angle sensors, and similar components at the corresponding joints to capture the changes of each body part and convert them into the first action information and sub-action information.
In some embodiments, the participant terminal can, through the preset 3D coordinate position algorithm, synchronously update the second movement information 670 and/or second action information 680 of the avatar corresponding to the participant based on the first movement information 620 and/or the first action information 630.
In some embodiments, synchronously updating the avatar's second movement information and/or second action information based on the first movement information and/or first action information may cover cases such as: updating the second movement information based on the first movement information; updating the second movement information and second action information based on the first movement information; updating the second action information based on the first action information; updating the second movement information and second action information based on the first action information; and updating the second movement information and second action information based on both the first movement information and the first action information.
In some embodiments, the participant terminal can perform this synchronous update in multiple ways. For example, the participant terminal can scan the physical space in real time, acquire the participant's first movement information and/or first action information and transmit it to the server, which then performs coordinate conversion through the preset 3D coordinate position algorithm and merges the participant's first movement information and/or first action information with the avatar's data, thereby obtaining the corresponding second movement information and/or second action information of the avatar.
例如,参与者终端获取参与者在实际空间内向正前方移动两米的距离的第一移动信息,以及以70厘米步幅步行移动并伴随双臂下垂15°摆动的第一动作信息,则可以通过预设3D坐标位置算法做坐标转换,将上述数据与虚拟人物合并,得到虚拟人物在虚拟空间内,也向正前方移动两米的距离的第二移动信息,以及以70厘米步幅步行移动并伴随双臂下垂15°摆动的第二动作信息。For example, if the participant's terminal obtains the first movement information of the participant moving a distance of two meters straight ahead in the actual space, and the first movement information of walking with a 70-centimeter stride and swinging with both arms drooping at 15°, it can pass The preset 3D coordinate position algorithm performs coordinate conversion, merges the above data with the avatar, and obtains the second movement information that the avatar also moves two meters straight ahead in the virtual space, and moves on foot at a pace of 70 centimeters. The second action information accompanied by the 15° swing of the arms drooping.
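The preset 3D coordinate position algorithm is not specified in this specification. As a minimal sketch, assuming the real-to-virtual conversion reduces to a rotation about the vertical axis plus uniform scaling and a translation to the avatar's virtual origin, mapping a real-space displacement into the virtual space could look like the following (the function name and parameters are illustrative assumptions):

```python
import numpy as np

def real_to_virtual(real_displacement, rotation_deg, scale, virtual_origin):
    """Map a displacement measured in the real space into the virtual space.

    Hypothetical transform: a rotation about the vertical (z) axis,
    followed by uniform scaling and a translation to the avatar's origin.
    """
    theta = np.radians(rotation_deg)
    # Rotation about z: x/y span the ground plane, z is height.
    rot = np.array([
        [np.cos(theta), -np.sin(theta), 0.0],
        [np.sin(theta),  np.cos(theta), 0.0],
        [0.0,            0.0,           1.0],
    ])
    return virtual_origin + scale * rot @ np.asarray(real_displacement, dtype=float)

# A participant moves two meters straight ahead (+y) in the real space;
# with identity rotation/scale the avatar moves two meters ahead as well.
new_pos = real_to_virtual([0.0, 2.0, 0.0], rotation_deg=0.0,
                          scale=1.0, virtual_origin=np.zeros(3))
```

In practice the rotation, scale, and origin would come from the calibration between the scanned actual space and the virtual scene.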
In some embodiments, the server may determine at least one core body part of the participant based on the current scene; determine, based on the at least one core body part, the display priority 640 of the sub-action information of each part of the participant's body; determine, based on the display priority of the sub-action information, the display parameters 650 of the action information, the display parameters 650 including display frequency and display precision; and synchronize the second action information 680 of the avatar corresponding to the participant based on the display parameters.
The current scene refers to the scene in the current virtual space, such as an academic conference or a remote consultation. For more description of scenes, refer to FIG. 1 and its related content.
A core body part refers to the body part most important to the participant's actions. For example, in a surgical guidance scene, the core part of a doctor performing an operation may be the hands; as another example, during an interrogation, the core part of the person being interrogated may be the face.
In some embodiments, the core body part may be determined in various ways. For example, a lookup table of the core body parts corresponding to different stages of different scenes may be preset, and the core body part in the current scene may be determined based on the preset lookup table. As another example, the core body part may also be determined according to how long a part keeps moving; a part that keeps moving for a long time may be regarded as carrying the participant's current main action and thus as a core body part.
The display priority 640 refers to the display priority of the sub-action information of each part of the participant's body. The display priority 640 may be expressed as a ranking or a level. For example, a value from 1 to 10 may reflect the display priority ranking: the smaller the value, the higher the ranking, indicating that the corresponding sub-action information should be displayed first. As another example, a value from 1 to 10 may also reflect the level of the display priority: the larger the value, the higher the level, and the corresponding sub-action information should be displayed first.
In some embodiments, the server may preset the display priorities of the various body parts in different scenes, and in practical applications the sub-action information may be displayed based on the preset display priority lookup table. For example, it may be preset that in a live-surgery scene the display priority of hand sub-action information is the highest, the arms come next, and the legs are the lowest; then during an actual live surgery broadcast, the doctor's actions can be displayed based on the above information in the preset priority lookup table. In some embodiments, the display priority may also be determined based on the scene information and the action information of the various body parts. For details on determining the display priority, refer to FIG. 7 and its description.
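The preset priority lookup table described above could be sketched as follows, using the smaller-value-ranks-first convention. The scene names, part names, and numeric values are illustrative assumptions, not values from this specification:

```python
# Hypothetical preset display-priority lookup table: scene -> body part -> rank.
# Smaller rank means the sub-action information is displayed first.
PRIORITY_TABLE = {
    "surgery_live": {"hand": 1, "arm": 2, "leg": 3},
    "interrogation": {"face": 1, "hand": 2, "leg": 3},
}

def display_priority(scene, body_part, default=10):
    """Return the preset display priority of a body part for a scene.

    Unknown scenes or parts fall back to a low default priority.
    """
    return PRIORITY_TABLE.get(scene, {}).get(body_part, default)
```

For example, in the live-surgery scene `display_priority("surgery_live", "hand")` returns a smaller (higher-ranked) value than `display_priority("surgery_live", "leg")`.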
The display parameters 650 refer to parameters related to the display of sub-actions. For example, the display parameters may include the display frequency and display precision of an action.
The display frequency refers to the update frequency of a sub-action. For example, display frequency ranges may be set with 30-60 Hz as low display frequency, 60-90 Hz as medium display frequency, and 90-120 Hz as high display frequency. In some embodiments, the display frequency may be a fixed numerical option, or may vary freely within a display frequency range. In some embodiments, the display frequency may be preset by the server, or may be determined based on the display priority of the sub-action information; for example, the higher the display priority, the higher the display frequency.
The display precision refers to the display precision of a sub-action, which may be expressed in pixels. For example, 1280*720 pixels may serve as smooth display precision, 1920*1080 pixels as standard display precision, and 2560*1440 pixels and above as high-definition display precision. In some embodiments, the display precision may be preset by the server, or may be determined based on the display priority of the sub-action information; for example, the higher the display priority, the higher the display precision.
In some embodiments, the display parameters may be determined based on the display priority of the sub-action information, and sub-actions with higher priority may be displayed at higher frequency and precision. For example, during surgery, changes in the doctor's hand movements may be displayed at higher display frequency and precision, while actions of other parts, such as swaying of the body, may be displayed at a lower display frequency. As another example, during an interrogation, changes in the facial expression of the person being interrogated may be displayed at higher display frequency and precision.
In some embodiments, the server may preset parameter tables corresponding to different display priorities. For example, the first display priority may correspond to a display frequency of 120 Hz and a display precision of 2560*1440 pixels, and the second display priority to a display frequency of 90 Hz and a display precision of 1920*1080 pixels. In some embodiments, the display parameters may also be set by the participants themselves.
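A hypothetical priority-to-parameter table mirroring the example values in this paragraph (120 Hz with 2560*1440 pixels for the first priority, 90 Hz with 1920*1080 pixels for the second) might be:

```python
# Hypothetical table mapping a display priority level to display parameters.
# The values for levels 1 and 2 follow the examples in the text; level 3 and
# the fallback are illustrative assumptions.
PARAMETER_TABLE = {
    1: {"frequency_hz": 120, "precision": (2560, 1440)},  # high-definition
    2: {"frequency_hz": 90,  "precision": (1920, 1080)},  # standard
    3: {"frequency_hz": 60,  "precision": (1280, 720)},   # smooth
}

def display_parameters(priority):
    """Look up the display frequency and precision for a priority level,
    falling back to the lowest tier for unknown priorities."""
    return PARAMETER_TABLE.get(priority, PARAMETER_TABLE[3])
```

The server could consult such a table once per body part and then drive the per-part update loop from the resulting parameters.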
In some embodiments, after the display parameters of each part of the participant are determined based on the display priority of the sub-action information, the server may obtain the display parameter data of each part and then synchronously update the second action information of the avatar according to the display parameters of the different parts. For example, the second action information of each part may be collected and updated according to the display frequencies of the different parts (e.g., for the same participant, a hand display frequency of 120 Hz and a leg display frequency of 60 Hz); as another example, the second action information may be displayed according to the display precisions of the different parts (e.g., for the above participant, a hand display precision of 2560*1440 pixels and a leg display precision of 1280*720 pixels).
In some embodiments, the position information may be updated based on the second movement information through the preset 3D coordinate position algorithm. For a detailed description, refer to FIG. 4 and its related content.
By determining the display parameters according to the display priority to synchronize the second action information, with more important action changes displayed at high frequency and high precision and unimportant action changes displayed at lower frequency and precision, server resources can be effectively saved while the display quality of the actions is maintained.
According to the position update method shown in FIG. 6, the action information and movement information of participants in the actual space can be accurately mapped into the virtual space in real time, so that remote participants can watch and understand the operation details of operators in the actual space in real time, thereby providing real-time guidance, avoiding interference caused by unnecessary actions, and enhancing the participants' immersive experience.
FIG. 7 is an exemplary schematic diagram of determining the display priority of sub-action information according to some embodiments of this specification. In some embodiments, process 700 may be performed by the positioning module 220.
In some embodiments, the display priority of the sub-action information may be determined based on a processing model.
In some embodiments, the processing model may be used to determine the display priority of the sub-action information. The processing model may be a machine learning model; for example, the processing model may include a convolutional neural network (CNN) model or a deep neural network (DNN) model.
In step 710, the action trajectory of each body part and the action feature vector of each body part may be determined from action images through the convolutional neural network model.
Action images may refer to images of the sub-actions of the participant's various body parts, obtained through the data acquisition devices in the actual space where the participant is located. For example, an action image may be an action video or picture of participant A captured by a panoramic camera.
In some embodiments, the convolutional neural network model may be used to process at least one action image to determine at least one action trajectory and action feature vector corresponding to the action image.
An action trajectory may refer to the movement track of a participant's body part. The trajectory may be represented by a sequence or matrix of the position coordinates of the corresponding body part at consecutive time points, where each sequence or matrix element may represent the position coordinates of the center of a body part at the corresponding moment. For example, an action trajectory sequence may be ((1,0), (1,1), (1,2)), where (1,0), (1,1), and (1,2) are the position coordinates of participant A's right hand at three consecutive time points.
An action feature vector may refer to the feature vector of the actions of a body part. The elements of the action feature vector may include the part name, the importance of the part's actions in each scene, how frequently the part moves, and the like. Multiple part names may be obtained from the action images. Different degrees of importance may be preset for the actions of each part according to the scene. The frequency of a part's movement may be expressed as the number of movements within a preset time period. For example, in a training course, both the teacher's finger movements and facial movements may be assigned high importance. Merely as an example, an action feature vector may be (1, 40, 3), where 1 may represent the hand, 40 may represent the importance of the hand to the current scene, and 3 may indicate that the hand has moved three times.
In some embodiments, the deep neural network model may be used to process the action trajectories, the action feature vectors, and the scene information to determine the display priority of the sub-action information.
The scene information may be represented in various forms, for example, as a vector. According to a preset correspondence between scenes and numbers and/or letters, an element of the scene information may correspond to a scene. For example, the 1 in scene vector (1) may represent a training scene.
In step 720, the display priority of the sub-action information may be determined through the deep neural network model based on the scene information, the action trajectory of each body part, and the action feature vector of each body part.
In some embodiments, the deep neural network model may be used to process at least one action trajectory corresponding to the action images, the action feature vectors, and the scene information to determine the display priority of the sub-action information. For details about the display priority of the sub-action information, refer to the descriptions elsewhere in this specification, for example, FIG. 6.
In some embodiments, the processing model may be obtained through joint training of the convolutional neural network model and the deep neural network model. For example, training samples, i.e., historical action images, are input into an initial convolutional neural network model to obtain at least one historical action trajectory and historical action feature vector corresponding to the historical action images; then the output of the initial convolutional neural network model, together with the historical scene information corresponding to the historical action images, is used as the input of an initial deep neural network model. During training, a loss function is established based on the labels of the training samples and the output of the initial deep neural network model, and the parameters of the initial convolutional neural network model and the initial deep neural network model are iteratively updated at the same time based on the loss function, until a preset condition is satisfied and training is complete. After training, the parameters of the convolutional neural network model and the deep neural network model in the processing model are also determined.
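The joint training described above can be sketched with a toy example in which a linear "feature extractor" stands in for the CNN and a linear "priority head" stands in for the DNN; the key point is that a single loss on the final output drives simultaneous gradient updates into both stages. All shapes and values below are illustrative assumptions, not the actual models:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 6))            # stand-in for historical action images
w_true = rng.normal(size=(6, 1))
y = X @ w_true                          # stand-in for labeled display priorities

W1 = rng.normal(scale=0.1, size=(6, 4))   # extractor ("CNN") parameters
W2 = rng.normal(scale=0.1, size=(4, 1))   # head ("DNN") parameters
lr = 0.05

def loss(W1, W2):
    return float(np.mean((X @ W1 @ W2 - y) ** 2))

before = loss(W1, W2)
for _ in range(200):
    h = X @ W1                          # extractor output (trajectories/features)
    err = h @ W2 - y                    # head output minus label
    # One loss function yields gradients for BOTH stages at once,
    # mirroring the simultaneous iterative update in the text.
    gW2 = 2 * h.T @ err / len(X)
    gW1 = 2 * X.T @ (err @ W2.T) / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2
after = loss(W1, W2)
```

After training, the parameters of both stages are fixed together, just as the text notes that the CNN and DNN parameters are determined once joint training completes.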
In some embodiments, the training samples may be obtained based on historical action images collected by the data acquisition devices and their corresponding historical scene information. The labels of the training samples may be the historical display priorities of the corresponding sub-action information. The labels may be annotated manually.
Determining the display priority of the sub-action information through a machine learning model can increase the speed of determining the display priority and can also improve its accuracy.
In some embodiments, the display priority of the sub-action information may be determined through a vector database. Specifically, a scene action vector may be constructed based on the scene information and the sub-action information of each part of the participant's body; then, a reference vector is retrieved from the vector database based on the scene action vector, and the display priority of the sub-action information corresponding to the reference vector is used as the current priority.
The sub-action information may be represented by a sub-action information vector. Elements of the sub-action information vector may represent body part names and the corresponding actions. Different actions may be represented by different numbers or letters. For example, a sub-action information vector may be (1, 2), where 1 represents the hand and 2 indicates that the hand action is a clenched fist. In some embodiments, the scene information and the sub-action information may be combined to determine the scene action vector. The scene action vector may be a multi-dimensional vector. For example, in scene action vector (a, b), a may represent a consultation scene, and b may represent the sub-action information vector.
In some implementations, the scene action vector may be obtained through an embedding layer. The embedding layer may be a machine learning model; for example, the embedding layer may be a recurrent neural network (RNN) model or the like. The input of the embedding layer may be the scene information and the sub-action information of each part of the participant's body, and the output may be the scene action vector.
A vector database may refer to a database containing historical scene action vectors. In some embodiments, the preset database includes the historical scene action vectors and the display priorities of the sub-action information corresponding to them.
A reference vector may refer to a historical scene action vector whose similarity to the scene action vector exceeds a preset threshold. For example, if the preset threshold is 80% and the similarity between historical scene action vector 1 in the vector database and the scene action vector is 90%, then historical scene action vector 1 is a reference vector. In some embodiments, the reference vector may be the historical scene action vector with the highest similarity to the scene action vector.
The similarity to the scene action vector may be determined based on the vector distance between the scene action vector and a historical scene action vector. Vector distances may include the Manhattan distance, Euclidean distance, Chebyshev distance, cosine distance, Mahalanobis distance, etc., and the similarity can be computed by substituting the values into the formula corresponding to the chosen distance type.
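As a minimal sketch of the retrieval step, using cosine similarity as the distance type: the 80% threshold follows the example in the text, while the database contents and vector values are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two vectors, in [-1, 1]."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_reference(query, database, threshold=0.8):
    """Return (key, priority) of the most similar historical scene action
    vector whose similarity meets the threshold, or None if no match."""
    best, best_sim = None, threshold
    for key, (vec, priority) in database.items():
        sim = cosine_similarity(query, vec)
        if sim >= best_sim:
            best, best_sim = (key, priority), sim
    return best

# Hypothetical vector database: key -> (historical scene action vector,
# display priority of the corresponding sub-action information).
db = {
    "hist_1": ([1.0, 2.0, 0.5], 1),
    "hist_2": ([0.1, 0.1, 5.0], 3),
}
match = retrieve_reference([1.0, 2.1, 0.4], db)   # ("hist_1", 1)
```

The display priority attached to the best-matching historical vector is then used as the current priority, as described above.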
In some embodiments, the embedding layer may be obtained through joint training with the deep neural network model. Training samples are input into an initial embedding layer to obtain scene action vectors; the output of the initial embedding layer is then used as the input of the initial deep neural network model. During training, a loss function is established based on the labels and the output of the initial deep neural network model, and the parameters of the initial embedding layer and the initial deep neural network model are iteratively updated at the same time based on the loss function, until a preset condition is satisfied and training is complete. After training, the parameters of the embedding layer and the deep neural network model are also determined.
In some embodiments, the training samples may be historical scene information and historical sub-action information of each part of the participant's body. The labels of the training samples may be the historical display priorities of the corresponding sub-action information. The labels may be annotated manually.
Obtaining the parameters of the embedding layer through the above training method helps, in some cases, to solve the problem that labels are difficult to obtain when the embedding layer is trained alone, and also enables the embedding layer to produce scene action vectors that better reflect the scene information and the sub-action information.
Presetting the vector database based on historical data and then determining the display priority of the sub-action information can make the determined display priority better conform to actual situations.
FIG. 8 is a flowchart of an exemplary process 800 of a data processing method for XR according to some embodiments of this specification. Process 800 may be performed by the display module 240 and the generation module 250.
In step 810, in response to a request from an annotation requester, a canvas is created in the virtual space.
The annotation requester refers to a participant who makes an annotation request.
In some embodiments, the annotation requester may send an annotation request from the terminal, which is received by the server.
The canvas refers to the canvas in the virtual scene on which the content to be marked is displayed. The canvas may take various forms, for example, a three-dimensional canvas.
In some embodiments, after receiving the annotation requester's request, the server may create a default canvas in the virtual space. In some embodiments, the annotation requester may change the size and shape of the canvas through manual operations or preset options; in some embodiments, the annotation requester may also drag the canvas to move it as needed.
In step 820, the content to be marked is displayed on the canvas, where the content to be marked is data that has been annotated and/or original data that has not been annotated.
In some embodiments, the content to be marked may come from various sources. For example, the content to be marked may be data prepared by a participant in advance. As another example, the content to be marked may also include shared data corresponding to the scene, for example, a patient's three-dimensional image model, pathological pictures, and videos when pathological data is being shared. The shared data differs across scenes, and so does the content to be marked; for more description of the content to be marked in different scenes, refer to the related content of FIG. 5.
Annotated data refers to data with historical annotations. In some embodiments, annotated data may be annotated further. In some embodiments, when data is annotated a second time, whether to display the historical annotations may be selected.
Unannotated original data refers to data without historical annotations, for example, original data downloaded by a participant from the server, or real-time data in a live broadcast.
In some embodiments, the content to be marked is the content displayed in any window, at any position, among the multiple windows on the annotation requester's terminal.
In some embodiments, different display methods may be used for different content to be marked. For example, pictures may be displayed statically, while videos may be displayed dynamically from a video source. In some embodiments, different display methods may also be used for different terminals; for example, a VR device may render a 3D display through different images for the two eyes and an appropriate interpupillary distance, a mobile device may display on its screen, and a computer may display through its monitor.
In step 830, the marking information created by the annotation requester on the canvas using a ray interaction system is obtained, where the marking information includes marked content and a marking path.
Marking information refers to the information generated by annotating the content to be marked. In some embodiments, the marking information may also include the marking time, the marking position information corresponding to the marking time, and the like. For example, the marking time is nine o'clock in the morning, and at that time the mark is at the position with virtual-space coordinates (20, 30, 40).
The marked content refers to the specific content of the annotation made on the content to be marked; the marked content may include content drawn with a brush, inserted pictures, resizing operations, and the like.
The marking path refers to the stroke path of a mark. For example, if the annotation requester marks the character "人" on the canvas, the marking path consists of the left-falling and right-falling strokes of the character "人".
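Gathering the fields named above, one piece of marking information could be represented by a record like the following; the field names and types are assumptions for illustration, not a structure defined in this specification:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MarkingInfo:
    """Hypothetical record for one mark created via the ray interaction system."""
    content: str                        # e.g. a brush stroke or inserted picture id
    path: list                          # stroke path as a list of (x, y, z) points
    timestamp: datetime = field(default_factory=datetime.now)   # marking time
    position: tuple = (0.0, 0.0, 0.0)   # virtual-space coordinates at marking time

mark = MarkingInfo(content="brush:red",
                   path=[(20, 30, 40), (21, 31, 40)],
                   position=(20, 30, 40))
```

Such records could be what the terminal saves locally in real time and uploads to the server for sharing in step 840.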
The ray interaction system refers to the system used for marking. In some embodiments, through the ray interaction system, a participant may point a ray at the content to be marked displayed on the canvas and make marks through gesture operations such as touching and pressing.
In some embodiments, the terminal may automatically save the annotation requester's annotation information locally in real time. In some embodiments, the annotation requester may actively choose to save the annotation information through operations such as touching the canvas or clicking a button.
In step 840, the content to be marked and the marking information are shared to the terminals of other participants for display.
In some embodiments, the terminal may collect the marking information of the corresponding participant and upload it to the server; the server then sends it to the other participants' terminals and creates display windows on those terminals, thereby sharing the content to be marked and the marking information to the other participants' terminals for display.
In some embodiments, the content to be marked and its marking information may also be shared to the other participants' terminals for display based on display settings. Displaying based on display settings enables personalized display that meets the participants' needs; for details, refer to FIG. 9 and its related content.
The data processing method for XR shown in FIG. 8 enables real-time annotation of shared data. Meanwhile, participants may also insert illustrations, change the brush color, resize, undo, and clear on the canvas, and save the results of these operations locally for future reference, summary, and comparison. In addition, the annotation information may be shared with other participants, facilitating discussion of complex problems among participants.
FIG. 9 is a schematic diagram of an exemplary process 900 of displaying the content to be marked according to some embodiments of this specification. Process 900 may be performed by the display module 240.
In step 910, the display settings of the annotation requester are obtained, the display settings including real-time marking display and display after marking is completed.
Display settings refer to settings related to displaying the content to be marked and the marking information. The display settings may also include the 3D position information of the display window and the size, color, precision, and other attributes of the display image.
Real-time marking display means that the marking process of the annotation requester is synchronized to the other participants in real time, i.e., the other participants can see the creation process of the marking information. Display after marking is completed means that the final result after marking is shared with the other participants, i.e., the other participants can obtain the completed marking result but cannot see the creation process of the marking information.
In some embodiments, the display settings may be determined by the terminal by default; for example, the terminal defaults to display after marking is completed. In some embodiments, the display settings may also be determined by the participant's selection among the options in the terminal's display settings window; for example, the participant may check the option of real-time marking display through operations such as clicking or touching. In some embodiments, the terminal may record the participant's display settings and transmit the settings data to the server.
步骤920,基于展示设置,将待标记内容及其标记信息分享至其他参与者的终端进行展示。
在一些实施例中,服务器可以根据展示设置,将待标记内容及其标记信息分享至其他参与者的终端进行展示。例如,服务器可以获取参与者选择的展示设置为标记展示,还可以获取参与者的待标记内容及其标记信息,进而可以根据展示设置,将待标记内容及其标记信息的实时数据传送给其他参与者的终端进行实时展示。In some embodiments, the server may share the content to be marked and its marking information to terminals of other participants for display according to the display settings. For example, the server can obtain the display setting selected by the participant as a marked display, and can also obtain the content to be marked and its marking information of the participant, and then transmit the real-time data of the content to be marked and its marking information to other participants according to the display settings real-time display on the terminal of the user.
In some embodiments, how the content is displayed may differ by scenario. For example, during surgical guidance, to avoid interfering with the operation, the display of the content to be marked and its marking information should keep clear of the patient's surgical site, equipment screens, and similar locations. As another example, in a training session a display window may be created in front of each participant so that everyone can clearly see the content to be marked and its markings, whereas in an academic lecture a single large display window may suffice.
In some embodiments, sharing the content to be marked and its marking information to the other participants' terminals for display includes: determining each participant's view-angle information based on that participant's position information; and, based on each participant's view-angle information, determining and displaying that participant's display content, where the display content includes the content to be marked and/or the marking information as seen from that view angle.
View-angle information describes a participant's viewpoint relative to the content to be marked and its marking information, and may include azimuth, angle, height, distance, and similar parameters. Participants in different positions have different view-angle information. For example, for the same display model, a participant at the front right of the model mainly sees part of its right view and part of its front view, while a participant above and to the left of the model mainly sees part of its left view and part of its top view.
In some embodiments, the server may compare the participant's position information, obtained via the terminal, with the display position of the content to be marked and its marking information, determine the relative position between the participant and the display position, and thereby determine the view-angle information. For example, a three-dimensional coordinate system (x, y, z) may be constructed in the virtual space. Suppose a three-dimensional image such as a model is displayed, a participant stands facing the y direction at position (1, 1, 1), and the display model's position (e.g., its center) is (1, 2, 2). The relative position between the model and the participant can then be computed from the two coordinates; from this viewpoint the participant sees part of the model's front view and information on its underside, and the server can determine the specific view-angle information algorithmically. As another example, if another participant is located at (1, 0, 1), that participant's distance to the display model is greater than that of the participant at (1, 1, 1), so the model appears at a correspondingly smaller scale to the farther participant.
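The coordinate comparison described above can be sketched as follows. This is an illustrative calculation only, not the patent's actual algorithm; the function name and the azimuth/elevation convention (participant facing +y) are assumptions chosen to match the worked example in the text.

```python
import math

def view_info(participant, target):
    """Hypothetical sketch: derive view-angle information (relative
    vector, distance, azimuth, elevation) from a participant position
    and a display target position in virtual-space coordinates.
    Azimuth is measured in the x-y plane with 0 deg = facing +y."""
    dx, dy, dz = (t - p for p, t in zip(participant, target))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.degrees(math.atan2(dx, dy))
    elevation = math.degrees(math.asin(dz / dist)) if dist else 0.0
    return {"vector": (dx, dy, dz), "distance": dist,
            "azimuth": azimuth, "elevation": elevation}

# Participant at (1, 1, 1) facing +y, model center at (1, 2, 2):
near = view_info((1, 1, 1), (1, 2, 2))
# Second participant at (1, 0, 1) is farther from the same model,
# so the model would be rendered at a smaller apparent scale
# (e.g., scale proportional to 1 / distance).
far = view_info((1, 0, 1), (1, 2, 2))
```

Here the farther participant's `distance` exceeds the nearer one's, which is the basis for showing the model at a smaller scale, as the text describes.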
In some embodiments, the display content visible to each participant may be computed from that participant's view-angle information and then displayed. For example, if the view-angle information indicates that a participant is directly to the right of the displayed 3D model, it is determined that the participant sees the model's right view, and the right view is displayed to that participant. In some embodiments, what a participant sees also depends on distance; for example, a distant participant may see the display content at a smaller scale than a nearby one.
Displaying the content to be marked and its marking information to other participants according to the display settings facilitates discussion of complex problems: different display settings can be applied to different content to achieve the best display effect, improving discussion and guidance, and the settings can be personalized to meet participants' needs. Meanwhile, determining the display content from each participant's view-angle information provides a more realistic, all-around, multi-level rendered display and enhances the participants' immersive experience.
Fig. 10 is a schematic diagram of an exemplary process 1000 for determining predicted display content according to some embodiments of this specification. In some embodiments, process 1000 may be executed by the display module 240.
Step 1010: Predict the participant's future motion trajectory.
The future motion trajectory is the participant's trajectory after the current time. A trajectory may include time information and the corresponding position information. In some embodiments, the length of the time period covered by the future trajectory may be set as needed. In some embodiments, that time period may start at the current time or may be offset from it by some interval. For example, the future trajectory may cover the next 5 seconds or the next 10 minutes; alternatively, it may cover the 5 seconds starting at the current time, or a 10-minute window beginning 5 minutes after the current time (i.e., 5-15 minutes from now).
In some embodiments, the participant's future trajectory may be predicted from the scene and the participant's current sub-action information. For example, in a live-surgery scene, if the doctor picks up an instrument or suture from the surgical suture kit, it can be predicted that the doctor will next suture the surgical site. In some embodiments, the future trajectory may also be predicted from historical scenes and the time elapsed since entering the scene. For example, for a teaching scene on the same topic, if instructors in historical sessions demonstrated an action 30 minutes after entering the scene, it can be predicted that the instructor in the current session will do so at the 30-minute mark. As another example, for a surgical-guidance scene on the same topic, if trainees in historical sessions gathered around the operating table to watch the guidance 10 minutes after entering, it can be predicted that trainees in the current session will move to the bedside at the 10-minute mark.
In some embodiments, predicting the participant's future trajectory may be implemented with a prediction model. The prediction model may be structured as a recurrent neural network; its input is the participant's positioning-data sequence over a preset historical time period ending at the current time, and its output is the predicted position-data sequence for a preset future time period.
The preset historical time period is a time period ending at the current time. In some embodiments, it may run from when the participant entered the scene to the current time; for example, if the participant entered the virtual scene at nine o'clock and the current time is ten o'clock, the preset historical time period is nine to ten. In some embodiments, it may instead run from the start of a particular action to the current time; for example, if the participant performs a squat, the period from the moment the squat begins to the current time is the preset historical time period.
In some embodiments, a preset historical time period may contain multiple time points. In some embodiments, each time point carries the information of a single instant. For example, with a sampling interval of 1 second and a preset historical period of the past 2 minutes, the period contains 120 time points; with a sampling interval of 1 minute and a period of the past hour, it contains 60 time points.
In some embodiments, a time point may instead correspond to a sub-period. For example, a preset historical period from nine to ten o'clock may be divided into three sub-periods: nine to nine twenty-five as the first, nine twenty-five to nine forty-five as the second, and nine forty-five to ten as the third, so that the preset period contains three time points. In some embodiments, the sampling interval and the sub-period length may be preset by the server or set by the participants themselves.
The positioning-data sequence is the sequence of the participant's positions in the virtual space, and it reflects the participant's movement over the preset historical time period. Each element of the sequence is the participant's position data at one time point. For example, in the sequence ((1, 1, 1), (2, 2, 2), (1, 2, 1), (1, 2, 2)), if each coordinate corresponds to a single time point at one-second intervals, then (1, 2, 1) is the participant's position in the virtual space at the third second of the preset period. Alternatively, if each coordinate corresponds to a sub-period, then (1, 2, 1) is the participant's position during the third sub-period.
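To illustrate how such a positioning-data sequence might be consumed, the sketch below extrapolates future positions with a constant-velocity baseline. This is a deliberately simple, hypothetical stand-in for the recurrent-network predictor the text describes, not the patent's actual model; the function name and the choice of last-step velocity are assumptions.

```python
def extrapolate(track, n_future):
    """Toy trajectory predictor: continue the positioning-data
    sequence for n_future steps using the velocity between the last
    two recorded positions (a naive baseline, not the RNN itself)."""
    (x0, y0, z0), (x1, y1, z1) = track[-2], track[-1]
    vx, vy, vz = x1 - x0, y1 - y0, z1 - z0  # last-step velocity
    preds, cur = [], (x1, y1, z1)
    for _ in range(n_future):
        cur = (cur[0] + vx, cur[1] + vy, cur[2] + vz)
        preds.append(cur)
    return preds

# Positioning-data sequence from the example in the text:
history = [(1, 1, 1), (2, 2, 2), (1, 2, 1), (1, 2, 2)]
future = extrapolate(history, 2)
```

A trained recurrent model would replace this baseline, mapping the whole historical sequence to a predicted position sequence rather than extrapolating only the last step.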
In some embodiments, the prediction model may be trained on multiple groups of labeled training samples. A training sample may be a participant's historical positioning-data sequence over a preset historical time period ending at some past reference time, drawn from historical data stored on the server. The label of a training sample is the participant's actual position-data sequence over the corresponding preset future time period; labels may be obtained by manual annotation.
In some embodiments, a loss function is constructed from the labels and the outputs of the initial prediction model, and the model parameters are iteratively updated by gradient descent or other methods based on the loss function. Training completes when a preset condition is met, yielding the trained prediction model. The preset condition may be, for example, that the loss function converges or that the number of iterations reaches a threshold.
Step 1020: Based on the future motion trajectory, determine the view-angle information at a future time point.
A future time point is a moment after the current time that falls within the time period covered by the future motion trajectory.
View-angle information describes the participant's viewpoint relative to the content to be marked and its marking information. For more on view-angle information, see Fig. 9 and its related description.
In some embodiments, the participant's position at a future time point can be determined from the future motion trajectory; comparing that position with the position of the display window for the content to be marked and its marking information yields the relative position between the participant and the display window, and thus the view-angle information. For more on determining view-angle information, see the description of Fig. 9.
Step 1030: Based on the view-angle information at the future time point, determine the corresponding predicted display content.
In some embodiments, the predicted display content may include the content to be marked in the virtual scene, the marked content, its marking information, and the like. In some embodiments, once the predicted display content is determined, the portion visible to each participant can be computed from that participant's view-angle information at the future time point. For example, if the predicted display content is a marked three-dimensional heart model and its marking information, and the view-angle information indicates that the predicted display window will be 30° to the front right of the participant's predicted position at the corresponding future time point, then the participant's predicted display content is a side perspective view of the marked content and its markings at that angle and time.
In some embodiments, when the predicted display content changes in real time, acquisition of the content at the future time point can be prepared in advance based on each participant's future view-angle information. For example, if the content to be marked in the virtual scene is a doctor's surgical operation, and the operation at the future time point is a chest procedure, the camera at the position corresponding to the participant's view-angle information can be put into standby for chest filming in advance.
By predicting participants' future trajectories to predict the display content, the display content for the corresponding future time points, or its acquisition, can be prepared in advance, improving loading speed and optimizing the participants' experience.
It should be noted that different embodiments may yield different beneficial effects; in different embodiments, the beneficial effects may be any one or a combination of the above, or any other beneficial effect that may be obtained.
The basic concepts have been described above. Obviously, for those skilled in the art, the above detailed disclosure is only an example and does not limit this specification. Although not expressly stated here, those skilled in the art may make various modifications, improvements, and corrections to this specification. Such modifications, improvements, and corrections are suggested by this specification and thus remain within the spirit and scope of its exemplary embodiments.
Meanwhile, this specification uses specific terms to describe its embodiments. Terms such as "one embodiment," "an embodiment," and/or "some embodiments" refer to a feature, structure, or characteristic related to at least one embodiment of this specification. Therefore, it should be emphasized and noted that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in different places in this specification do not necessarily refer to the same embodiment. Furthermore, features, structures, or characteristics of one or more embodiments of this specification may be combined as appropriate.
In addition, unless explicitly stated in the claims, the order of processing elements and sequences, the use of numbers and letters, or the use of other names in this specification is not intended to limit the order of its processes and methods. Although the above disclosure discusses, through various examples, some embodiments currently believed useful, it should be understood that such details are for illustration only and that the appended claims are not limited to the disclosed embodiments; on the contrary, the claims are intended to cover all modifications and equivalent combinations consistent with the substance and scope of the embodiments of this specification. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by a software-only solution, such as installing the described system on an existing server or mobile device.
Likewise, it should be noted that, to simplify the presentation of this disclosure and aid understanding of one or more embodiments, the foregoing description sometimes combines multiple features into a single embodiment, drawing, or its description. This method of disclosure does not, however, imply that the subject matter of this specification requires more features than are recited in the claims. Indeed, an embodiment may have fewer than all the features of a single embodiment disclosed above.
Some embodiments use numbers to describe quantities of components and attributes. It should be understood that such numbers are, in some examples, qualified by "about," "approximately," or "substantially." Unless otherwise stated, these qualifiers indicate that the stated number allows a variation of ±20%. Accordingly, in some embodiments the numerical parameters used in the specification and claims are approximations that may change depending on the desired characteristics of individual embodiments. In some embodiments, numerical parameters should take into account the specified significant digits and use ordinary rounding. Although the numerical ranges and parameters used to confirm the breadth of ranges in some embodiments are approximations, in specific embodiments such values are set as precisely as practicable.
Each patent, patent application, patent application publication, and other material, such as an article, book, specification, publication, or document, cited in this specification is hereby incorporated by reference in its entirety, excluding any application history documents inconsistent with or in conflict with this specification and any documents (now or later appended to this specification) that limit the broadest scope of its claims. If there is any inconsistency or conflict between the descriptions, definitions, and/or use of terms in the materials accompanying this specification and the content of this specification, the descriptions, definitions, and/or use of terms of this specification shall prevail.
Finally, it should be understood that the embodiments described in this specification are only intended to illustrate the principles of its embodiments. Other modifications may also fall within the scope of this specification. Therefore, by way of example and not limitation, alternative configurations of the embodiments may be considered consistent with the teachings of this specification. Accordingly, the embodiments of this specification are not limited to those explicitly introduced and described herein.
Claims (10)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211191561.5A CN117826976A (en) | 2022-09-28 | 2022-09-28 | XR-based multi-person collaboration method and system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211191561.5A Division CN117826976A (en) | 2022-09-28 | 2022-09-28 | XR-based multi-person collaboration method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115576427A true CN115576427A (en) | 2023-01-06 |
Family
ID=84980721
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211322357.2A Pending CN115576427A (en) | 2022-09-28 | 2022-09-28 | XR-based multi-user online live broadcast and system |
CN202211408599.3A Pending CN117111724A (en) | 2022-09-28 | 2022-09-28 | Data processing method and system for XR |
CN202211191561.5A Pending CN117826976A (en) | 2022-09-28 | 2022-09-28 | XR-based multi-person collaboration method and system |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211408599.3A Pending CN117111724A (en) | 2022-09-28 | 2022-09-28 | Data processing method and system for XR |
CN202211191561.5A Pending CN117826976A (en) | 2022-09-28 | 2022-09-28 | XR-based multi-person collaboration method and system |
Country Status (1)
Country | Link |
---|---|
CN (3) | CN115576427A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117041474A (en) * | 2023-09-07 | 2023-11-10 | 腾讯烟台新工科研究院 | Remote conference system and method based on virtual reality and artificial intelligence technology |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102263772A (en) * | 2010-05-28 | 2011-11-30 | 经典时空科技(北京)有限公司 | Virtual conference system based on three-dimensional technology |
CN109683706A (en) * | 2018-12-10 | 2019-04-26 | 中车青岛四方机车车辆股份有限公司 | A kind of method and system of the more people's interactions of virtual reality |
CN111984114A (en) * | 2020-07-20 | 2020-11-24 | 深圳盈天下视觉科技有限公司 | Multi-person interaction system based on virtual space and multi-person interaction method thereof |
KR102283301B1 (en) * | 2020-12-31 | 2021-07-29 | 더에이치알더 주식회사 | Apparatus and Method for Providing real time comunication platform based on XR |
CN114092670A (en) * | 2021-11-12 | 2022-02-25 | 深圳市慧鲤科技有限公司 | Virtual reality display method, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN117111724A (en) | 2023-11-24 |
CN117826976A (en) | 2024-04-05 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
20231211 | TA01 | Transfer of patent application right | Address after: No. 2555 Yinzhou Avenue, Yinzhou District, Ningbo City, Zhejiang Province, 315100; Applicant after: NINGBO LONGTAI MEDICAL TECHNOLOGY Co.,Ltd. Address before: 17/F, Zhaoying commercial building, 151-155 Queen's Road Central, Hong Kong, China; Applicant before: Intuitive Vision Co.,Ltd.