WO2023240999A1 - Virtual reality scene determination method, device and system - Google Patents

Virtual reality scene determination method, device and system

Info

Publication number
WO2023240999A1
WO2023240999A1 · PCT/CN2022/142395 · CN2022142395W
Authority
WO
WIPO (PCT)
Prior art keywords
information
predicted
pose information
rendering
scene
Prior art date
Application number
PCT/CN2022/142395
Other languages
English (en)
French (fr)
Inventor
王康
张佳宁
张道宁
Original Assignee
北京凌宇智控科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京凌宇智控科技有限公司
Publication of WO2023240999A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods

Definitions

  • This application relates to virtual reality technology, and in particular to a virtual reality scene determination method, device and system.
  • Virtual Reality (VR) refers to a virtual environment generated with computer technology at its core, using modern high-tech means; with special input/output devices, users obtain sensations of sight, hearing, touch and so on identical to those of the real world.
  • VR technology is an advanced human-computer interaction technology that comprehensively applies computer graphics, human-computer interface technology, sensor technology and artificial intelligence to create a realistic artificial simulation environment and can effectively simulate the various ways humans perceive a natural environment.
  • Currently, VR is booming, but giving users a good experience often depends on high-performance rendering methods.
  • In the existing rendering method, the client obtains the user's current pose information and sends it to a cloud server; after processing, the cloud server returns a redundant rendering scene corresponding to that pose information to the client, and the client then intercepts part of the rendering scene according to the user's current pose information and displays it to the user.
  • However, so that the client can intercept the corresponding rendering scene from the current pose information, the cloud server in the existing method usually allows a large field-of-view redundancy (that is, the rendering scene returned to the client has a very large field of view), which leaves the client with a large amount of data to process, large data loss, and a low resolution for the finally intercepted rendering scene.
  • The virtual reality scene determination method, device and system provided by this application can reduce the field-of-view redundancy of the rendering scene the cloud server returns to the client, so that the client processes less data, loses less data, and the finally intercepted rendering scene has a high resolution.
  • This application provides a method for determining a virtual reality scene, applied to a client device.
  • The method includes:
  • obtaining the user's first pose information at a first moment and the first moment information of that moment, and sending both to the cloud server;
  • receiving predicted pose information and a predicted rendering scene returned by the cloud server, where the predicted pose information is predicted by the cloud server based on the first pose information, the first moment information and the network status, and the predicted rendering scene is obtained by the cloud server rendering the scene based on the predicted pose information;
  • obtaining the user's second pose information at a second moment;
  • determining the virtual reality scene displayed to the user according to the second pose information, the predicted pose information and the predicted rendering scene (an end-to-end sketch follows this list).
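To make the flow above concrete, here is a minimal client-side sketch in Python. All names in it (`tracker`, `cloud`, `display`, `crop_scene`) are hypothetical stand-ins introduced for illustration; the application itself does not define these interfaces.

```python
# Hypothetical sketch of the client-side flow; tracker, cloud, display and
# crop_scene are illustrative stand-ins, not APIs from this application.
import time

def client_frame(tracker, cloud, display):
    t1 = time.monotonic()                    # first moment information
    pose1 = tracker.get_pose()               # first pose information (6-DoF)
    cloud.send(pose1, t1)                    # upload pose plus timestamp
    predicted_pose, predicted_scene = cloud.receive()  # cloud's prediction
    pose2 = tracker.get_pose()               # second pose information
    frame = crop_scene(predicted_scene, predicted_pose, pose2)  # intercept
    display.show(frame)                      # display the virtual reality scene
```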
  • In an optional embodiment, determining the virtual reality scene to be displayed to the user based on the second pose information, the predicted pose information and the predicted rendering scene includes:
  • obtaining adjustment information for the rendering camera in the client device according to the second pose information and the predicted pose information, where the adjustment information includes a rotation angle and a displacement vector;
  • adjusting the rendering camera according to the obtained adjustment information;
  • intercepting the predicted rendering scene with the adjusted rendering camera to obtain the virtual reality scene.
  • In an optional embodiment, the pose information includes rotation information and position information, and obtaining the adjustment information of the rendering camera in the client device according to the second pose information and the predicted pose information includes:
  • setting the rendering camera's position as the origin of the coordinate system, facing the frame to be rendered;
  • calculating an angle difference from the rotation information in the predicted pose information and the second pose information to obtain the rotation angle of the rendering camera;
  • calculating the displacement vector of the rendering camera from the position information in the predicted pose information and the second pose information.
  • In an optional embodiment, calculating the angle difference from the rotation information in the predicted pose information and the second pose information to obtain the rotation angle of the rendering camera includes:
  • converting the rotation information in the predicted pose information and the second pose information into quaternions;
  • calculating the angle difference from the two resulting sets of quaternions to obtain the rotation angle of the rendering camera.
  • This application also provides a method for determining a virtual reality scene, applied to a cloud server.
  • The method includes: receiving the user's first pose information at a first moment and the first moment information of that moment sent by the client device; calculating predicted pose information based on the first pose information, the first moment information and the network status; rendering the scene based on the predicted pose information to determine a predicted rendering scene; and sending the predicted pose information and the predicted rendering scene to the client device, so that the client device can determine the virtual reality scene displayed to the user from the user's second pose information at a second moment, the predicted pose information and the predicted rendering scene.
  • In an optional embodiment, calculating the predicted pose information based on the first pose information, the first moment information and the network status includes:
  • predicting the user's movement trajectory from the first pose information using a machine learning algorithm (see the sketch after this list);
  • predicting the third moment information of a third moment based on the first moment information and the current network status, where the third moment information is the moment, as predicted by the cloud server, at which the predicted rendering scene is returned to the client device;
  • calculating the predicted pose information from the predicted movement trajectory and the third moment information.
  • In an optional embodiment, the field of view of the predicted rendering scene is larger than the field of view of the client device.
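The application names a machine-learning trajectory predictor without specifying one, so the sketch below substitutes a plain linear least-squares extrapolation over recent pose samples; it is an assumption-laden illustration of "predict trajectory, predict the third moment, evaluate the pose at that moment", not the application's actual algorithm.

```python
# Minimal stand-in for the cloud-side prediction step; the application
# specifies a machine-learning algorithm but not which one, so a linear
# least-squares extrapolation is used here purely as an illustration.
import numpy as np

def predict_pose(history, t1, latency_estimate):
    """history: list of (timestamp, 6-DoF pose vector) samples.
    Returns the pose extrapolated to the third moment t3, and t3 itself."""
    t3 = t1 + latency_estimate               # third moment: when the rendered
                                             # scene should reach the client
    times = np.array([t for t, _ in history])
    poses = np.array([p for _, p in history])        # shape (n, 6)
    slope, intercept = np.polyfit(times, poses, deg=1)  # one line per DoF
    return slope * t3 + intercept, t3
```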
  • This application also provides a virtual reality scene determination device, applied to a client device, including:
  • a first acquisition module configured to acquire the user's first pose information at a first moment and the first moment information of that moment;
  • a sending module configured to send the first pose information and the first moment information to the cloud server;
  • a receiving module configured to receive predicted pose information and a predicted rendering scene returned by the cloud server, where the predicted pose information is predicted by the cloud server based on the first pose information, the first moment information and the network status, and the predicted rendering scene is obtained by the cloud server rendering the scene based on the predicted pose information;
  • a second acquisition module configured to acquire the user's second pose information at a second moment;
  • a determining module configured to determine the virtual reality scene to be displayed to the user based on the second pose information, the predicted pose information and the predicted rendering scene.
  • This application also provides a virtual reality scene determination device, applied to a cloud server, including:
  • a receiving module configured to receive the user's first pose information at a first moment and the first moment information of that moment sent by the client device;
  • a calculation module configured to calculate predicted pose information based on the first pose information, the first moment information and the network status;
  • a determination module configured to render the scene according to the predicted pose information and determine the predicted rendering scene;
  • a sending module configured to send the predicted pose information and the predicted rendering scene to the client device, so that the client device can determine the virtual reality scene displayed to the user from the user's second pose information at a second moment, the predicted pose information and the predicted rendering scene.
  • This application also provides a client device, including a first memory and a first processor, where the first memory stores a computer program and, when the computer program on the first memory is executed by the first processor, any of the above virtual reality scene determination methods with the client device as the execution subject is performed.
  • This application also provides a cloud server, including a second memory and a second processor, where the second memory stores a computer program and, when the computer program on the second memory is executed by the second processor, any of the above virtual reality scene determination methods with the cloud server as the execution subject is performed.
  • This application also provides a virtual reality system, including a client device as described above and a cloud server as described above.
  • With the virtual reality scene determination method, device and system provided by this application, because the predicted rendering scene received by the client device is obtained by the cloud server rendering the scene based on the user's predicted pose information, the field-of-view redundancy of the rendering scene the cloud server returns to the client is reduced, resulting in less data processing on the client, less data loss, and a high resolution for the finally intercepted rendering scene.
  • Figure 1 is a schematic flow chart of a virtual reality scene determination method provided by an embodiment of the present application.
  • Figure 2 is a schematic flow chart of another virtual reality scene determination method provided by an embodiment of the present application.
  • Figure 3 is a schematic structural diagram of a virtual reality scene determination client device provided by an embodiment of the present application.
  • Figure 4 is a schematic structural diagram of a virtual reality scene determination cloud server provided by an embodiment of the present application.
  • This article provides a method for determining a virtual reality scene, applied to a client device; the client device and a cloud server form a virtual reality system. As shown in Figure 1, the method includes:
  • Step 101: Obtain the user's first pose information at the first moment and the first moment information of the first moment;
  • Step 102: Send the first pose information and the first moment information to the cloud server.
  • The moment information may specifically be a timestamp. The client device captures the user's first pose information at the current first moment, adds a timestamp to the obtained first pose information, and sends the timestamped first pose information to the cloud server.
  • The pose information includes position information obtained through a locator of the client device and posture information obtained through a sensor on the client device.
  • The position information includes position information along the three rectangular coordinate axes X, Y and Z.
  • The posture information includes the posture information Pitch, Yaw and Roll about the three rectangular coordinate axes X, Y and Z, where Pitch is the pitch angle of rotation about the X axis, Yaw is the yaw angle of rotation about the Y axis, and Roll is the roll angle of rotation about the Z axis.
  • The position information along the three rectangular coordinate axes X, Y and Z and the posture information Pitch, Yaw and Roll about those axes are collectively referred to as six-degree-of-freedom information; one possible representation is sketched below.
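As a reference point for the six-degree-of-freedom description above, a minimal representation might look as follows; the type and field names are illustrative only, not mandated by the application.

```python
# One possible representation of the six-degree-of-freedom pose described
# above, together with the timestamp ("moment information") the client
# attaches before uploading. Names are illustrative, not from the application.
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    x: float          # position along the X axis
    y: float          # position along the Y axis
    z: float          # position along the Z axis
    pitch: float      # rotation about the X axis (degrees)
    yaw: float        # rotation about the Y axis (degrees)
    roll: float       # rotation about the Z axis (degrees)
    timestamp: float  # first/second moment information, e.g. time.monotonic()
```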
  • The client device may include a head-mounted display and an interactive device.
  • The interactive device may include a handle, a keyboard, or a smart finger cot.
  • Head-mounted displays come in the following types. One is the mobile head-mounted display, i.e. a box plus a mobile phone: the box is just a VR shell, simple in structure and cheap, and a phone must be placed inside it to serve as the screen and computing hardware before it can work. One is the PC head-mounted display, which must be connected to a high-performance computer to operate; with external hardware assisting the computation the user experience is better and it has an independent screen, but the product structure is complex and, tethered by data cables, the user cannot move freely. One is the all-in-one head-mounted display, which relies on the device's built-in hardware for computation and has independent computing, input and output functions, letting the user enjoy the visual impact of three-dimensional (3D) stereoscopy in the virtual world without any external input/output devices. One is the head-mounted display with an external mobile terminal, where a smartphone, tablet or similar is connected externally to the head-mounted display. And one is the head-mounted display with an external processing end, where the processing end sits outside the head-mounted display.
  • Step 103: Receive the predicted pose information and the predicted rendering scene returned by the cloud server.
  • The predicted pose information is predicted by the cloud server based on the first pose information, the first moment information and the network status, and the predicted rendering scene is obtained by the cloud server rendering the scene based on the predicted pose information.
  • The predicted rendering scene received by the client device takes the form of an image-frame code stream. After receiving the image-frame code stream, the client device decodes it.
  • Step 104: Obtain the user's second pose information at the second moment;
  • Step 105: Determine the virtual reality scene to be displayed to the user based on the second pose information, the predicted pose information and the predicted rendering scene.
  • In an exemplary instance, the method further includes the following step: displaying the virtual reality scene.
  • The virtual reality scene determination method introduces prediction of the user's pose information into the generation of the predicted rendering scene, so that the predicted rendering scene is obtained more accurately; this reduces field-of-view (FOV) redundancy, so the client processes less data, loses less data, and the finally intercepted rendering scene has a high resolution.
  • In an exemplary instance, determining the virtual reality scene to be displayed to the user based on the second pose information, the predicted pose information and the predicted rendering scene includes:
  • obtaining adjustment information for the rendering camera in the client device according to the second pose information and the predicted pose information, where the adjustment information includes a rotation angle and a displacement vector;
  • adjusting the rendering camera according to the obtained adjustment information;
  • intercepting the predicted rendering scene with the adjusted rendering camera to obtain the virtual reality scene.
  • The field of view of the rendering camera in the client device is fixed. By adjusting the rendering camera's rotation angle and displacement vector and intercepting the predicted rendering scene, the rendering scene corresponding to the second pose information at the second moment, i.e. the virtual reality scene, is obtained (a projection sketch of this interception follows).
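As a sketch of what this interception can mean geometrically, the snippet below converts the yaw/pitch part of the angle difference into a pixel offset of a fixed-size crop window inside the larger predicted frame. A pinhole-camera projection is assumed here; the application does not specify the projection model.

```python
# Hedged sketch: map the yaw/pitch difference between predicted and actual
# pose to a pixel offset of the crop window inside the over-rendered frame.
# A pinhole-camera projection is assumed; the application does not name one.
import math

def crop_offset(yaw_diff_deg, pitch_diff_deg, frame_width_px, frame_fov_deg):
    # focal length in pixels of the predicted (wide-FOV) frame
    f = (frame_width_px / 2) / math.tan(math.radians(frame_fov_deg) / 2)
    dx = f * math.tan(math.radians(yaw_diff_deg))    # horizontal window shift
    dy = f * math.tan(math.radians(pitch_diff_deg))  # vertical window shift
    return round(dx), round(dy)
```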
  • In an exemplary instance, the pose information includes rotation information and position information, and obtaining the adjustment information of the rendering camera in the client device according to the second pose information and the predicted pose information includes:
  • setting the rendering camera's position as the origin of the coordinate system, facing the frame to be rendered;
  • calculating an angle difference from the rotation information in the predicted pose information and the second pose information to obtain the rotation angle of the rendering camera;
  • calculating the displacement vector of the rendering camera from the position information in the predicted pose information and the second pose information.
  • Calculating the angle difference from the rotation information in the predicted pose information and the second pose information to obtain the rotation angle of the rendering camera includes:
  • converting the rotation information in the predicted pose information and the second pose information into quaternions;
  • calculating the angle difference from the two resulting sets of quaternions to obtain the rotation angle of the rendering camera (see the sketch below).
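A minimal sketch of this quaternion step, assuming the rotation information is the (Pitch, Yaw, Roll) Euler angles defined earlier; SciPy's Rotation class is used purely for illustration, since the application names no library.

```python
# Sketch of the quaternion-based angle difference; assumes the rotation
# information is (pitch, yaw, roll) Euler angles about X, Y, Z in degrees,
# and uses SciPy only for illustration.
import math
from scipy.spatial.transform import Rotation as R

def rotation_angle_deg(predicted_pyr, second_pyr):
    q_pred = R.from_euler("XYZ", predicted_pyr, degrees=True)
    q_second = R.from_euler("XYZ", second_pyr, degrees=True)
    q_diff = q_second * q_pred.inv()         # relative rotation between the two
    return math.degrees(q_diff.magnitude())  # its angle, in degrees
```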
  • This article also provides a method for determining a virtual reality scene, applied to a cloud server; the cloud server and the client device form a virtual reality system. As shown in Figure 2, the method includes:
  • Step 201: Receive the user's first pose information at the first moment and the first moment information of the first moment sent by the client device.
  • Step 202: Calculate predicted pose information based on the first pose information, the first moment information and the network status.
  • Step 203: Render the scene based on the predicted pose information and determine the predicted rendering scene.
  • Step 204: Send the predicted pose information and the predicted rendering scene to the client device, so that the client device can determine the virtual reality scene displayed to the user from the user's second pose information at the second moment, the predicted pose information and the predicted rendering scene.
  • The rendering method provided by this embodiment introduces prediction of the user's pose information into the generation of the predicted rendering scene, so the predicted rendering scene is obtained more accurately; this reduces field-of-view redundancy, so the client processes less data, loses less data, and the finally intercepted rendering scene has a high resolution.
  • In an exemplary instance, calculating the predicted pose information based on the first pose information, the first moment information and the network status includes:
  • first, using a machine learning algorithm to predict the user's movement trajectory from the first pose information;
  • second, predicting the third moment information of the third moment based on the first moment information and the current network status, where the third moment information is the moment at which the cloud server returns the predicted rendering scene to the client device;
  • finally, calculating the predicted pose information from the predicted movement trajectory and the third moment information.
  • In an exemplary instance, the field of view of the predicted rendering scene is larger than the field of view of the client device.
  • Note that the first moment in this article can be understood as the initial moment, and the first pose information as the initial pose information; the second moment can be understood as the moment at which the client device receives the cloud server's predicted rendering scene, and the second pose information as the pose information at that moment; the third moment is the moment, as predicted by the cloud server, at which the predicted rendering scene is returned to the client device, or in other words the cloud server's prediction of when the client device will receive the predicted rendering scene, determined by the cloud server from the current network status and the first moment information.
  • This article also provides a virtual reality scene determination device, applied to a client device. As shown in Figure 3, the virtual reality scene determination device 3 applied to a client device includes:
  • The first acquisition module 31 is configured to acquire the user's first pose information at the first moment and the first moment information of the first moment.
  • The sending module 32 is configured to send the first pose information and the first moment information to the cloud server.
  • The receiving module 33 is configured to receive the predicted pose information and the predicted rendering scene returned by the cloud server, where the predicted pose information is predicted by the cloud server based on the first pose information, the first moment information and the network status, and the predicted rendering scene is obtained by the cloud server rendering the scene based on the predicted pose information.
  • The second acquisition module 34 is configured to acquire the user's second pose information at the second moment.
  • The determining module 35 is configured to determine the virtual reality scene to be displayed to the user based on the second pose information, the predicted pose information and the predicted rendering scene.
  • the determining module 35 is specifically used to:
  • the adjustment information of the rendering camera in the client device is obtained according to the second pose information and the predicted pose information; wherein the adjustment information includes: a rotation angle and a displacement vector.
  • the rendering camera is adjusted according to the obtained adjustment information.
  • the predicted rendering scene is intercepted according to the adjusted rendering camera to obtain the virtual reality scene.
  • The pose information includes rotation information and position information.
  • The determining module 35 is further specifically configured to:
  • set the rendering camera's position as the origin of the coordinate system, facing the frame to be rendered;
  • calculate an angle difference from the rotation information in the predicted pose information and the second pose information to obtain the rotation angle of the rendering camera;
  • calculate the displacement vector of the rendering camera from the position information in the predicted pose information and the second pose information.
  • The determining module 35 is further specifically configured to:
  • convert the rotation information in the predicted pose information and the second pose information into quaternions;
  • calculate the angle difference from the two resulting sets of quaternions to obtain the rotation angle of the rendering camera.
  • The virtual reality scene determination device applied to a client device introduces prediction of the user's pose information into the generation of the predicted rendering scene, so the predicted rendering scene is obtained more accurately; this reduces field-of-view (FOV) redundancy, so the client processes less data, loses less data, and the finally intercepted rendered scene has a high resolution.
  • This article also provides a virtual reality scene determination device, applied to a cloud server. As shown in Figure 4, the virtual reality scene determination device 4 applied to a cloud server includes:
  • The receiving module 41 is configured to receive the user's first pose information at the first moment and the first moment information of the first moment sent by the client device.
  • The calculation module 42 is configured to calculate predicted pose information based on the first pose information, the first moment information and the network status.
  • The determination module 43 is configured to render the scene according to the predicted pose information and determine the predicted rendering scene.
  • The sending module 44 is configured to send the predicted pose information and the predicted rendering scene to the client device, so that the client device can determine the virtual reality scene displayed to the user from the user's second pose information at the second moment, the predicted pose information and the predicted rendering scene.
  • The calculation module 42 is specifically configured to:
  • predict the user's movement trajectory from the first pose information using a machine learning algorithm;
  • predict the third moment information of the third moment based on the first moment information and the current network status, where the third moment information is the moment, as predicted by the cloud server, at which the predicted rendering scene is returned to the client device;
  • calculate the predicted pose information from the predicted movement trajectory and the third moment information.
  • the field of view angle of the predicted rendering scene is greater than the field of view angle of the client device.
  • With the virtual reality scene determination device applied to a cloud server provided by this embodiment, because prediction of the user's pose information is introduced into the generation of the predicted rendering scene, the predicted rendering scene is obtained more accurately; this reduces field-of-view redundancy, so the client processes less data, loses less data, and the finally intercepted rendering scene has a high resolution.
  • This article also provides a client device, including a first memory and a first processor, where the first memory stores a computer program and, when the computer program on the first memory is executed by the first processor, any of the above methods with the client device as the execution subject is performed.
  • This article also provides a cloud server, including a second memory and a second processor, where the second memory stores a computer program and, when the computer program on the second memory is executed by the second processor, any of the above methods with the cloud server as the execution subject is performed.
  • This article also provides a virtual reality system, including the above-mentioned client device and the above-mentioned cloud server.
  • The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer.
  • Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A virtual reality scene determination method, device and system. The method is applied to a client device; the client device and a cloud server form a virtual reality system. The method includes: obtaining the user's first pose information at a first moment and the first moment information of the first moment; sending the first pose information and the first moment information to the cloud server; receiving predicted pose information and a predicted rendering scene returned by the cloud server; obtaining the user's second pose information at a second moment; and determining, according to the second pose information, the predicted pose information and the predicted rendering scene, the virtual reality scene displayed to the user. Because the predicted rendering scene received by the client device is obtained by the cloud server rendering the scene according to the user's predicted pose information, field-of-view redundancy is reduced, so the client processes less data, loses less data, and the finally intercepted rendering scene has a high resolution.

Description

Virtual reality scene determination method, device and system
This application claims priority to the Chinese invention patent application filed on June 14, 2022, with application number 202210674778.5 and entitled "虚拟现实场景确定方法、装置及系统" ("Virtual reality scene determination method, device and system"), the entire content of which is incorporated herein by reference.
Technical Field
This article relates to virtual reality technology, and in particular to a virtual reality scene determination method, device and system.
Background
Virtual Reality (VR) refers to a virtual environment generated with computer technology at its core, using modern high-tech means; with special input/output devices, users obtain sensations of sight, hearing, touch and so on identical to those of the real world. VR technology is an advanced human-computer interaction technology that comprehensively applies computer graphics, human-computer interface technology, sensor technology and artificial intelligence to create a realistic artificial simulation environment and can effectively simulate the various ways humans perceive a natural environment. Currently, VR is booming, but giving users a good experience often depends on high-performance rendering methods.
In the existing rendering method, the client obtains the user's current pose information and sends it to a cloud server; after processing, the cloud server returns a redundant rendering scene corresponding to that pose information to the client, and the client then intercepts part of the rendering scene according to the user's current pose information for display to the user.
However, so that the client can intercept the corresponding rendering scene from the current pose information, the cloud server in the existing method usually allows a large field-of-view redundancy (that is, the rendering scene returned to the client has a very large field of view), which leaves the client with a large amount of data to process, large data loss, and a low resolution for the finally intercepted rendering scene.
Summary
The virtual reality scene determination method, device and system provided by this application can reduce the field-of-view redundancy of the rendering scene the cloud server returns to the client, so that the client processes less data, loses less data, and the finally intercepted rendering scene has a high resolution.
This application provides a virtual reality scene determination method, applied to a client device, the method including:
obtaining the user's first pose information at a first moment and the first moment information of the first moment;
sending the first pose information and the first moment information to a cloud server;
receiving predicted pose information and a predicted rendering scene returned by the cloud server, where the predicted pose information is predicted by the cloud server based on the first pose information, the first moment information and the network status, and the predicted rendering scene is obtained by the cloud server rendering the scene based on the predicted pose information;
obtaining the user's second pose information at a second moment;
determining, according to the second pose information, the predicted pose information and the predicted rendering scene, the virtual reality scene displayed to the user.
In an optional embodiment, determining the virtual reality scene displayed to the user according to the second pose information, the predicted pose information and the predicted rendering scene includes:
obtaining adjustment information for the rendering camera in the client device according to the second pose information and the predicted pose information, where the adjustment information includes a rotation angle and a displacement vector;
adjusting the rendering camera according to the obtained adjustment information;
intercepting the predicted rendering scene with the adjusted rendering camera to obtain the virtual reality scene.
In an optional embodiment, the pose information includes rotation information and position information, and obtaining the adjustment information of the rendering camera in the client device according to the second pose information and the predicted pose information includes:
setting the rendering camera's position as the origin of the coordinate system, facing the frame to be rendered;
calculating an angle difference from the rotation information in the predicted pose information and the second pose information to obtain the rotation angle of the rendering camera;
calculating the displacement vector of the rendering camera from the position information in the predicted pose information and the second pose information.
In an optional embodiment, calculating the angle difference from the rotation information in the predicted pose information and the second pose information to obtain the rotation angle of the rendering camera includes:
converting the rotation information in the predicted pose information and the second pose information into quaternions;
calculating the angle difference from the two resulting sets of quaternions to obtain the rotation angle of the rendering camera.
This application also provides a virtual reality scene determination method, applied to a cloud server, the method including:
receiving the user's first pose information at a first moment and the first moment information of the first moment sent by a client device;
calculating predicted pose information based on the first pose information, the first moment information and the network status;
rendering the scene based on the predicted pose information and determining a predicted rendering scene;
sending the predicted pose information and the predicted rendering scene to the client device, so that the client device can determine the virtual reality scene displayed to the user according to the user's second pose information at a second moment, the predicted pose information and the predicted rendering scene.
In an optional embodiment, calculating the predicted pose information based on the first pose information, the first moment information and the network status includes:
predicting the user's movement trajectory from the first pose information using a machine learning algorithm;
predicting the third moment information of a third moment based on the first moment information and the current network status, where the third moment information is the moment, as predicted by the cloud server, at which the predicted rendering scene is returned to the client device;
calculating the predicted pose information from the predicted movement trajectory and the third moment information.
In an optional embodiment, the field of view of the predicted rendering scene is larger than the field of view of the client device.
This application also provides a virtual reality scene determination device, applied to a client device, including:
a first acquisition module configured to acquire the user's first pose information at a first moment and the first moment information of the first moment;
a sending module configured to send the first pose information and the first moment information to a cloud server;
a receiving module configured to receive predicted pose information and a predicted rendering scene returned by the cloud server, where the predicted pose information is predicted by the cloud server based on the first pose information, the first moment information and the network status, and the predicted rendering scene is obtained by the cloud server rendering the scene based on the predicted pose information;
a second acquisition module configured to acquire the user's second pose information at a second moment;
a determining module configured to determine, according to the second pose information, the predicted pose information and the predicted rendering scene, the virtual reality scene displayed to the user.
This application also provides a virtual reality scene determination device, applied to a cloud server, including:
a receiving module configured to receive the user's first pose information at a first moment and the first moment information of the first moment sent by a client device;
a calculation module configured to calculate predicted pose information based on the first pose information, the first moment information and the network status;
a determination module configured to render the scene according to the predicted pose information and determine the predicted rendering scene;
a sending module configured to send the predicted pose information and the predicted rendering scene to the client device, so that the client device can determine the virtual reality scene displayed to the user according to the user's second pose information at a second moment, the predicted pose information and the predicted rendering scene.
This application also provides a client device, including a first memory and a first processor, where the first memory stores a computer program and, when the computer program on the first memory is executed by the first processor, any of the above virtual reality scene determination methods with the client device side as the execution subject is performed.
This application also provides a cloud server, including a second memory and a second processor, where the second memory stores a computer program and, when the computer program on the second memory is executed by the second processor, any of the above virtual reality scene determination methods with the cloud server side as the execution subject is performed.
This application also provides a virtual reality system, including the client device as described above and the cloud server as described above.
Compared with the related art, in the virtual reality scene determination method, device and system provided by this application, because the predicted rendering scene received by the client device is obtained by the cloud server rendering the scene according to the user's predicted pose information, the field-of-view redundancy of the rendering scene the cloud server returns to the client is reduced, so the client processes less data, loses less data, and the finally intercepted rendering scene has a high resolution.
Other features and advantages of this application will be set forth in the following description and, in part, become apparent from the description or be understood by practicing this application. Other advantages of this application can be realized and obtained through the solutions described in the description and the drawings.
Brief Description of the Drawings
The drawings are provided for an understanding of the technical solution of this application and form part of the description; together with the embodiments of this application they serve to explain the technical solution of this application and do not limit it.
Figure 1 is a schematic flow chart of a virtual reality scene determination method provided by an embodiment of this application;
Figure 2 is a schematic flow chart of another virtual reality scene determination method provided by an embodiment of this application;
Figure 3 is a schematic structural diagram of a virtual reality scene determination client device provided by an embodiment of this application;
Figure 4 is a schematic structural diagram of a virtual reality scene determination cloud server provided by an embodiment of this application.
Detailed Description
To give those of ordinary skill in the art a better understanding of the technical solution of this disclosure, the technical solution in the embodiments of this disclosure is described clearly and completely below with reference to the drawings.
Note that the terms "first", "second" and the like in the description and claims of this disclosure and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of this disclosure described here can be implemented in orders other than those illustrated or described here.
This article provides a virtual reality scene determination method, applied to a client device; the client device and a cloud server form a virtual reality system. As shown in Figure 1, the method includes:
Step 101: Obtain the user's first pose information at the current first moment and the first moment information of the first moment;
Step 102: Send the first pose information and the first moment information to the cloud server.
In an exemplary instance, the moment information may specifically be a timestamp; the client device captures the user's first pose information at the current first moment, adds a timestamp to the obtained first pose information, and sends the timestamped first pose information to the cloud server.
In an exemplary instance, the pose information includes position information obtained through a locator of the client device and posture information obtained through a sensor on the client device. The position information includes position information along the three rectangular coordinate axes X, Y and Z; the posture information includes the posture information Pitch, Yaw and Roll about the three rectangular coordinate axes X, Y and Z, where Pitch is the pitch angle of rotation about the X axis, Yaw is the yaw angle of rotation about the Y axis, and Roll is the roll angle of rotation about the Z axis. The position information along the X, Y and Z axes and the posture information Pitch, Yaw and Roll about those axes are usually collectively referred to as six-degree-of-freedom information.
In an exemplary instance, the client device may include a head-mounted display and an interactive device, and the interactive device may include a handle, a keyboard, or a smart finger cot. Head-mounted displays include the following types. One is the mobile head-mounted display, i.e. a box plus a mobile phone: the box is just a VR shell, simple in structure and cheap, and a phone must be placed inside it to serve as the screen and computing hardware before it can work. One is the PC head-mounted display, which must be connected to a high-performance computer to operate; with external hardware assisting the computation the user experience is better and it has an independent screen, but the product structure is complex and, tethered by data cables, the user cannot move freely. One is the all-in-one head-mounted display, which relies on the device's built-in hardware for computation and has independent computing, input and output functions, letting the user enjoy the visual impact of three-dimensional (3D) stereoscopy in the virtual world without any external input/output devices. One is the head-mounted display with an external mobile terminal, where a smartphone, tablet or similar is connected externally to the head-mounted display. And one is the head-mounted display with an external processing end, where the processing end sits outside the head-mounted display.
Step 103: Receive the predicted pose information and the predicted rendering scene returned by the cloud server.
The predicted pose information is predicted by the cloud server based on the first pose information, the first moment information and the network status, and the predicted rendering scene is obtained by the cloud server rendering the scene based on the predicted pose information.
In an exemplary instance, the predicted rendering scene received by the client device takes the form of an image-frame code stream. After receiving the image-frame code stream, the client device decodes it.
Step 104: Obtain the user's second pose information at the second moment;
Step 105: Determine, according to the second pose information, the predicted pose information and the predicted rendering scene, the virtual reality scene displayed to the user.
In an exemplary instance, the rendering method further includes the following step: displaying the virtual reality scene.
With the virtual reality scene determination method provided by this embodiment of this application, because prediction of the user's pose information is introduced into the generation of the predicted rendering scene, the predicted rendering scene is obtained more accurately; this reduces field-of-view (FOV) redundancy, so the client processes less data, loses less data, and the finally intercepted rendering scene has a high resolution.
In an exemplary instance, determining the virtual reality scene displayed to the user according to the second pose information, the predicted pose information and the predicted rendering scene includes:
First, obtaining adjustment information for the rendering camera in the client device according to the second pose information and the predicted pose information, where the adjustment information includes a rotation angle and a displacement vector.
Second, adjusting the rendering camera according to the obtained adjustment information.
Finally, intercepting the predicted rendering scene with the adjusted rendering camera to obtain the virtual reality scene.
In an exemplary instance, the field of view of the rendering camera in the client device is fixed; by adjusting the rendering camera's rotation angle and displacement vector and intercepting the predicted rendering scene, the rendering scene corresponding to the second pose information at the second moment, i.e. the virtual reality scene, can be obtained.
In an exemplary instance, the pose information includes rotation information and position information, and obtaining the adjustment information of the rendering camera in the client device according to the second pose information and the predicted pose information includes:
First, setting the rendering camera's position as the origin of the coordinate system, facing the frame to be rendered.
Second, calculating an angle difference from the rotation information in the predicted pose information and the second pose information to obtain the rotation angle of the rendering camera.
Finally, calculating the displacement vector of the rendering camera from the position information in the predicted pose information and the second pose information.
In an exemplary instance, calculating the angle difference from the rotation information in the predicted pose information and the second pose information to obtain the rotation angle of the rendering camera includes:
First, converting the rotation information in the predicted pose information and the second pose information into quaternions.
Second, calculating the angle difference from the two resulting sets of quaternions to obtain the rotation angle of the rendering camera.
This article also provides a virtual reality scene determination method, applied to a cloud server; the cloud server and a client device form a virtual reality system. As shown in Figure 2, the method includes:
Step 201: Receive the user's first pose information at the first moment and the first moment information of the first moment sent by the client device.
Step 202: Calculate predicted pose information based on the first pose information, the first moment information and the network status.
Step 203: Render the scene based on the predicted pose information and determine the predicted rendering scene;
Step 204: Send the predicted pose information and the predicted rendering scene to the client device, so that the client device can determine the virtual reality scene displayed to the user according to the user's second pose information at the second moment, the predicted pose information and the predicted rendering scene.
With the rendering method provided by this embodiment of this application, because prediction of the user's pose information is introduced into the generation of the predicted rendering scene, the predicted rendering scene is obtained more accurately; this reduces field-of-view redundancy, so the client processes less data, loses less data, and the finally intercepted rendering scene has a high resolution.
In an exemplary instance, calculating the predicted pose information based on the first pose information, the first moment information and the network status includes:
First, using a machine learning algorithm to predict the user's movement trajectory from the first pose information.
Second, predicting the third moment information of the third moment based on the first moment information and the current network status, where the third moment information is the moment, as predicted by the cloud server, at which the predicted rendering scene is returned to the client device.
Finally, calculating the predicted pose information from the predicted movement trajectory and the third moment information.
In an exemplary instance, the field of view of the predicted rendering scene is larger than the field of view of the client.
Note that the first moment in this article can be understood as the initial moment, and the first pose information as the initial pose information; the second moment can be understood as the moment at which the client device receives the cloud server's predicted rendering scene, and the second pose information as the pose information at that moment; the third moment is the moment, as predicted by the cloud server, at which the predicted rendering scene is returned to the client device, or in other words the cloud server's prediction of when the client device will receive the predicted rendering scene, determined by the cloud server from the current network status and the first moment information. A worked numeric illustration follows.
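The following timeline uses invented numbers purely to illustrate how the three moments relate; none of the values come from the application itself.

```python
# Illustrative numbers only (not from the application): how the three
# moments relate under the scheme described above.
t1 = 0.000           # first moment: client samples the pose and uploads it
latency = 0.040      # cloud's current estimate of network + render delay, 40 ms
t3 = t1 + latency    # third moment predicted by the cloud: the scene should
                     # reach the client at ~40 ms, so the pose is predicted
                     # for t3 and the scene is rendered from that pose
# The second moment is when the client actually receives the scene; if the
# estimate holds, the second moment is close to t3, and the residual
# correction made by rotating/translating the render camera stays small.
```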
This article also provides a virtual reality scene determination device, applied to a client device. As shown in Figure 3, the virtual reality scene determination device 3 applied to a client device includes:
a first acquisition module 31 configured to acquire the user's first pose information at the first moment and the first moment information of the first moment;
a sending module 32 configured to send the first pose information and the first moment information to the cloud server;
a receiving module 33 configured to receive the predicted pose information and the predicted rendering scene returned by the cloud server, where the predicted pose information is predicted by the cloud server based on the first pose information, the first moment information and the network status, and the predicted rendering scene is obtained by the cloud server rendering the scene based on the predicted pose information;
a second acquisition module 34 configured to acquire the user's second pose information at the second moment;
a determining module 35 configured to determine, according to the second pose information, the predicted pose information and the predicted rendering scene, the virtual reality scene displayed to the user.
In an exemplary instance, the determining module 35 is specifically configured to:
obtain adjustment information for the rendering camera in the client device according to the second pose information and the predicted pose information, where the adjustment information includes a rotation angle and a displacement vector;
adjust the rendering camera according to the obtained adjustment information;
intercept the predicted rendering scene with the adjusted rendering camera to obtain the virtual reality scene.
In an exemplary instance, the pose information includes rotation information and position information.
In an exemplary instance, the determining module 35 is further specifically configured to:
set the rendering camera's position as the origin of the coordinate system, facing the frame to be rendered;
calculate an angle difference from the rotation information in the predicted pose information and the second pose information to obtain the rotation angle of the rendering camera;
calculate the displacement vector of the rendering camera from the position information in the predicted pose information and the second pose information.
In an exemplary instance, the determining module 35 is further specifically configured to:
convert the rotation information in the predicted pose information and the second pose information into quaternions;
calculate the angle difference from the two resulting sets of quaternions to obtain the rotation angle of the rendering camera.
With the virtual reality scene determination device applied to a client device provided by this embodiment of this application, because prediction of the user's pose information is introduced into the generation of the predicted rendering scene, the predicted rendering scene is obtained more accurately; this reduces field-of-view (FOV) redundancy, so the client processes less data, loses less data, and the finally intercepted rendering scene has a high resolution.
This article also provides a virtual reality scene determination device, applied to a cloud server. As shown in Figure 4, the virtual reality scene determination device 4 applied to a cloud server includes:
a receiving module 41 configured to receive the user's first pose information at the first moment and the first moment information of the first moment sent by the client device;
a calculation module 42 configured to calculate predicted pose information based on the first pose information, the first moment information and the network status;
a determination module 43 configured to render the scene according to the predicted pose information and determine the predicted rendering scene;
a sending module 44 configured to send the predicted pose information and the predicted rendering scene to the client device, so that the client device can determine the virtual reality scene displayed to the user according to the user's second pose information at the second moment, the predicted pose information and the predicted rendering scene.
In an exemplary instance, the calculation module 42 is specifically configured to:
predict the user's movement trajectory from the first pose information using a machine learning algorithm;
predict the third moment information of the third moment based on the first moment information and the current network status, where the third moment information is the moment, as predicted by the cloud server, at which the predicted rendering scene is returned to the client device;
calculate the predicted pose information from the predicted movement trajectory and the third moment information.
In an exemplary instance, the field of view of the predicted rendering scene is larger than the field of view of the client device.
With the virtual reality scene determination device applied to a cloud server provided by this embodiment of this application, because prediction of the user's pose information is introduced into the generation of the predicted rendering scene, the predicted rendering scene is obtained more accurately; this reduces field-of-view redundancy, so the client processes less data, loses less data, and the finally intercepted rendering scene has a high resolution.
This article also provides a client device, including a first memory and a first processor, where the first memory stores a computer program and, when the computer program on the first memory is executed by the first processor, any of the above rendering methods with the client device as the execution subject is performed.
This article also provides a cloud server, including a second memory and a second processor, where the second memory stores a computer program and, when the computer program on the second memory is executed by the second processor, any of the above rendering methods with the cloud server as the execution subject is performed.
This article also provides a virtual reality system, including the client device described above and the cloud server described above.
This application describes multiple embodiments, but the description is exemplary rather than limiting, and it is obvious to those of ordinary skill in the art that more embodiments and implementations are possible within the scope covered by the embodiments described in this application. Although many possible feature combinations are shown in the drawings and discussed in the detailed description, many other combinations of the disclosed features are possible. Unless specifically limited, any feature or element of any embodiment may be used in combination with, or in place of, any other feature or element of any other embodiment.
This application includes and contemplates combinations with features and elements known to those of ordinary skill in the art. The embodiments, features and elements already disclosed in this application may also be combined with any conventional feature or element to form a unique inventive solution defined by the claims. Any feature or element of any embodiment may also be combined with features or elements from other inventive solutions to form another unique inventive solution defined by the claims. Therefore, it should be understood that any feature shown and/or discussed in this application may be implemented alone or in any suitable combination. The embodiments are therefore not limited other than by the appended claims and their equivalents. Furthermore, various modifications and changes may be made within the protection scope of the appended claims.
Furthermore, in describing representative embodiments, the description may have presented a method and/or process as a particular sequence of steps. However, to the extent that the method or process does not depend on the particular order of the steps described here, the method or process should not be limited to the steps in that particular order. As those of ordinary skill in the art will understand, other step orders are also possible; the particular order of steps set forth in the description should therefore not be construed as limiting the claims. Moreover, claims directed to the method and/or process should not be limited to performing their steps in the order written; those skilled in the art can readily understand that these orders can vary while remaining within the spirit and scope of the embodiments of this application.
Those of ordinary skill in the art can understand that all or some of the steps in the methods disclosed above, and the functional modules/units in the systems and devices, can be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Claims (12)

  1. A virtual reality scene determination method, applied to a client device, wherein the method comprises:
    obtaining the user's first pose information at a first moment and the first moment information of the first moment;
    sending the first pose information and the first moment information to a cloud server;
    receiving predicted pose information and a predicted rendering scene returned by the cloud server; wherein the predicted pose information is predicted by the cloud server based on the first pose information, the first moment information and the network status, and the predicted rendering scene is obtained by the cloud server rendering the scene based on the predicted pose information;
    obtaining the user's second pose information at a second moment;
    determining, according to the second pose information, the predicted pose information and the predicted rendering scene, the virtual reality scene displayed to the user.
  2. The method according to claim 1, wherein determining the virtual reality scene displayed to the user according to the second pose information, the predicted pose information and the predicted rendering scene comprises:
    obtaining adjustment information for the rendering camera in the client device according to the second pose information and the predicted pose information; wherein the adjustment information comprises a rotation angle and a displacement vector;
    adjusting the rendering camera according to the obtained adjustment information;
    intercepting the predicted rendering scene with the adjusted rendering camera to obtain the virtual reality scene.
  3. The method according to claim 2, wherein the pose information comprises rotation information and position information, and obtaining the adjustment information of the rendering camera in the client device according to the second pose information and the predicted pose information comprises:
    setting the rendering camera's position as the origin of the coordinate system, facing the frame to be rendered;
    calculating an angle difference from the rotation information in the predicted pose information and the second pose information to obtain the rotation angle of the rendering camera;
    calculating the displacement vector of the rendering camera from the position information in the predicted pose information and the second pose information.
  4. The method according to claim 3, wherein calculating the angle difference from the rotation information in the predicted pose information and the second pose information to obtain the rotation angle of the rendering camera comprises:
    converting the rotation information in the predicted pose information and the second pose information into quaternions;
    calculating the angle difference from the two resulting sets of quaternions to obtain the rotation angle of the rendering camera.
  5. A virtual reality scene determination method, applied to a cloud server, wherein the method comprises:
    receiving the user's first pose information at a first moment and the first moment information of the first moment sent by a client device;
    calculating predicted pose information based on the first pose information, the first moment information and the network status;
    rendering the scene based on the predicted pose information and determining a predicted rendering scene;
    sending the predicted pose information and the predicted rendering scene to the client device, so that the client device can determine the virtual reality scene displayed to the user according to the user's second pose information at a second moment, the predicted pose information and the predicted rendering scene.
  6. The method according to claim 5, wherein calculating the predicted pose information based on the first pose information, the first moment information and the network status comprises:
    predicting the user's movement trajectory from the first pose information using a machine learning algorithm;
    predicting the third moment information of a third moment based on the first moment information and the current network status, the third moment information being the moment, as predicted by the cloud server, at which the predicted rendering scene is returned to the client device;
    calculating the predicted pose information from the predicted movement trajectory and the third moment information.
  7. The method according to claim 5, wherein the field of view of the predicted rendering scene is larger than the field of view of the client device.
  8. A virtual reality scene determination device, applied to a client device, comprising:
    a first acquisition module configured to acquire the user's first pose information at a first moment and the first moment information of the first moment;
    a sending module configured to send the first pose information and the first moment information to a cloud server;
    a receiving module configured to receive predicted pose information and a predicted rendering scene returned by the cloud server; wherein the predicted pose information is predicted by the cloud server based on the first pose information, the first moment information and the network status, and the predicted rendering scene is obtained by the cloud server rendering the scene based on the predicted pose information;
    a second acquisition module configured to acquire the user's second pose information at a second moment;
    a determining module configured to determine, according to the second pose information, the predicted pose information and the predicted rendering scene, the virtual reality scene displayed to the user.
  9. A virtual reality scene determination device, applied to a cloud server, comprising:
    a receiving module configured to receive the user's first pose information at a first moment and the first moment information of the first moment sent by a client device;
    a calculation module configured to calculate predicted pose information based on the first pose information, the first moment information and the network status;
    a determination module configured to render the scene according to the predicted pose information and determine the predicted rendering scene;
    a sending module configured to send the predicted pose information and the predicted rendering scene to the client device, so that the client device can determine the virtual reality scene displayed to the user according to the user's second pose information at a second moment, the predicted pose information and the predicted rendering scene.
  10. A client device, comprising a first memory and a first processor, wherein the first memory stores a computer program, and when the computer program on the first memory is executed by the first processor, the method according to any one of claims 1-4 is performed.
  11. A cloud server, comprising a second memory and a second processor, wherein the second memory stores a computer program, and when the computer program on the second memory is executed by the second processor, the method according to any one of claims 5-7 is performed.
  12. A virtual reality system, comprising the client device according to claim 10 and the cloud server according to claim 11.
PCT/CN2022/142395 2022-06-14 2022-12-27 Virtual reality scene determination method, device and system WO2023240999A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210674778.5 2022-06-14
CN202210674778.5A CN115131528A (zh) Virtual reality scene determination method, device and system

Publications (1)

Publication Number Publication Date
WO2023240999A1 (zh)

Family

ID=83377560

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/142395 WO2023240999A1 (zh) Virtual reality scene determination method, device and system

Country Status (2)

Country Link
CN (1) CN115131528A (zh)
WO (1) WO2023240999A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115131528A (zh) * 2022-06-14 2022-09-30 北京凌宇智控科技有限公司 虚拟现实场景确定方法、装置及系统


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108351691A (zh) * 2015-10-26 2018-07-31 Remote rendering for virtual images
US20180101930A1 (en) * 2016-01-07 2018-04-12 Microsoft Technology Licensing, Llc View rendering from multiple server-side renderings
CN113936119A (zh) * 2020-06-28 2022-01-14 Data rendering method, system and device
CN115131528A (zh) * 2022-06-14 2022-09-30 Virtual reality scene determination method, device and system

Also Published As

Publication number Publication date
CN115131528A (zh) 2022-09-30

Similar Documents

Publication Publication Date Title
US11270460B2 (en) Method and apparatus for determining pose of image capturing device, and storage medium
CN111294665B (zh) Video generation method and apparatus, electronic device, and readable storage medium
CN111738220A (zh) Three-dimensional human body pose estimation method, apparatus, device and medium
US20130063560A1 (en) Combined stereo camera and stereo display interaction
CN104536579A (zh) Interactive three-dimensional real-scene and digital image high-speed fusion processing system and processing method
EP4156067A1 (en) Virtual clothing changing method and apparatus, and device and medium
CN110568923A (zh) Unity3D-based virtual reality interaction method, apparatus, device and storage medium
CN109144252B (zh) Object determination method, apparatus, device and storage medium
US11449196B2 (en) Menu processing method, device and storage medium in virtual scene
CN112783700A (zh) Computer-readable medium for a network-based remote assistance system
CN115690382A (zh) Training method for a deep learning model, and method and apparatus for generating a panorama
WO2023240999A1 (zh) Virtual reality scene determination method, device and system
JP2022550644A (ja) Passthrough visualization
CN111488056A (zh) Manipulating virtual objects using tracked physical objects
Park et al. New design and comparative analysis of smartwatch metaphor-based hand gestures for 3D navigation in mobile virtual reality
CN107463261A (zh) Stereoscopic interaction system and method
WO2024131479A1 (zh) Display method and apparatus for a virtual environment, wearable electronic device, and storage medium
Orlosky et al. Effects of throughput delay on perception of robot teleoperation and head control precision in remote monitoring tasks
WO2024131405A1 (zh) Object movement control method, apparatus, device and medium
US20230405475A1 (en) Shooting method, apparatus, device and medium based on virtual reality space
WO2023277043A1 (ja) Information processing device
CN115272564B (zh) Action video transmission method, apparatus, device and medium
CN118135090A (zh) Mesh alignment method and apparatus, and electronic device
CN118343924A (zh) Virtual object motion processing method, apparatus, device and medium
CN117765207A (zh) Virtual interface display method, apparatus, device and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22946663

Country of ref document: EP

Kind code of ref document: A1