WO2023160694A1 - Virtualization method and apparatus for an input device, device, and storage medium - Google Patents

Virtualization method and apparatus for an input device, device, and storage medium

Info

Publication number
WO2023160694A1
WO2023160694A1 (PCT/CN2023/078387)
Authority
WO
WIPO (PCT)
Prior art keywords
input device
virtual reality
data
dimensional
inertial sensor
Prior art date
Application number
PCT/CN2023/078387
Other languages
English (en)
French (fr)
Inventor
罗子雄
Original Assignee
北京所思信息科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京所思信息科技有限责任公司
Publication of WO2023160694A1 publication Critical patent/WO2023160694A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • the present disclosure relates to the field of data technology, and in particular to a virtualization method and apparatus for an input device, an electronic device, and a storage medium.
  • the technical problem to be solved in the present disclosure is to solve the existing problem that the model of the physical input device cannot be completely displayed in the virtual scene.
  • an embodiment of the present disclosure provides a virtual method for an input device, including:
  • the target information of the 3D model in the virtual reality system is updated.
  • the three-dimensional model is mapped to a virtual reality scene corresponding to the virtual reality system.
  • a virtual device for an input device including:
  • a first acquisition unit configured to acquire data of the input device;
  • a determining unit configured to determine the target information of the three-dimensional model corresponding to the input device in the virtual reality system based on the data of the input device;
  • a second acquisition unit configured to acquire the data of the inertial sensor;
  • the update unit is used to update the target information of the three-dimensional model in the virtual reality system according to the three-dimensional data of the inertial sensor.
  • the mapping unit is configured to map the 3D model to a virtual reality scene corresponding to the virtual reality system based on the updated target information.
  • an electronic device including:
  • the computer program is stored in the memory and is configured to be executed by the processor to implement the virtualization method for an input device described above.
  • a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the virtualization method for an input device described above are implemented.
  • a computer program product including a computer program or instructions; when the computer program or instructions are executed by a processor, the virtualization method for an input device described above is implemented.
  • the virtualization method for an input device acquires the data of the input device, then determines, based on that data, the target information of the 3D model corresponding to the input device in the virtual reality system; at the same time, it acquires in real time the 3D data detected by the inertial sensor installed on the input device.
  • then, according to the 3D data detected by the inertial sensor, the target information of the 3D model in the virtual reality system is updated, and the 3D model is displayed at the updated target information in the virtual reality scene.
  • the present disclosure provides a method for virtualizing an input device, which can accurately map an input device in real space into the virtual reality scene, making it convenient for users to subsequently interact efficiently with the input device according to the three-dimensional model in the virtual reality scene.
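The four claimed steps (acquire device data, determine initial target information, update it from inertial-sensor data, map the model into the scene) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the names `TargetInfo`, `virtualize`, and the `dpos`/`datt` sample fields are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TargetInfo:
    position: tuple  # (x, y, z) in the target space
    attitude: tuple  # (roll, pitch, yaw), radians

def determine_target_info(device_data):
    # Steps 1-2: initial target info from the device's input signal and/or image
    return TargetInfo(position=device_data["initial_position"],
                      attitude=device_data["initial_attitude"])

def update_target_info(target, imu_sample):
    # Step 3: apply position/attitude deltas derived from the IMU's 3D data
    pos = tuple(p + d for p, d in zip(target.position, imu_sample["dpos"]))
    att = tuple(a + d for a, d in zip(target.attitude, imu_sample["datt"]))
    return TargetInfo(pos, att)

def virtualize(device_data, imu_samples, render):
    target = determine_target_info(device_data)
    for sample in imu_samples:
        target = update_target_info(target, sample)
        render(target)  # step 4: map the 3D model into the VR scene
    return target
```

Here `render` stands in for whatever the virtual reality software system uses to display the model at the updated target information.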
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of a method for virtualizing an input device provided by an embodiment of the present disclosure
  • Fig. 3a is a schematic diagram of another application scenario provided by an embodiment of the present disclosure.
  • Fig. 3b is a schematic diagram of a virtual reality scene provided by an embodiment of the present disclosure.
  • FIG. 3c is a schematic diagram of another application scenario provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic flowchart of a method for virtualizing an input device provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of a virtual device of an input device provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure.
  • FIG. 1 includes a head-mounted display 110.
  • the head-mounted display 110 may be an all-in-one device, meaning that the virtual reality software system is configured on the head-mounted display 110; alternatively, the head-mounted display 110 can be connected to a server on which the virtual reality software system is configured.
  • the following embodiments take the configuration of the virtual reality software system on the head-mounted display as an example, and describe the virtual method of the input device provided by the present disclosure in detail.
  • the head-mounted display device is connected to the input device, and the input device may specifically be a mouse or a keyboard.
  • the embodiment of the present disclosure provides a virtual method for input devices.
  • the present disclosure calculates the posture information and position information of the physical input device by acquiring three-dimensional data, composed of magnetometer, gyroscope and acceleration readings, from an inertial sensor fixed inside or outside the physical input device.
  • the 3D model corresponding to the physical input device is thereby displayed in the virtual scene, so that the user can use the physical input device through the 3D model to perform efficient input operations.
  • the virtualization method for an input device provided by this disclosure is not affected by occlusion and can effectively solve the problem in existing methods of the camera image or the detection sensor being occluded; it works normally even when the physical input device is completely covered.
  • the method for virtualizing an input device will be described in detail through one or more of the following embodiments.
  • FIG. 2 is a schematic flowchart of a method for virtualizing an input device provided by an embodiment of the present disclosure, applied to a virtual reality system; it specifically includes the following steps S210 to S240 as shown in FIG. 2:
  • the virtual reality software system can be configured in the head-mounted display; it processes the received input signal or data transmitted by the input device and returns the processing result to the display screen in the head-mounted display, which then changes the display state of the input device in the virtual reality scene in real time according to the processing result.
  • FIG. 3 a is a schematic diagram of another application scenario provided by an embodiment of the present disclosure.
  • FIG. 3 a includes a mouse 310 , a head-mounted display 320 , and a user's hand 330 .
  • the mouse 310 includes a left button 311, a scroll wheel 312, a right button 313 and an inertial sensor 314.
  • the inertial sensor 314 is the black box on the mouse 310 in FIG. 3a; the inertial sensor 314 may be arranged on the surface of the mouse 310.
  • the user wears the head-mounted display 320 on the head, the hand 330 operates the mouse 310, and the mouse 310 is connected to the head-mounted display 320.
  • 340 in FIG. 3b is a scene constructed inside the head-mounted display 320 in FIG. 3a, which may be called the virtual reality scene 340.
  • the user understands and manipulates the mouse 310 by watching the mouse model 350 corresponding to it displayed in the virtual reality scene 340, so the user can see that, in the virtual reality scene 340, the three-dimensional model 360 corresponding to the user's hand 330 operates the mouse model 350 corresponding to the mouse 310.
  • the operation interface 370 is an interface operated with the mouse, similar to the display screen of a terminal; the operation of the mouse model 350 by the hand model 360 in the virtual reality scene 340 and the actual operation of the mouse 310 by the user's hand 330 can be synchronized to a certain extent, which is equivalent to the user directly seeing the components of the mouse and performing subsequent operations, improving the user experience and the interaction speed. Understandably, the virtualization method provided by the following embodiments is described by taking the application scenario shown in FIG. 3a as an example, that is, taking the input device being a mouse and the three-dimensional model being a mouse model as an example.
  • FIG. 3c is a schematic diagram of another application scenario provided by an embodiment of the present disclosure; FIG. 3c includes a keyboard 380, a head-mounted display 320, and a user's hand 330.
  • the application scenario of the keyboard 380 is the same as that of the mouse 310 in FIG. 3a, and details are not repeated here.
  • the virtual reality software system acquires the data of the input device in real time, wherein the data of the input device specifically includes the configuration information of the input device, the input signal, the image of the input device, etc.; the configuration information includes model information, and the model information refers to the model number of the input device.
  • the model information of the input device may also be obtained; according to the model information, the 3D model corresponding to the input device is determined.
  • the virtual reality software system can determine the target information of the mouse model in the virtual reality system based on the input signal of the mouse device or the image of the mouse device, where the target information includes position information and attitude information.
  • the head-mounted display 320 shown in FIG. 3a constructs a space based on the positional relationship of the head-mounted display worn on the user's body; this space can be called the target space, and the mouse and the user's hand are within the determined target space. It is understandable that the scene displayed in the virtual reality scene is the scene in the target space.
  • the target information is the position information and attitude information in the target space.
  • determining the target information of the 3D model corresponding to the input device in the virtual reality system specifically includes: determining, according to the input signal, the target information of the 3D model corresponding to the input device in the virtual reality system.
  • the virtual reality software system can determine the target information of the mouse model in the virtual reality system according to the acquired input signal of the mouse device.
  • the mouse model is displayed at the target information.
  • the posture of the mouse model displayed in the virtual reality scene is the same as the posture of the mouse device in the real space.
  • determining the target information of the 3D model corresponding to the input device in the virtual reality system based on the data of the input device in S220 above may also include: determining the target information of the 3D model corresponding to the input device in the virtual reality system based on the image of the input device.
  • the virtual reality software system can also determine the target information of the mouse model in the virtual reality system according to the acquired image of the mouse device, so as to display the mouse model at the target information in the virtual reality scene.
  • the posture of the mouse model displayed in the virtual reality scene is the same as the posture of the mouse device in the real space.
  • the image of the mouse device may be generated by real-time shooting by a camera installed on the head-mounted display 320, where the camera may be an infrared camera, a color camera or a grayscale camera.
  • the camera installed on the head-mounted display 320 in FIG. 3a can capture images including the mouse 310, and transmit the images to the virtual reality software system in the head-mounted display for processing.
  • the target information of the mouse model corresponding to the mouse device in the virtual reality system can be determined in the above two ways, by identifying the input signal of the mouse device and/or the buttons in the image of the mouse device; either of the two ways may be selected, or both at the same time. This can effectively avoid situations where the complete image of the mouse device cannot be captured, or the input signal of the mouse device cannot be received normally, so that interactive operations can continue, improving usability.
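The fallback between the two determination methods can be sketched as below. The function name and the policy of averaging when both sources are available are assumptions for illustration; the source only says that either or both ways may be used.

```python
def initial_target_info(image_pose, signal_pose):
    """Pick the initial pose from the camera image, the input signal, or both.

    image_pose / signal_pose are (x, y, z) tuples, or None when that
    source is unavailable (e.g. the device is occluded from the camera,
    or no button press has been received)."""
    if image_pose is not None and signal_pose is not None:
        # both sources available: fuse them (simple average, assumed policy)
        return tuple((a + b) / 2 for a, b in zip(image_pose, signal_pose))
    if image_pose is not None:
        return image_pose
    if signal_pose is not None:
        return signal_pose
    raise ValueError("neither camera image nor input signal available")
```

The point is that losing one source (e.g. a fully occluded mouse) does not stop the interaction as long as the other source still works.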
  • the target information of the mouse model in the virtual reality system determined in the above two ways can be regarded as the initial target information corresponding to the mouse device below; the initial target information can also be called the initial position.
  • the 3D model is mapped in the virtual reality scene constructed by the virtual reality system.
  • the mouse model may be displayed at the target information in the virtual reality scene, that is, the mouse model is displayed at the determined initial target information.
  • the mouse device is pre-configured with an inertial sensor, which collects three-dimensional data about the mouse device in real time; the inertial sensor is also called an inertial measurement unit (IMU), a device used to measure an object's three-axis attitude angles and acceleration.
  • the data of the inertial sensor can also be understood as including three sets of data: a three-axis gyroscope, a three-axis accelerometer, and a three-axis magnetometer; each set includes data in the X, Y and Z directions, i.e., nine values in total.
  • the three-axis gyroscope measures the angular velocity of the mouse device on the three axes; the three-axis accelerometer measures the acceleration of the mouse device on the three axes; the three-axis magnetometer provides the orientation of the mouse device on the three axes.
  • these nine values constitute the positioning information, and according to the positioning information and the initial target information, the target information of the mouse model in the virtual reality system can be accurately determined.
  • the inertial sensor configured on the input device includes at least one of the following situations: the inertial sensor is configured on the surface of the input device; the inertial sensor is configured inside the input device.
  • the inertial sensor can be configured on the surface of the mouse device, as in the scene shown in FIG. 3a, where the inertial sensor is arranged on the surface of an ordinary mouse device, for example in the upper right corner; in this case the inertial sensor can be understood as an independent device not controlled by the mouse, with its own power module, etc., which can be mounted directly on the mouse device.
  • the inertial sensor can also be configured inside the mouse device, for example, in an internal circuit of the mouse device. In this case, it can be understood as a mouse device with an inertial sensor.
  • the mouse device in the real space may move; its target information in the virtual reality system is then re-determined relative to the initial target information.
  • the mouse model is displayed at the newly determined target information in the virtual reality scene, wherein the virtual reality scene displays the scene of the target space.
  • the embodiment of the present disclosure provides a virtual method for an input device.
  • the target information of the 3D model corresponding to the input device in the virtual reality system is determined and, at the same time, the 3D data detected by the inertial sensor installed on the input device is acquired in real time.
  • then, according to the 3D data detected by the inertial sensor, the target information of the 3D model in the virtual reality system is updated, and the 3D model is displayed at the updated target information in the virtual reality scene.
  • the method for virtualizing an input device disclosed in the present disclosure can accurately map an input device in a real space to a virtual reality scene, and facilitate subsequent users to efficiently interact with the input device according to a three-dimensional model in the virtual reality scene.
  • FIG. 4 is a schematic flowchart of a virtual method for an input device provided by an embodiment of the present disclosure.
  • the target information includes spatial position information
  • the spatial position information refers to the position of the input device in the target space.
  • the inertial sensor acquires in real time the trajectory and attitude of the input device relative to a certain initial position; that is, the data collected by the inertial sensor must be given an initial position to clarify the specific starting point or reference of the subsequently collected trajectory and attitude.
  • the inertial sensor also collects the data of the mouse device in real time, but the collected data has no reference object; it may only include trajectory and attitude information such as "translated to the right", from which it is impossible to determine exactly from where the device translated or its specific position after the translation. It is therefore necessary to determine an initial spatial position in order to accurately determine the specific position of the mouse device after it moves.
  • the initial spatial position is in the target space constructed above, and the specific position after movement is in the same target space.
  • the relative amounts of position movement of the input device in the three directions of the spatial coordinate system are calculated from the three-dimensional acceleration data and three-dimensional gyroscope data of the inertial sensor.
  • the three-dimensional data includes three-dimensional magnetometer data, three-dimensional acceleration data and three-dimensional gyroscope data, from which the relative position movement of the input device in the three directions of the spatial coordinate system of the target space is calculated.
  • the relative amount of position movement is the distance moved by the input device in the X, Y and Z directions of the target space.
  • the data of the inertial sensor can also be understood as the distance variation relative to the initial spatial position.
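Computing the relative position movement from acceleration data amounts to dead reckoning by double integration. The sketch below shows only the integration step, assuming the samples are already gravity-free and expressed in the target-space frame (in practice the gyroscope and magnetometer data are needed first to rotate body-frame readings into that frame); the function name is hypothetical.

```python
def integrate_position(accel_samples, dt, initial_position=(0.0, 0.0, 0.0)):
    """Twice-integrate world-frame acceleration samples into a position.

    accel_samples: iterable of (ax, ay, az) in m/s^2, already
    gravity-compensated and rotated into the target-space frame.
    dt: sampling interval in seconds."""
    pos = list(initial_position)
    vel = [0.0, 0.0, 0.0]
    for sample in accel_samples:
        for i, a in enumerate(sample):
            vel[i] += a * dt       # v += a * dt
            pos[i] += vel[i] * dt  # p += v * dt
    return tuple(pos)
```

The per-axis outputs are exactly the "relative amounts of position movement in the three directions" the text describes, measured from the initial spatial position.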
  • the method further includes: updating the initial spatial position; and correcting calculation errors according to the updated initial spatial position.
  • when the target information of the 3D model is updated, calculation errors usually accumulate; these can be corrected by re-determining the initial spatial position.
  • the method for updating the initial spatial position is as described above. Specifically, the initial spatial position can be obtained by image recognition and/or pressing a button, which will not be repeated here.
  • for example, based on the initial target information A and the data of the inertial sensor, the target information of the mouse device in the virtual reality system can be determined for the subsequent five updates; the initial spatial position B is then re-determined, and the error accumulated by calculations based on the initial spatial position A is corrected based on the initial spatial position B, that is, the calculation error is corrected periodically according to the initial spatial position.
  • the target information also includes attitude information; updating the target information of the 3D model in the virtual reality system according to the 3D data of the inertial sensor includes: updating the attitude information of the 3D model in the virtual reality system based on the three-dimensional magnetometer data, three-dimensional acceleration data and three-dimensional gyroscope data of the inertial sensor and the relative spatial position relationship of the sensor on the input device.
  • the target information also includes attitude information
  • the method for determining the attitude information of the input device in the target space according to the three-dimensional data specifically includes: updating the attitude information of the 3D model in the virtual reality system according to the three-dimensional magnetometer data, three-dimensional acceleration data and three-dimensional gyroscope data of the inertial sensor and the relative spatial position relationship of the inertial sensor on the input device.
  • the relative spatial position relationship of the inertial sensor on the input device refers to the specific position of the sensor on the input device.
  • the inertial sensor 314 is arranged on the upper right of the surface of the mouse 310; that is, a correspondence is established between the inertial sensor on the input device and the target space, so as to calculate the attitude information of the 3D model corresponding to the input device in the target space. It can be understood that, in the process of calculating the attitude information of the 3D model, the initial spatial position of the device does not need to be input.
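That attitude can be obtained from the accelerometer and magnetometer alone (no initial position required) using a standard tilt-compensated formula: roll and pitch from the gravity vector, yaw from the magnetometer rotated back to the horizontal plane. The sketch below assumes a particular axis convention and omits the gyroscope smoothing a real system would add; it is a common textbook computation, not the patent's specific algorithm.

```python
import math

def attitude_from_accel_mag(accel, mag):
    """Roll/pitch from gravity, yaw from the tilt-compensated magnetometer.

    accel: (ax, ay, az) in m/s^2; mag: (mx, my, mz) in arbitrary units.
    Axis convention (assumed): Z up when the device lies flat."""
    ax, ay, az = accel
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    mx, my, mz = mag
    # rotate the magnetic vector back to the horizontal plane
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    yaw = math.atan2(-yh, xh)
    return roll, pitch, yaw
```

The known offset of the sensor on the device (e.g. the upper-right corner of the mouse surface) is then applied as a fixed rotation/translation to turn the sensor attitude into the model attitude.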
  • an embodiment of the present disclosure provides a virtualization method for an input device. After the initial spatial position of the 3D model in the virtual reality scene is determined, the acquired 3D data of the inertial sensor is referenced to that initial spatial position, and the target information of the 3D model in the virtual reality system is re-determined, so that the display state of the 3D model in the virtual reality scene is updated in real time, quickly and accurately according to the state of the input device in the real space, facilitating subsequent operations.
  • FIG. 5 is a schematic structural diagram of a virtual device of an input device provided by an embodiment of the present disclosure.
  • the virtual device of the input device provided by the embodiment of the present disclosure can execute the processing flow provided by the above embodiment of the virtual method of the input device.
  • the device 500 includes:
  • the first acquiring unit 510 is configured to acquire data of the input device
  • the determination unit 520 is configured to determine the target information of the three-dimensional model corresponding to the input device in the virtual reality system based on the data of the input device;
  • the second acquiring unit 530 is configured to acquire three-dimensional data of the inertial sensor
  • An update unit 540 configured to update the target information of the three-dimensional model in the virtual reality system according to the three-dimensional data of the inertial sensor;
  • the mapping unit 550 is configured to map the 3D model to a virtual reality scene corresponding to the virtual reality system based on the updated target information.
  • the target information in the apparatus 500 includes attitude information.
  • the update unit 540 updates the target information of the three-dimensional model in the virtual reality system according to the three-dimensional data of the inertial sensor, and is specifically configured to: update the attitude information of the three-dimensional model in the virtual reality system based on the three-dimensional magnetometer data, three-dimensional acceleration data and three-dimensional gyroscope data of the inertial sensor and the relative spatial position relationship of the inertial sensor on the input device.
  • the target information in apparatus 500 also includes spatial location information.
  • the update unit 540 updates the target information of the three-dimensional model in the virtual reality system according to the three-dimensional data of the inertial sensor, and is specifically configured to:
  • take the spatial position information of the three-dimensional model in the virtual reality system as the initial spatial position;
  • calculate, from the three-dimensional magnetometer data, three-dimensional acceleration data and three-dimensional gyroscope data of the inertial sensor, the relative amounts of position movement of the input device in the three directions of the spatial coordinate system; and
  • update the spatial position information of the three-dimensional model in the virtual reality system according to the initial spatial position and the relative movement of the input device in the three directions of the spatial coordinate system.
  • the inertial sensor configured on the input device in device 500 includes at least one of the following conditions:
  • the inertial sensor is arranged on the surface of the input device
  • the inertial sensor is arranged inside the input device.
  • the apparatus 500 further includes a correction unit, configured to update the initial spatial position; and correct calculation errors according to the updated initial spatial position.
  • the virtual device of the input device in the embodiment shown in FIG. 5 can be used to implement the technical solution of the above method embodiment, and its implementation principle and technical effect are similar, and will not be repeated here.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the electronic device provided by the embodiments of the present disclosure can execute the processing flow provided by the above embodiments. As shown in FIG. 6, the electronic device includes a processor 610 and a memory in which a computer program is stored; the program is configured to be executed by the processor 610 to implement the virtualization method for an input device described above.
  • an embodiment of the present disclosure further provides a computer-readable storage medium, on which a computer program is stored, and the computer program is executed by a processor to implement the virtualization method of the input device in the above-mentioned embodiment.
  • an embodiment of the present disclosure also provides a computer program product, where the computer program product includes a computer program or an instruction, and when the computer program or instruction is executed by a processor, the above input device virtualization method is implemented.
  • the virtualization method for an input device can effectively calculate the posture information and position information of the physical input device and can well display the 3D model corresponding to the physical input device in the virtual scene, so as to obtain a better virtual reality interactive experience; it has strong industrial applicability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

The present disclosure relates to a virtualization method and apparatus for an input device, an electronic device, and a storage medium. The method specifically includes: acquiring data of an input device; then, based on the data of the input device, determining target information of a three-dimensional model corresponding to the input device in a virtual reality system; at the same time, acquiring in real time three-dimensional data detected by an inertial sensor installed on the input device; then, according to the three-dimensional data detected by the inertial sensor, updating the target information of the three-dimensional model in the virtual reality system; and displaying the three-dimensional model at the updated target information in a virtual reality scene. The virtualization method for an input device provided by the present disclosure can accurately virtualize an input device in real space into a virtual reality scene, making it convenient for users to subsequently interact efficiently with the input device according to the three-dimensional model in the virtual reality scene.

Description

Virtualization method and apparatus for an input device, device, and storage medium
The present disclosure claims priority to the Chinese patent application filed with the Chinese Patent Office on February 28, 2022, with application number 202210185778.9 and invention title "一种输入设备的虚拟方法、装置、设备和存储介质" (Virtualization method, apparatus, device and storage medium for an input device), the entire contents of which are incorporated into the present disclosure by reference.
Technical Field
The present disclosure relates to the field of data technology, and in particular to a virtualization method and apparatus for an input device, a device, and a storage medium.
Background
At present, virtual scenes are widely used. To map the model corresponding to a real physical input device into a virtual scene, the form and position of the model must be determined. At this stage, the form and position of the physical input device are mainly recognized from image data captured by various cameras, such as color or infrared cameras, or from detection data acquired by various detection-type sensors, such as radar. A problem common to existing cameras and detection-type sensors is that when there is an obstruction between the camera or detection sensor and the physical input device being recognized, the acquired image or detection data is severely incomplete, or no image or data is obtained at all. This makes the recognition of the form and position of the physical input device inaccurate or even impossible, and further makes it impossible to display the model of the physical input device completely in the virtual scene.
Summary
(1) Technical problem to be solved
The technical problem to be solved by the present disclosure is the existing problem that the model of a physical input device cannot be completely displayed in a virtual scene.
(2) Technical solution
To solve the above technical problem, an embodiment of the present disclosure provides a virtualization method for an input device, including:
acquiring data of the input device;
determining, based on the data of the input device, target information of a three-dimensional model corresponding to the input device in a virtual reality system;
acquiring three-dimensional data of an inertial sensor configured on the input device;
updating the target information of the three-dimensional model in the virtual reality system according to the three-dimensional data of the inertial sensor; and
mapping the three-dimensional model into a virtual reality scene corresponding to the virtual reality system based on the updated target information.
In a second aspect, a virtualization apparatus for an input device is also provided, including:
a first acquisition unit configured to acquire data of the input device;
a determination unit configured to determine, based on the data of the input device, target information of a three-dimensional model corresponding to the input device in a virtual reality system;
a second acquisition unit configured to acquire data of the inertial sensor;
an update unit configured to update the target information of the three-dimensional model in the virtual reality system according to the three-dimensional data of the inertial sensor; and
a mapping unit configured to map the three-dimensional model into a virtual reality scene corresponding to the virtual reality system based on the updated target information.
In a third aspect, an electronic device is also provided, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and is configured to be executed by the processor to implement the virtualization method for an input device described above.
In a fourth aspect, a computer-readable storage medium is also provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the virtualization method for an input device described above are implemented.
In a fifth aspect, a computer program product is also provided; the computer program product includes a computer program or instructions, and when the computer program or instructions are executed by a processor, the virtualization method for an input device described above is implemented.
(3) Beneficial effects
Compared with the prior art, the above technical solutions provided by the embodiments of the present disclosure have the following advantages:
The virtualization method for an input device provided by the embodiments of the present disclosure acquires the data of the input device, then determines, based on the data of the input device, the target information of the three-dimensional model corresponding to the input device in the virtual reality system; at the same time it acquires in real time the three-dimensional data detected by the inertial sensor installed on the input device, then updates, according to the three-dimensional data detected by the inertial sensor, the target information of the three-dimensional model in the virtual reality system, and displays the three-dimensional model at the updated target information in the virtual reality scene. The virtualization method for an input device provided by the present disclosure can accurately map an input device in real space into the virtual reality scene, making it convenient for users to subsequently interact efficiently with the input device according to the three-dimensional model in the virtual reality scene.
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本公开。
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its principles.
To describe the technical solutions of the embodiments of the present disclosure or of the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of a method for virtualizing an input device provided by an embodiment of the present disclosure;
Fig. 3a is a schematic diagram of another application scenario provided by an embodiment of the present disclosure;
Fig. 3b is a schematic diagram of a virtual reality scene provided by an embodiment of the present disclosure;
Fig. 3c is a schematic diagram of another application scenario provided by an embodiment of the present disclosure;
Fig. 4 is a schematic flowchart of a method for virtualizing an input device provided by an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of an apparatus for virtualizing an input device provided by an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments are described below clearly and completely. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
At present, in a virtual reality system, the user usually interacts with the virtual scene through an input device. The virtual reality system includes a head-mounted display and a virtual reality software system; the software system may specifically include an operating system, software algorithms for image recognition, software algorithms for spatial computation, and rendering software for rendering the virtual scene. Illustratively, referring to Fig. 1, a schematic diagram of an application scenario provided by an embodiment of the present disclosure, Fig. 1 includes a head-mounted display 110. The head-mounted display 110 may be an all-in-one device, meaning the virtual reality software system runs on the head-mounted display 110 itself; alternatively, the head-mounted display 110 may be connected to a server on which the virtual reality software system runs. Specifically, the following embodiments describe the virtualization method provided by the present disclosure in detail taking the case where the virtual reality software system is deployed on the head-mounted display; the head-mounted display is connected to the input device, which may specifically be a mouse or a keyboard.
To address the above technical problem, an embodiment of the present disclosure provides a method for virtualizing an input device. By acquiring the three-dimensional data, composed of magnetic, gyroscope and acceleration readings, of an inertial sensor fixed inside or on the outside of a physical input device, the method computes the pose information and position information of the physical input device, so that the three-dimensional model corresponding to the physical device can be displayed in the virtual scene and the user can operate the physical device through its model for efficient input. The virtualization method provided by the present disclosure is unaffected by occlusion and effectively solves the problem, in existing methods, of the camera image or detection-type sensor being blocked: it keeps working even when the physical input device is completely covered. The method is described in detail through one or more of the following embodiments.
Fig. 2 is a schematic flowchart of a method for virtualizing an input device provided by an embodiment of the present disclosure, applied to a virtual reality system; the method specifically includes the following steps S210 to S250 shown in Fig. 2:
It can be understood that the virtual reality software system may be deployed in the head-mounted display; it processes the input signals or data received from the input device and returns the processing results to the display screen of the head-mounted display, which then changes the display state of the input device in the virtual reality scene in real time according to those results.
Illustratively, referring to Fig. 3a, a schematic diagram of another application scenario provided by an embodiment of the present disclosure, Fig. 3a includes a mouse 310, a head-mounted display 320 and a user's hand 330. The mouse 310 includes a left button 311, a scroll wheel 312, a right button 313 and an inertial sensor 314, shown as the black box on the mouse 310 in Fig. 3a; the inertial sensor 314 may be arranged on the surface of the mouse 310. The user wears the head-mounted display 320 on the head and operates the mouse 310 with the hand 330, while the mouse 310 is connected to the head-mounted display 320. Reference numeral 340 in Fig. 3b denotes the scene constructed inside the head-mounted display 320 of Fig. 3a, which may be called the virtual reality scene 340. The user perceives and controls the mouse 310 by watching the mouse model 350 corresponding to it displayed in the virtual reality scene 340: in the scene 340, the three-dimensional model 360 corresponding to the user's hand 330 operates the mouse model 350, and the operation interface 370, similar to a terminal's display screen, is the interface operated with the mouse. The operation of the mouse model 350 by the hand model 360 in the virtual reality scene 340 can, to a certain extent, be synchronized with the actual operation of the mouse 310 by the user's hand 330, which is equivalent to the user directly seeing the elements of the mouse with both eyes while operating it; this improves the user experience and the interaction speed. It can be understood that the virtualization method provided in the following embodiments is described taking the application scenario shown in Fig. 3a as an example, that is, taking the input device being a mouse and the three-dimensional model being a mouse model as an example.
Illustratively, referring to Fig. 3c, a schematic diagram of another application scenario provided by an embodiment of the present disclosure, Fig. 3c includes a keyboard 380, the head-mounted display 320 and the user's hand 330; the application scenario of the keyboard 380 is the same as that of the mouse 310 in Fig. 3a and is not repeated here.
S210: acquire data of the input device.
It can be understood that the virtual reality software system acquires the data of the input device in real time, where the data specifically includes configuration information of the input device, input signals, images of the input device, and the like; the configuration information includes model information, i.e. the model number of the input device.
Optionally, before determining, based on the data of the input device, the target information of the corresponding three-dimensional model in the virtual reality system, the model information of the input device may be acquired, and the three-dimensional model corresponding to the input device determined according to that model information.
It can be understood that after the three-dimensional model corresponding to the input device has been determined for the first time, as long as the user does not switch input devices, only the input signals and images of the input device need to be acquired subsequently, so that the display state of the model in the virtual reality scene can be updated quickly and accurately.
S220: determine, based on the data of the input device, target information of the three-dimensional model corresponding to the input device in the virtual reality system.
It can be understood that, on the basis of S210, after the mouse model corresponding to the mouse has been determined from its configuration information, the virtual reality software system may determine the target information of the mouse model in the virtual reality system based on the input signals or images of the mouse, where the target information includes position information and pose information.
Illustratively, the head-mounted display 320 shown in Fig. 3a is equipped with multiple cameras, specifically three or four, which capture the environment around the user's head in real time; the positional relationship between the captured environment and the head-mounted display worn on the head is determined and a space is constructed, which may be called the target space. The mouse and the user's hand lie within this target space. It can be understood that the scene displayed in the virtual reality scene is the scene within the target space, and the target information is position and pose information within the target space.
Optionally, determining the target information based on the data of the input device in S220 specifically includes: determining the target information of the three-dimensional model corresponding to the input device in the virtual reality system based on the input signals of the input device.
It can be understood that the virtual reality software system may determine the target information of the mouse model from the acquired input signals of the mouse, which may be generated by pressing a button on the mouse or rolling its wheel, so that the mouse model can be displayed at that target information in the virtual reality scene. In this case, the pose of the mouse model displayed in the scene is the same as the pose of the mouse in real space.
Optionally, determining the target information based on the data of the input device in S220 may also include: determining the target information of the three-dimensional model corresponding to the input device in the virtual reality system based on images of the input device.
It can be understood that the virtual reality software system may also determine the target information of the mouse model from acquired images of the mouse, so that the mouse model can be displayed at that target information in the virtual reality scene; here too, the pose of the displayed mouse model matches the pose of the mouse in real space. The images of the mouse may be captured and generated in real time by cameras mounted on the head-mounted display 320, which may be infrared, color or grayscale cameras. Specifically, a camera mounted on the head-mounted display 320 in Fig. 3a may capture images that include the mouse 310 and transmit them to the virtual reality software system in the head-mounted display for processing.
It can be understood that the target information of the mouse model in the virtual reality system can be determined through either or both of the two approaches above, i.e. recognizing the input signals of the mouse and/or the buttons in images of the mouse. Using both effectively ensures that interaction can continue even when a complete image of the mouse cannot be captured or its input signals cannot be received normally, improving usability. The target information determined through these two approaches can be regarded as the initial target information of the mouse referred to below, and the initial target information may also be called the initial position.
Optionally, after the target information of the three-dimensional model in the virtual reality system has been determined, the three-dimensional model is mapped into the virtual reality scene constructed by the virtual reality system.
It can be understood that after the target information of the mouse model in the virtual reality system has been determined, the mouse model may be displayed at that target information, i.e. at the determined initial target information, in the virtual reality scene.
S230: acquire three-dimensional data of the inertial sensor arranged on the input device.
It can be understood that an inertial sensor is pre-arranged on the mouse and collects three-dimensional data about the mouse in real time; an inertial sensor, also called an inertial measurement unit (IMU), is a device that measures the three-axis attitude angles and acceleration of an object.
It can be understood that the data of the inertial sensor can also be seen as three groups of data, from a three-axis gyroscope, a three-axis accelerometer and a three-axis magnetometer, each group containing readings along the X, Y and Z directions, i.e. nine values in total. The three-axis gyroscope measures the angular velocity of the mouse about the three axes, the three-axis accelerometer measures its acceleration along the three axes, and the three-axis magnetometer provides its heading on the three axes. These nine values constitute the positioning information, and from the positioning information together with the initial target information, the target information of the mouse model in the virtual reality system can be determined accurately.
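The nine-value record described above can be sketched as a simple data structure. This is only an illustrative layout, not part of the disclosed method; the class and field names (`ImuSample`, `gyro`, `accel`, `mag`) are assumptions for the example.

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class ImuSample:
    """One 9-axis IMU reading: three XYZ triples, nine values in total."""
    gyro: Vec3   # angular velocity about X/Y/Z (e.g. rad/s)
    accel: Vec3  # linear acceleration along X/Y/Z (e.g. m/s^2)
    mag: Vec3    # magnetic field along X/Y/Z, providing heading

    def as_tuple(self) -> Tuple[float, ...]:
        # Flatten to the nine numbers that make up one positioning record.
        return (*self.gyro, *self.accel, *self.mag)

sample = ImuSample(gyro=(0.0, 0.01, 0.0),
                   accel=(0.0, 0.0, 9.81),
                   mag=(22.0, 5.0, -40.0))
assert len(sample.as_tuple()) == 9
```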
Optionally, the inertial sensor arranged on the input device includes at least one of the following cases: the inertial sensor is arranged on the surface of the input device; the inertial sensor is arranged inside the input device.
It can be understood that the inertial sensor may be arranged on the surface of the mouse, as in the scenario of Fig. 3a, where it sits on the surface of an ordinary mouse, for example at its upper right; in this case the inertial sensor can be understood as an independent component not controlled by the mouse, with its own power module and so on, mounted directly on the mouse. The inertial sensor may also be arranged inside the mouse, for example in its internal circuitry; in that case the device can be understood as a mouse with a built-in inertial sensor.
S240: update, according to the three-dimensional data of the inertial sensor, the target information of the three-dimensional model in the virtual reality system.
It can be understood that, on the basis of S230 and S220, the target information of the mouse model in the virtual reality system is re-determined from the three-dimensional data of the inertial sensor acquired in real time, and the mouse model is displayed at the re-determined target information in the virtual reality scene. After the initial target information of the mouse model has been determined, the mouse may move in real space; the target information of the mouse model can then be re-determined, relative to the initial target information, from the positioning information about the mouse acquired in real time by the inertial sensor.
S250: map, based on the updated target information, the three-dimensional model into the virtual reality scene corresponding to the virtual reality system.
It can be understood that, on the basis of S240, after the target information of the mouse model in the target space has been updated, the mouse model is displayed in the virtual reality scene at the re-determined target information, the virtual reality scene displaying what lies within the target space.
In the method for virtualizing an input device provided by this embodiment of the present disclosure, data of the input device is acquired; based on that data, the target information of the three-dimensional model corresponding to the input device in the virtual reality system is determined; at the same time, the three-dimensional data detected by the inertial sensor mounted on the input device is acquired in real time; the target information of the model is updated according to that data; and the model is displayed at the updated target information in the virtual reality scene. This method can accurately map an input device in real space into the virtual reality scene, so that the user can subsequently interact efficiently with the input device on the basis of its three-dimensional model in the scene.
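One iteration of the S210 to S250 loop can be sketched as a function that wires the five stages together. This is a minimal sketch under stated assumptions: the five callables and their names are illustrative placeholders, not part of the disclosure.

```python
def virtualize_input_device(get_device_data, locate_model, read_imu,
                            integrate, render):
    """One pass of the S210-S250 pipeline, with each stage injected as a
    callable (all names here are illustrative placeholders):
    acquire device data, derive the model's target info, read the IMU,
    update the target info from the 3D data, map the model into the scene."""
    data = get_device_data()          # S210: device data (config, signals, images)
    target = locate_model(data)       # S220: initial target info (position + pose)
    imu = read_imu()                  # S230: 9-axis three-dimensional data
    target = integrate(target, imu)   # S240: update target info from IMU data
    return render(target)             # S250: map the model into the VR scene
```

With stub stages, a single pass simply shifts the initial target by the IMU-derived displacement before rendering.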
On the basis of the above embodiments, Fig. 4 is a schematic flowchart of a method for virtualizing an input device provided by an embodiment of the present disclosure. Optionally, the target information includes spatial position information, i.e. the position information of the input device within the target space. Updating, according to the three-dimensional data of the inertial sensor, the target information of the three-dimensional model in the virtual reality system, that is, updating the spatial position information of the model in the target space, specifically includes steps S410 to S430 shown in Fig. 4:
S410: take the spatial position information of the three-dimensional model in the virtual reality system as the initial spatial position.
It can be understood that the inertial sensor acquires, in real time, the motion trajectory and pose of the input device relative to some initial position from some moment onward; that is, the data collected by the inertial sensor needs a given initial position to define the starting point, or reference, of the subsequently collected trajectories and poses. For example, without a given initial position the inertial sensor would still collect data about the mouse in real time, but that data would lack a reference: it might only record trajectory and pose information such as a rightward translation, without it being possible to determine exactly from where the translation started or where it ended. An initial spatial position therefore needs to be determined so that the exact position of the mouse after movement can be established; the initial spatial position lies within the target space constructed above, and so does that exact position.
S420: compute, from the three-dimensional magnetic data, three-dimensional acceleration data and three-dimensional gyroscope data of the inertial sensor, the relative positional displacement of the input device along the three directions of the spatial coordinate system.
It can be understood that from the three-dimensional data collected by the inertial sensor about the mouse, the data comprising three-dimensional magnetic, acceleration and gyroscope readings, the relative positional displacement of the input device along the three directions of the coordinate system of the target space is computed; the relative positional displacement is the distance the input device has moved along the X, Y and Z directions within the target space. The data of the inertial sensor can also be understood as the change in distance relative to the initial spatial position.
S430: update, according to the initial spatial position and the relative positional displacement of the input device along the three directions of the spatial coordinate system, the spatial position information of the three-dimensional model in the virtual reality system.
It can be understood that, on the basis of S410 and S420, the target information of the mouse model in the virtual reality system is updated from the initial spatial position and the relative positional displacement of the mouse along the three directions of the coordinate system. For example, if the three-dimensional coordinates of the initial position are (1, 2, 3) and the inertial sensor measures that the mouse has moved one unit along the X axis, then, with the mouse pose unchanged, the coordinates of the mouse model are updated to (2, 2, 3); these coordinates (the position information) together with the unchanged pose information constitute the updated target information of the mouse model in the virtual reality system.
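The anchor-plus-displacement update above can be sketched in a few lines. The function names are illustrative; the displacement routine shows only the textbook double-integration relation and deliberately omits the filtering, gravity removal and sensor fusion a real IMU pipeline would need.

```python
def integrate_displacement(accels, dt):
    """Naively double-integrate one-axis acceleration samples (m/s^2,
    sampled every dt seconds) into a displacement; a real pipeline
    would filter the data and remove gravity first."""
    v = 0.0  # running velocity
    d = 0.0  # running displacement
    for a in accels:
        v += a * dt
        d += v * dt
    return d

def update_position(initial, delta):
    """New position = initial spatial position + per-axis relative movement."""
    return tuple(p + dp for p, dp in zip(initial, delta))

# The example from the text: start at (1, 2, 3), move one unit along X.
assert update_position((1, 2, 3), (1, 0, 0)) == (2, 2, 3)
```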
Optionally, the method further includes: updating the initial spatial position, and correcting the computation error according to the updated initial spatial position.
It can be understood that when the updated target information of the mouse model is computed from the data acquired by the inertial sensor and the initial spatial position, computation errors usually accumulate. They can be corrected by re-determining the initial spatial position; the initial spatial position is updated as described above, specifically through the image recognition approach and/or the key-press approach, which is not repeated here. For example, after initial spatial position A is determined, the next five determinations of the target information of the mouse in the virtual reality system may be based on initial target information A and the inertial sensor data; after more than five, initial spatial position B is re-determined, and the error accumulated in the computations based on A is corrected on the basis of B. In other words, the computation error is corrected periodically according to the initial spatial position.
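The periodic re-anchoring described above can be sketched as a small tracker. This is a minimal sketch, assuming a fixed re-anchor period; the class and method names are illustrative, and the absolute fix would in practice come from the image recognition or key-press localisation described earlier.

```python
class DriftCorrectedTracker:
    """Dead-reckon from an anchor position; after `period` IMU updates,
    request an absolute fix and re-anchor, discarding the accumulated
    (possibly drifted) offset. Names are illustrative placeholders."""

    def __init__(self, anchor, period=5):
        self.anchor = anchor                 # last absolute position (e.g. position A)
        self.offset = (0.0, 0.0, 0.0)        # displacement accumulated since anchoring
        self.period = period                 # updates allowed before re-anchoring
        self.count = 0

    def apply_delta(self, delta):
        """Accumulate one IMU-derived per-axis displacement."""
        self.offset = tuple(o + d for o, d in zip(self.offset, delta))
        self.count += 1
        return self.position()

    def position(self):
        return tuple(a + o for a, o in zip(self.anchor, self.offset))

    def needs_reanchor(self):
        return self.count >= self.period

    def reanchor(self, absolute_fix):
        """Replace the anchor (e.g. position B) and reset the offset,
        cancelling whatever integration error it carried."""
        self.anchor = absolute_fix
        self.offset = (0.0, 0.0, 0.0)
        self.count = 0
```

After `period` dead-reckoned updates, `needs_reanchor()` signals that a fresh absolute position should be obtained and passed to `reanchor()`.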
Optionally, the target information further includes pose information; updating, according to the three-dimensional data of the inertial sensor, the target information of the three-dimensional model in the virtual reality system includes: updating the pose information of the three-dimensional model in the virtual reality system according to the three-dimensional magnetic data, three-dimensional acceleration data and three-dimensional gyroscope data of the inertial sensor and the relative spatial positional relationship of the inertial sensor on the input device.
It can be understood that the target information further includes pose information, and the method for determining the pose information of the input device within the target space from the three-dimensional data specifically includes: updating the pose information of the model according to the three-dimensional magnetic, acceleration and gyroscope data of the inertial sensor and the relative spatial positional relationship of the sensor on the input device. That relationship refers to the specific location of the sensor on the input device; for example, in Fig. 3a the inertial sensor 314 is arranged at the upper right of the surface of the mouse 310. In other words, a correspondence between the inertial sensor on the input device and the target space is established, from which the pose information of the corresponding three-dimensional model within the target space is computed. It can be understood that computing the pose information of the three-dimensional model does not require the initial spatial position of the input device.
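One standard building block of such a pose computation can be sketched as follows: while the device is static, roll and pitch can be estimated from the accelerometer's gravity reading alone. This is a simplified illustration, not the disclosed algorithm; a full attitude estimate would additionally fuse the gyroscope and magnetometer data and account for the sensor's mounting offset on the device.

```python
import math

def roll_pitch_from_gravity(ax, ay, az):
    """Estimate roll and pitch (radians) from an accelerometer reading
    dominated by gravity, a common first step in IMU attitude
    estimation; gyro integration and magnetometer heading would
    complete the pose."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

# Device lying flat: gravity falls entirely on +Z, so roll = pitch = 0.
r, p = roll_pitch_from_gravity(0.0, 0.0, 9.81)
assert abs(r) < 1e-9 and abs(p) < 1e-9
```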
In the method for virtualizing an input device provided by this embodiment of the present disclosure, after the initial spatial position of the three-dimensional model in the virtual reality scene has been determined, the acquired three-dimensional data of the inertial sensor uses that initial spatial position as a reference to re-determine the target information of the model in the virtual reality system, so that the display state of the model in the virtual reality scene can be updated in real time, quickly and accurately according to the state of the input device in real space, facilitating subsequent operation.
Fig. 5 is a schematic structural diagram of an apparatus for virtualizing an input device provided by an embodiment of the present disclosure. The apparatus can carry out the processing flow provided by the above method embodiments. As shown in Fig. 5, the apparatus 500 includes:
a first acquisition unit 510, configured to acquire data of an input device;
a determination unit 520, configured to determine, based on the data of the input device, target information of a three-dimensional model corresponding to the input device in a virtual reality system;
a second acquisition unit 530, configured to acquire three-dimensional data of an inertial sensor;
an updating unit 540, configured to update, according to the three-dimensional data of the inertial sensor, the target information of the three-dimensional model in the virtual reality system; and
a mapping unit 550, configured to map, based on the updated target information, the three-dimensional model into a virtual reality scene corresponding to the virtual reality system.
Optionally, in the apparatus 500 the target information includes pose information.
Optionally, in updating, according to the three-dimensional data of the inertial sensor, the target information of the three-dimensional model in the virtual reality system, the updating unit 540 is specifically configured to:
update the pose information of the three-dimensional model in the virtual reality system according to the three-dimensional magnetic data, three-dimensional acceleration data and three-dimensional gyroscope data of the inertial sensor and the relative spatial positional relationship of the inertial sensor on the input device.
Optionally, in the apparatus 500 the target information further includes spatial position information.
Optionally, in updating, according to the three-dimensional data of the inertial sensor, the target information of the three-dimensional model in the virtual reality system, the updating unit 540 is specifically configured to:
take the spatial position information of the three-dimensional model in the virtual reality system as the initial spatial position;
compute, from the three-dimensional magnetic data, three-dimensional acceleration data and three-dimensional gyroscope data of the inertial sensor, the relative positional displacement of the input device along the three directions of the spatial coordinate system; and
update, according to the initial spatial position and that relative positional displacement, the spatial position information of the three-dimensional model in the virtual reality system.
Optionally, in the apparatus 500 the inertial sensor arranged on the input device includes at least one of the following cases:
the inertial sensor is arranged on the surface of the input device;
the inertial sensor is arranged inside the input device.
Optionally, the apparatus 500 further includes a correction unit, configured to update the initial spatial position and correct the computation error according to the updated initial spatial position.
The apparatus for virtualizing an input device of the embodiment shown in Fig. 5 can be used to carry out the technical solutions of the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
Fig. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. The electronic device can carry out the processing flow provided by the above embodiments. As shown in Fig. 6, the electronic device 600 includes a processor 610, a communication interface 620 and a memory 630, where a computer program is stored in the memory 630 and configured to be executed by the processor 610 to perform the above method for virtualizing an input device.
In addition, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement the method for virtualizing an input device of the above embodiments.
Furthermore, an embodiment of the present disclosure further provides a computer program product, which includes a computer program or instructions that, when executed by a processor, implement the above method for virtualizing an input device.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "include", "comprise" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article or device that includes it.
The above are only specific implementations of the present disclosure, enabling those skilled in the art to understand or implement it. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Industrial Applicability
The method for virtualizing an input device provided by the present disclosure can effectively compute the pose information and position information of a physical input device and display the three-dimensional model corresponding to the physical input device well in a virtual scene, yielding a better virtual reality interaction experience; it therefore has strong industrial applicability.

Claims (10)

  1. A method for virtualizing an input device, comprising:
    acquiring data of an input device;
    determining, based on the data of the input device, target information of a three-dimensional model corresponding to the input device in a virtual reality system;
    acquiring three-dimensional data of an inertial sensor arranged on the input device;
    updating, according to the three-dimensional data of the inertial sensor, the target information of the three-dimensional model in the virtual reality system; and
    mapping, based on the updated target information, the three-dimensional model into a virtual reality scene corresponding to the virtual reality system.
  2. The method according to claim 1, wherein the target information comprises pose information, and updating, according to the three-dimensional data of the inertial sensor, the target information of the three-dimensional model in the virtual reality system comprises:
    updating the pose information of the three-dimensional model in the virtual reality system according to three-dimensional magnetic data, three-dimensional acceleration data and three-dimensional gyroscope data of the inertial sensor and a relative spatial positional relationship of the inertial sensor on the input device.
  3. The method according to claim 1, wherein the target information comprises spatial position information, and updating, according to the three-dimensional data of the inertial sensor, the target information of the three-dimensional model in the virtual reality system comprises:
    taking the spatial position information of the three-dimensional model in the virtual reality system as an initial spatial position;
    computing, from three-dimensional magnetic data, three-dimensional acceleration data and three-dimensional gyroscope data of the inertial sensor, a relative positional displacement of the input device along three directions of a spatial coordinate system; and
    updating, according to the initial spatial position and the relative positional displacement of the input device along the three directions of the spatial coordinate system, the spatial position information of the three-dimensional model in the virtual reality system.
  4. The method according to claim 3, further comprising:
    updating the initial spatial position; and
    correcting a computation error according to the updated initial spatial position.
  5. The method according to claim 1, wherein the inertial sensor arranged on the input device comprises at least one of the following cases:
    the inertial sensor is arranged on a surface of the input device;
    the inertial sensor is arranged inside the input device.
  6. An apparatus for virtualizing an input device, comprising:
    a first acquisition unit, configured to acquire data of an input device;
    a determination unit, configured to determine, based on the data of the input device, target information of a three-dimensional model corresponding to the input device in a virtual reality system;
    a second acquisition unit, configured to acquire three-dimensional data of an inertial sensor;
    an updating unit, configured to update, according to the three-dimensional data of the inertial sensor, the target information of the three-dimensional model in the virtual reality system; and
    a mapping unit, configured to map, based on the updated target information, the three-dimensional model into a virtual reality scene corresponding to the virtual reality system.
  7. The apparatus according to claim 6, wherein the target information comprises pose information, and in updating, according to the three-dimensional data of the inertial sensor, the target information of the three-dimensional model in the virtual reality system, the updating unit is specifically configured to:
    update the pose information of the three-dimensional model in the virtual reality system according to three-dimensional magnetic data, three-dimensional acceleration data and three-dimensional gyroscope data of the inertial sensor and a relative spatial positional relationship of the inertial sensor on the input device.
  8. The apparatus according to claim 6, wherein the target information comprises spatial position information, and in updating, according to the three-dimensional data of the inertial sensor, the target information of the three-dimensional model in the virtual reality system, the updating unit is specifically configured to:
    take the spatial position information of the three-dimensional model in the virtual reality system as an initial spatial position;
    compute, from three-dimensional magnetic data, three-dimensional acceleration data and three-dimensional gyroscope data of the inertial sensor, a relative positional displacement of the input device along three directions of a spatial coordinate system; and
    update, according to the initial spatial position and the relative positional displacement of the input device along the three directions of the spatial coordinate system, the spatial position information of the three-dimensional model in the virtual reality system.
  9. An electronic device, comprising:
    a memory;
    a processor; and
    a computer program;
    wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method for virtualizing an input device according to any one of claims 1 to 5.
  10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method for virtualizing an input device according to any one of claims 1 to 5.
PCT/CN2023/078387 2022-02-28 2023-02-27 一种输入设备的虚拟方法、装置、设备和存储介质 WO2023160694A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210185778.9A CN114706489B (zh) 2022-02-28 2022-02-28 一种输入设备的虚拟方法、装置、设备和存储介质
CN202210185778.9 2022-02-28

Publications (1)

Publication Number Publication Date
WO2023160694A1 true WO2023160694A1 (zh) 2023-08-31

Family

ID=82167533

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/078387 WO2023160694A1 (zh) 2022-02-28 2023-02-27 一种输入设备的虚拟方法、装置、设备和存储介质

Country Status (3)

Country Link
US (1) US20230316677A1 (zh)
CN (1) CN114706489B (zh)
WO (1) WO2023160694A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114706490A (zh) 2022-02-28 2022-07-05 北京所思信息科技有限责任公司 一种鼠标的模型映射方法、装置、设备和存储介质
CN114706489B (zh) * 2022-02-28 2023-04-25 北京所思信息科技有限责任公司 一种输入设备的虚拟方法、装置、设备和存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07200162A (ja) * 1993-12-29 1995-08-04 Namco Ltd 仮想現実体験装置およびこれを用いたゲーム装置
CN105912110A (zh) * 2016-04-06 2016-08-31 北京锤子数码科技有限公司 一种在虚拟现实空间中进行目标选择的方法、装置及系统
CN109710056A (zh) * 2018-11-13 2019-05-03 宁波视睿迪光电有限公司 虚拟现实交互装置的显示方法及装置
CN109840947A (zh) * 2017-11-28 2019-06-04 广州腾讯科技有限公司 增强现实场景的实现方法、装置、设备及存储介质
CN114706489A (zh) * 2022-02-28 2022-07-05 北京所思信息科技有限责任公司 一种输入设备的虚拟方法、装置、设备和存储介质

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10055888B2 (en) * 2015-04-28 2018-08-21 Microsoft Technology Licensing, Llc Producing and consuming metadata within multi-dimensional data
US9298283B1 (en) * 2015-09-10 2016-03-29 Connectivity Labs Inc. Sedentary virtual reality method and systems
US20170154468A1 (en) * 2015-12-01 2017-06-01 Le Holdings (Beijing) Co., Ltd. Method and electronic apparatus for constructing virtual reality scene model
CN206096621U (zh) * 2016-07-30 2017-04-12 广州数娱信息科技有限公司 一种增强型虚拟现实感知设备
CN106980368B (zh) * 2017-02-28 2024-05-28 深圳市未来感知科技有限公司 一种基于视觉计算及惯性测量单元的虚拟现实交互设备
CN107357434A (zh) * 2017-07-19 2017-11-17 广州大西洲科技有限公司 一种虚拟现实环境下的信息输入设备、系统及方法
CN111862333B (zh) * 2019-04-28 2024-05-28 广东虚拟现实科技有限公司 基于增强现实的内容处理方法、装置、终端设备及存储介质
CN110442245A (zh) * 2019-07-26 2019-11-12 广东虚拟现实科技有限公司 基于物理键盘的显示方法、装置、终端设备及存储介质


Also Published As

Publication number Publication date
CN114706489B (zh) 2023-04-25
CN114706489A (zh) 2022-07-05
US20230316677A1 (en) 2023-10-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23759315

Country of ref document: EP

Kind code of ref document: A1