WO2019166005A1 - Smart terminal, sensing and control method therefor, and device having storage function - Google Patents

Smart terminal, sensing and control method therefor, and device having storage function

Info

Publication number
WO2019166005A1
WO2019166005A1 (PCT/CN2019/076648)
Authority
WO
WIPO (PCT)
Prior art keywords
smart terminal
virtual reality
user
reality scene
sight
Prior art date
Application number
PCT/CN2019/076648
Other languages
English (en)
French (fr)
Inventor
王凯迪
Original Assignee
惠州Tcl移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 惠州Tcl移动通信有限公司
Priority to US16/976,773 priority Critical patent/US20210041942A1/en
Priority to EP19761491.0A priority patent/EP3761154A4/en
Publication of WO2019166005A1 publication Critical patent/WO2019166005A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/048023D-info-object: information is displayed on the internal or external surface of a three dimensional manipulable object, e.g. on the faces of a cube that can be rotated by the user

Definitions

  • the present application relates to the field of virtual reality, and in particular to a smart terminal, a sensing and control method thereof, and a device having a storage function.
  • Virtual reality (VR) technology is a computer simulation system for creating and experiencing virtual worlds. It uses computer simulation to generate a three-dimensional virtual world and supplies the user with visual, auditory, tactile and other sensory simulations, giving the user the feeling of being physically present. Virtual reality usually requires a head-mounted display; of course, a smart terminal can also provide a virtual reality experience through peripherals such as Google Cardboard and Samsung Gear VR.
  • the embodiments of the present application provide a smart terminal, a sensing and control method thereof, and a device with a storage function. With this sensing and control method, the display content in a virtual reality scene can be operated more intuitively and flexibly, improving the convenience of user operation in the virtual reality scene.
  • the embodiment of the present application provides a virtual reality-based sensing and control method, where the method includes: displaying a virtual reality scene by using a smart terminal worn on a user's head; acquiring pose data of the smart terminal; determining, according to the pose data, an intersection of the user's line of sight and the virtual reality scene; and performing a corresponding operation on the display content at the intersection position in the virtual reality scene.
  • the step of determining, according to the pose data, the intersection of the user's line of sight and the virtual reality scene includes: determining a transformation relationship between the user's line of sight and a preset reference direction of the smart terminal when the smart terminal is worn on the user's head; determining a pose description of the preset reference direction of the smart terminal according to the pose data; and determining a pose description of the user's line of sight according to the pose description of the preset reference direction and the transformation relationship.
  • alternatively, the step includes: mapping the virtual reality scene into a space model established in the world coordinate system; calculating, according to the pose data of the smart terminal, a pose description of the user's line of sight in the world coordinate system; and determining, according to the pose description, the intersection of the user's line of sight and the virtual reality scene in the space model.
  • the step of performing a corresponding operation on the display content at the intersection position in the virtual reality scene includes: determining whether the intersection coincides with a graphic control of the virtual reality scene; if it coincides with the graphic control, further determining whether a trigger signal is received; and if the trigger signal is received, sending a touch operation signal to the graphic control.
  • after the step of further determining whether a trigger signal is received, the method further includes: if the trigger signal is not received, sending a floating (hover) operation signal to the graphic control.
  • the smart terminal is disposed in VR glasses provided with at least one physical button; the physical button rests on a touch screen of the smart terminal, and when the physical button of the VR glasses is pressed, it presses against the touch screen of the smart terminal; the step of determining whether the trigger signal is received further includes: determining whether the touch screen of the smart terminal detects the trigger signal generated by the pressing of the physical button.
  • the pose data includes at least one of, or a combination of, position data and posture data.
  • the step of acquiring the pose data of the smart terminal further includes: adjusting the virtual reality scene according to the pose data of the smart terminal.
  • an embodiment of the present application provides a smart terminal, where the smart terminal includes a processor and a memory coupled to the processor; the memory stores program instructions to be executed by the processor and intermediate data generated when the processor executes the program instructions; when executing the program instructions, the processor implements the following steps: displaying a virtual reality scene by using the smart terminal worn on a user's head; acquiring pose data of the smart terminal; determining, according to the pose data, an intersection of the user's line of sight and the virtual reality scene; and performing a corresponding operation on the display content at the intersection position in the virtual reality scene.
  • the step of determining, according to the pose data, the intersection of the user's line of sight and the virtual reality scene includes: mapping the virtual reality scene into a space model established in the world coordinate system; calculating, according to the pose data of the smart terminal, a pose description of the user's line of sight in the world coordinate system; and determining, according to the pose description, the intersection of the user's line of sight and the virtual reality scene in the space model.
  • the step of performing a corresponding operation on the display content at the intersection position includes: determining whether the intersection coincides with a graphic control of the virtual reality scene; if it coincides with the graphic control, further determining whether a trigger signal is received; if the trigger signal is received, sending a touch operation signal to the graphic control; and if the trigger signal is not received, sending a floating operation signal to the graphic control.
  • the smart terminal is disposed in VR glasses provided with at least one physical button; the physical button rests on the touch screen of the smart terminal, and when the physical button of the VR glasses is pressed, it presses against the touch screen; the step of determining whether the trigger signal is received further includes: determining whether the touch screen detects the trigger signal generated by the pressing of the physical button.
  • an embodiment of the present application provides a device with a storage function, on which program instructions are stored, where the program instructions can be executed to implement the following steps: displaying a virtual reality scene by using a smart terminal worn on a user's head; acquiring pose data of the smart terminal; determining, according to the pose data, an intersection of the user's line of sight and the virtual reality scene; and performing a corresponding operation on the display content at the intersection position in the virtual reality scene.
  • the step of determining the intersection includes: determining a transformation relationship between the user's line of sight and a preset reference direction of the smart terminal when the smart terminal is worn on the user's head; determining a pose description of the preset reference direction according to the pose data; and determining a pose description of the user's line of sight according to the pose description of the preset reference direction and the transformation relationship; or, alternatively, mapping the virtual reality scene into a space model established in the world coordinate system, calculating a pose description of the user's line of sight in the world coordinate system according to the pose data, and determining the intersection in the space model according to the pose description.
  • the step of performing a corresponding operation includes: determining whether the intersection coincides with a graphic control of the virtual reality scene; if it coincides with the graphic control, further determining whether a trigger signal is received; if the trigger signal is received, sending a touch operation signal to the graphic control; and if the trigger signal is not received, sending a floating operation signal to the graphic control.
  • the smart terminal is disposed in VR glasses provided with at least one physical button that rests on the touch screen of the smart terminal; when the physical button is pressed, it presses against the touch screen, and the step of determining whether the trigger signal is received further includes determining whether the touch screen detects the trigger signal generated by the pressing of the physical button.
  • the step of acquiring the pose data of the smart terminal further includes: adjusting the virtual reality scene according to the pose data of the smart terminal.
  • the virtual reality-based sensing and control method of the present application includes: displaying a virtual reality scene by using a smart terminal worn on the user's head; acquiring pose data of the smart terminal; determining an intersection of the user's line of sight and the virtual reality scene according to the pose data; and performing a corresponding operation on the display content at the intersection position in the virtual reality scene.
  • the sensing and control method of the present application determines the direction of the user's line of sight from the pose data of the smart terminal, calculates the intersection position of that line of sight with the virtual reality scene, and performs a corresponding operation on the display content at the intersection position. Combined with the motion characteristics of the human head, the display content in the virtual reality scene can be operated more intuitively and flexibly, improving the convenience of user operation in the virtual reality scene.
  • FIG. 1 is a schematic flowchart of an embodiment of the virtual reality-based sensing and control method provided by the present application;
  • FIG. 2 is a schematic flowchart of a specific implementation of the virtual reality-based sensing and control method provided by the present application;
  • FIG. 3 is a schematic structural diagram of an implementation manner of a smart terminal provided by the present application.
  • FIG. 4 is a schematic structural diagram of an embodiment of a device having a storage function provided by the present application.
  • the present application provides a smart terminal, a sensing and control method thereof, and a device having a storage function. To make the purpose, technical solution, and technical effects of the present application clearer, the application is described in further detail below; it should be understood that the specific embodiments described here are only used to explain the application and are not intended to limit it.
  • FIG. 1 is a schematic flowchart of an embodiment of the virtual reality-based sensing and control method according to the present application.
  • the sensing method includes:
  • Step 101 Display a virtual reality scene by using a smart terminal worn on the user's head.
  • the smart terminal is disposed in a VR glasses, and the user can experience the virtual reality scene after wearing the VR glasses.
  • the smart terminal can be a smart phone.
  • in this embodiment, the virtual reality scene is displayed by the smart terminal worn on the user's head.
  • Step 102 Acquire pose data of the smart terminal.
  • when the user's line of sight changes, the user's head moves accordingly in the direction of the line of sight, which in turn causes the smart terminal worn on the user's head to move synchronously; for example, when the user's head rotates or translates, the smart terminal also rotates or translates synchronously.
  • therefore, the user's line of sight can be determined from the pose data of the smart terminal. The smart terminal acquires its own pose data, where the pose data is at least one of, or a combination of, position data and posture data.
  • specifically, the smart terminal acquires its pose data through a position sensor and/or a motion sensor. The motion sensor includes a gyroscope, an accelerometer and a gravity sensor and is mainly used to monitor the motion of the smart terminal, for example tilting and rocking; the position sensor includes a geomagnetic field sensor and is mainly used to monitor the position of the smart terminal, that is, the position of the smart terminal relative to the world coordinate system.
  • in a specific application scenario, after the user's viewing angle changes, the virtual reality scene displayed by the smart terminal changes accordingly to enhance the user's virtual reality experience. Specifically, the virtual reality scene is adjusted according to the pose data of the smart terminal; for example, when the user's perspective moves to the right, the virtual reality scene correspondingly moves to the left, and when the user's perspective moves to the left, the virtual reality scene correspondingly moves to the right.
  • Step 103 Determine an intersection of the user's line of sight and the virtual reality scene according to the pose data.
  • in one embodiment, the smart terminal determines the transformation relationship between the user's line of sight and the preset reference direction of the smart terminal when the smart terminal is worn on the user's head; determines a pose description of the preset reference direction of the smart terminal according to the pose data of the smart terminal; determines a pose description of the user's line of sight according to the pose description of the preset reference direction and the transformation relationship; and determines the intersection of the user's line of sight and the virtual reality scene according to the pose description of the user's line of sight.
  • the preset reference direction of the smart terminal is a predefined direction and can be chosen according to the actual situation. In one embodiment, the direction of the smart terminal's display screen is taken as the preset reference direction; of course, the direction perpendicular to the display screen may also be used.
  • after the preset reference direction is determined, the smart terminal determines the pose description of the preset reference direction according to the pose data of the smart terminal, where the pose description of the preset reference direction represents the amount of translation or rotation of that direction. For example, when the preset reference direction is the direction of the smart terminal's display screen, the pose description represents the translation or rotation of the display screen's direction. Specifically, the smart terminal integrates the readings of the acceleration sensor or angular velocity sensor over time to obtain the pose description of the preset reference direction; from this pose description and the transformation relationship between the preset reference direction and the user's line of sight, the smart terminal can determine the pose description of the user's line of sight, that is, the direction of the user's line of sight.
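The time integration mentioned above can be sketched as follows. This is only an illustration of the idea (theta accumulates omega * dt per axis), assuming raw gyroscope samples with nanosecond timestamps; a production implementation would also handle drift and sensor fusion, which the patent does not detail.

```java
/** Illustrative only: accumulate gyroscope (angular velocity) samples over time
 *  to estimate how far the preset reference direction has rotated. */
public final class GyroIntegrator {
    private long lastTimestampNs;                 // timestamp of the previous sample
    public final float[] angleRad = new float[3]; // accumulated rotation about x, y, z

    public void onGyroSample(float[] omegaRadPerSec, long timestampNs) {
        if (lastTimestampNs != 0) {
            float dt = (timestampNs - lastTimestampNs) * 1e-9f; // ns -> s
            for (int i = 0; i < 3; i++) {
                angleRad[i] += omegaRadPerSec[i] * dt;          // theta += omega * dt
            }
        }
        lastTimestampNs = timestampNs;
    }
}
```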
  • in another embodiment, the smart terminal maps the virtual reality scene into a space model established in the world coordinate system, calculates the pose description of the user's line of sight in the world coordinate system according to the pose data of the smart terminal, and determines the intersection of the user's line of sight and the virtual reality scene in the space model based on that pose description.
  • specifically, the smart terminal initializes its sensors. After receiving a signal that the display content is to be refreshed, the smart terminal starts drawing the display interface, reads the initial sensor data, and maps the virtual reality scene into the space model established in the world coordinate system. It then adjusts the display content of the virtual reality scene based on the sensor data and renders the content in 3D.
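The patent gives no formula for the intersection test itself; a plausible minimal sketch is a ray-plane test, treating the line of sight as a ray from the viewer and each piece of display content as lying in a plane of the space model. All names here are illustrative.

```java
/** Illustrative gaze/scene intersection: origin + t * dir against the plane n·x = d. */
public final class RayIntersector {
    /** @return t >= 0 such that origin + t*dir lies on the plane, or -1 if there
     *          is no intersection in front of the viewer. */
    public static float rayPlane(float[] origin, float[] dir, float[] n, float d) {
        float denom = n[0] * dir[0] + n[1] * dir[1] + n[2] * dir[2];
        if (Math.abs(denom) < 1e-6f) return -1f;  // line of sight parallel to the plane
        float t = (d - (n[0] * origin[0] + n[1] * origin[1] + n[2] * origin[2])) / denom;
        return t >= 0f ? t : -1f;                 // reject hits behind the viewer
    }
}
```

The resulting point origin + t*dir can then be compared against the bounds of each graphic control to decide whether the intersection coincides with it.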
  • in this embodiment, the smart terminal calculates the pose description of the user's line of sight in the world coordinate system from the rotation matrix and the pose data of the smart terminal. The pose description of the user's line of sight in the world coordinate system reflects the direction of the line of sight relative to the earth, i.e., the real environment. In one embodiment, the smart terminal runs the Android system, and the pose description of the user's line of sight in the world coordinate system can be determined with the SensorManager.getRotationMatrix function.
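SensorManager.getRotationMatrix is a real Android API; the sketch below shows one way the gaze direction in world coordinates could be derived from it, assuming the line of sight runs along the device's -z axis (out of the back of the handset). The class and method names are illustrative, not from the patent.

```java
import android.hardware.SensorManager;

public final class GazeEstimator {
    private final float[] rotation = new float[9];    // row-major device-to-world matrix
    private final float[] inclination = new float[9];

    /** @param gravity     latest Sensor.TYPE_ACCELEROMETER values
     *  @param geomagnetic latest Sensor.TYPE_MAGNETIC_FIELD values
     *  @return gaze direction in world coordinates, or null if the matrix
     *          could not be computed (e.g. device in free fall). */
    public float[] gazeInWorld(float[] gravity, float[] geomagnetic) {
        if (!SensorManager.getRotationMatrix(rotation, inclination, gravity, geomagnetic)) {
            return null;
        }
        // world gaze = R * (0, 0, -1): the third column of R, negated.
        return new float[] {-rotation[2], -rotation[5], -rotation[8]};
    }
}
```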
  • Step 104 Perform a corresponding operation on the display content at the intersection position in the virtual reality scene.
  • the smart terminal performs a corresponding operation on the display content at the intersection position in the virtual reality scene.
  • specifically, the smart terminal determines, according to the pose description of the user's line of sight, whether there is an intersection between the user's line of sight and the content displayed in the virtual reality scene, and determines whether the intersection coincides with a graphic control of the virtual reality scene. If it coincides with the graphic control, the terminal further determines whether a trigger signal is received; if a trigger signal is received, a touch operation signal is sent to the graphic control, and if no trigger signal is received, a floating operation signal is sent to the graphic control.
  • the graphical control can be an icon corresponding to an application.
  • in a specific application scenario, the smart terminal is disposed in VR glasses provided with at least one physical button that rests on the touch screen of the smart terminal. When the physical button of the VR glasses is pressed, it presses against the touch screen; the smart terminal then determines whether the touch screen has detected the trigger signal generated by the pressing of the physical button. If the trigger signal is received, a touch operation signal is sent to the graphic control; otherwise, a floating operation signal is sent to the graphic control.
  • the touch and floating (hover) operations are briefly explained below. The user mainly operates applications by touching the screen of the smart terminal, and the smart terminal has a complete mechanism to ensure that operation events are delivered to the corresponding components. Each component can obtain the operation events on the screen by registering a callback function and then perform the corresponding action.
  • in this embodiment, the graphic control to be operated is selected via the intersection of the user's line of sight with the virtual reality scene, and the corresponding operation is determined according to the state of the physical button of the VR glasses.
  • for example, when the smart terminal runs the Android system, an operation event is encapsulated in a MotionEvent object, which describes the action code of the screen operation and a series of coordinate values. The action code indicates the state change caused by a position on the display screen being pressed or released, and the coordinate values describe the position change and other motion information.
  • in this embodiment, if the physical button of the VR glasses is pressed, this represents the corresponding position on the display screen being pressed or released, and the smart terminal executes a touch event: it determines the coordinate values of the graphic control coinciding with the intersection of the user's line of sight and the virtual reality scene and performs a touch operation on that control, such as opening or closing it. If the physical button is not pressed, no position on the display screen is pressed or released, and the smart terminal performs a floating operation instead: it determines the coordinate values of the coinciding graphic control and displays the control in the Hover state.
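As a hedged sketch of how the touch operation signal could be delivered, the snippet below synthesizes a down/up MotionEvent pair at the screen coordinates of the gazed-at control. MotionEvent.obtain, View#dispatchTouchEvent and SystemClock are real Android APIs; the wrapper class, and the assumption that the caller already knows the control's (x, y) position, are illustrative.

```java
import android.os.SystemClock;
import android.view.MotionEvent;
import android.view.View;

public final class GazeDispatcher {
    /** Synthesize a tap at (x, y): press, then release 50 ms later. */
    public static void sendTap(View target, float x, float y) {
        long now = SystemClock.uptimeMillis();
        MotionEvent down = MotionEvent.obtain(now, now, MotionEvent.ACTION_DOWN, x, y, 0);
        MotionEvent up = MotionEvent.obtain(now, now + 50, MotionEvent.ACTION_UP, x, y, 0);
        target.dispatchTouchEvent(down);  // action code: position pressed
        target.dispatchTouchEvent(up);    // action code: position released -> click fires
        down.recycle();
        up.recycle();
    }
}
```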
  • in order to illustrate the above sensing and control method more intuitively, please refer to FIG. 2, which is a schematic flowchart of a specific implementation of the virtual reality-based sensing and control method of the present application.
  • Step 201 Display a virtual reality scene by using a smart terminal worn on the user's head.
  • this step is the same as step 101 in FIG. 1; refer to step 101 and the related description, which are not repeated here.
  • Step 202 Acquire pose data of the smart terminal.
  • this step is the same as step 102 in FIG. 1; refer to step 102 and the related description, which are not repeated here.
  • Step 203 Determine the intersection of the user's line of sight and the virtual reality scene according to the pose data, and determine whether the intersection coincides with a graphic control.
  • in one embodiment, the smart terminal determines the transformation relationship between the user's line of sight and the preset reference direction of the smart terminal when worn on the user's head, determines the pose description of the preset reference direction according to the pose data, determines the pose description of the user's line of sight from that pose description and the transformation relationship, and thereby determines the intersection of the user's line of sight and the virtual reality scene.
  • in another embodiment, the smart terminal maps the virtual reality scene into a space model established in the world coordinate system, calculates the pose description of the user's line of sight in the world coordinate system according to the pose data, and determines the intersection of the user's line of sight and the virtual reality scene in the space model based on that pose description.
  • when there is no intersection between the user's line of sight and the virtual reality scene, step 202 is performed. When there is an intersection, it is determined whether the intersection coincides with a graphic control of the virtual reality scene: if the intersection does not coincide with any graphic control, step 202 is performed; if it coincides with a graphic control, step 204 is performed.
  • Step 204 Determine whether the touch screen of the smart terminal detects a trigger signal generated by the pressing of the physical button.
  • in this embodiment, the smart terminal is disposed in VR glasses provided with at least one physical button that rests on the touch screen of the smart terminal. When the physical button of the VR glasses is pressed, it presses against the touch screen; the smart terminal determines whether the touch screen has detected the trigger signal generated by the pressing of the physical button. If the trigger signal is received, step 206 is performed; if not, step 205 is performed.
  • Step 205 Send a floating operation signal to the graphic control.
  • Step 206 Send a touch operation signal to the graphic control.
  • steps 203 to 206 correspond to steps 103 and 104 in FIG. 1; refer to those steps and the related descriptions, which are not repeated here.
  • different from the prior art, the virtual reality-based sensing and control method of the present application includes: displaying a virtual reality scene by using a smart terminal worn on the user's head; acquiring pose data of the smart terminal; determining the intersection of the user's line of sight and the virtual reality scene according to the pose data; and performing a corresponding operation on the display content at the intersection position in the virtual reality scene.
  • the method determines the direction of the user's line of sight from the pose data of the smart terminal, calculates the intersection position of that line of sight with the virtual reality scene, and performs a corresponding operation on the display content at the intersection position. Combined with the motion characteristics of the human head, the display content in the virtual reality scene can be operated more intuitively and flexibly, improving the convenience of user operation in the virtual reality scene.
  • FIG. 3 is a schematic structural diagram of an implementation manner of a smart terminal according to the present application.
  • the smart terminal 30 includes a processor 31 and a memory 32 coupled to the processor 31.
  • the smart terminal is a smart phone.
  • the memory 32 stores program instructions executed by the processor 31 and intermediate data generated when the processor 31 executes the program instructions.
  • the processor 31 executes the program instructions to implement the virtual reality based sensing method of the present application.
  • the sensing method includes:
  • the smart terminal 30 is disposed in a VR glasses, and the user can experience the virtual reality scene after wearing the VR glasses.
  • the smart terminal 30 can be a smart phone.
  • the processor 31 displays a virtual reality scene using the smart terminal 30 worn on the user's head.
  • when the user's line of sight changes, the user's head moves accordingly, which in turn causes the smart terminal 30 worn on the user's head to move synchronously; for example, when the user's head rotates or translates, the smart terminal 30 also rotates or translates synchronously.
  • the processor 31 acquires pose data of the smart terminal 30.
  • the pose data is at least one or a combination of position data and posture data.
  • the processor 31 acquires pose data of the smart terminal 30 through a position sensor and/or a motion sensor.
  • the motion sensor includes a gyroscope, an accelerometer, and a gravity sensor and is mainly used to monitor the motion of the smart terminal 30, for example tilting and rocking; the position sensor includes a geomagnetic field sensor and is mainly used to monitor the position of the smart terminal 30, that is, the position of the smart terminal 30 relative to the world coordinate system.
  • in a specific application scenario, after the user's viewing angle changes, the virtual reality scene displayed by the processor 31 changes accordingly to enhance the user's virtual reality experience. Specifically, the processor 31 adjusts the virtual reality scene according to the pose data of the smart terminal 30; for example, when the user's perspective moves to the right, the virtual reality scene correspondingly moves to the left, and when the user's perspective moves to the left, the virtual reality scene correspondingly moves to the right.
  • in one embodiment, the processor 31 determines the transformation relationship between the user's line of sight and the preset reference direction of the smart terminal 30 when the smart terminal 30 is worn on the user's head, determines the pose description of the preset reference direction of the smart terminal 30 according to the pose data of the smart terminal 30, determines the pose description of the user's line of sight according to that pose description and the transformation relationship, and determines the intersection of the user's line of sight and the virtual reality scene according to the pose description of the user's line of sight.
  • the preset reference direction of the smart terminal 30 is a predefined direction and can be chosen according to the actual situation; in one embodiment, the direction of the display screen of the smart terminal 30 is taken as the preset reference direction, though the direction perpendicular to the display screen may also be used.
  • after the preset reference direction is determined, the processor 31 determines the pose description of the preset reference direction of the smart terminal 30 according to the pose data of the smart terminal 30, where the pose description represents the amount of translation or rotation of the preset reference direction. For example, when the preset reference direction is the direction of the display screen of the smart terminal 30, the pose description represents the translation or rotation of that direction. Specifically, the processor 31 integrates the readings of the acceleration sensor or angular velocity sensor over time to obtain the pose description of the preset reference direction of the smart terminal 30.
  • from the pose description of the preset reference direction and the transformation relationship between the preset reference direction of the smart terminal 30 and the user's line of sight, the processor 31 can determine the pose description of the user's line of sight, that is, the direction of the user's line of sight.
  • in another embodiment, the processor 31 maps the virtual reality scene into a space model established in the world coordinate system, calculates the pose description of the user's line of sight in the world coordinate system according to the pose data of the smart terminal 30, and determines the intersection of the user's line of sight and the virtual reality scene in the space model based on that pose description.
  • specifically, the processor 31 initializes the sensors of the smart terminal 30. After receiving a signal that the display content is to be refreshed, the processor 31 starts drawing the display interface, reads the initial sensor data, and maps the virtual reality scene into the space model established in the world coordinate system. It then adjusts the display content of the virtual reality scene based on the sensor data and renders the content in 3D.
  • the processor 31 calculates a pose description of the user's line of sight in the world coordinate system based on the rotation matrix and the pose data of the smart terminal 30.
  • the pose description of the user's line of sight in the world coordinate system reflects the direction of the user's line of sight in the perspective of the earth or the real environment.
  • in one embodiment, the smart terminal 30 runs the Android system, and the processor 31 can determine the pose description of the user's line of sight in the world coordinate system with the SensorManager.getRotationMatrix function.
  • the processor 31 performs a corresponding operation on the display content at the intersection position in the virtual reality scene.
  • specifically, the processor 31 determines, according to the pose description of the user's line of sight, whether there is an intersection between the user's line of sight and the content displayed in the virtual reality scene, and determines whether the intersection coincides with a graphic control of the virtual reality scene. If it coincides with the graphic control, it further determines whether a trigger signal is received; if a trigger signal is received, a touch operation signal is sent to the graphic control, and if no trigger signal is received, a floating operation signal is sent to the graphic control.
  • the graphical control can be an icon corresponding to an application.
  • in a specific application scenario, the smart terminal 30 is disposed in VR glasses provided with at least one physical button that rests on the touch screen of the smart terminal 30. When the physical button of the VR glasses is pressed, it presses against the touch screen; the processor 31 determines whether the touch screen of the smart terminal 30 has detected the trigger signal generated by the pressing of the physical button. If the trigger signal is received, a touch operation signal is sent to the graphic control; otherwise, a floating operation signal is sent.
  • different from the prior art, the virtual reality-based sensing and control method of the present application includes: displaying a virtual reality scene by using a smart terminal worn on the user's head; acquiring pose data of the smart terminal; determining the intersection of the user's line of sight and the virtual reality scene according to the pose data; and performing a corresponding operation on the display content at the intersection position in the virtual reality scene.
  • the method determines the direction of the user's line of sight from the pose data of the smart terminal, calculates the intersection position of that line of sight with the virtual reality scene, and performs a corresponding operation on the display content at the intersection position, so that the display content in the virtual reality scene can be operated more intuitively and flexibly, improving the convenience of user operation in the virtual reality scene.
  • FIG. 4 is a schematic structural diagram of an embodiment of a device having a storage function according to the present application.
  • the device 40 having a storage function stores program instructions 41 that can be executed to implement the virtual reality-based sensing and control method of the present application.
  • different from the prior art, the virtual reality-based sensing and control method of the present application includes: displaying a virtual reality scene by using a smart terminal worn on the user's head; acquiring pose data of the smart terminal; determining the intersection of the user's line of sight and the virtual reality scene according to the pose data; and performing a corresponding operation on the display content at the intersection position in the virtual reality scene.
  • the method determines the direction of the user's line of sight from the pose data of the smart terminal, calculates the intersection position of that line of sight with the virtual reality scene, and performs a corresponding operation on the display content at the intersection position, so that the display content in the virtual reality scene can be operated more intuitively and flexibly, improving the convenience of user operation in the virtual reality scene.
  • the disclosed methods and apparatus may be implemented in other manners.
  • the device embodiments described above are merely illustrative: the division into modules or units is only a logical functional division, and in actual implementation there may be other divisions; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage device. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage device, which includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods in the embodiments of the present application. The aforementioned storage device includes various devices capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.

Abstract

The present application discloses a smart terminal, a sensing and control method thereof, and a device with a storage function. The sensing and control method includes: displaying a virtual reality scene by using a smart terminal worn on a user's head; acquiring pose data of the smart terminal; determining, according to the pose data, an intersection of the user's line of sight and the virtual reality scene; and performing a corresponding operation on the display content at the intersection position in the virtual reality scene.

Description

Smart terminal, sensing and control method therefor, and device having storage function
This application claims priority to Chinese Patent Application No. 201810170954.5, filed with the Chinese Patent Office on March 1, 2018 and entitled "Smart terminal, sensing and control method therefor, and device having storage function", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of virtual reality, and in particular to a smart terminal, a sensing and control method therefor, and a device having a storage function.
Background
Virtual reality (VR) technology is a computer simulation system for creating and experiencing virtual worlds. It uses computer simulation to generate a three-dimensional virtual world and supplies the user with visual, auditory, tactile and other sensory simulations, giving the user the feeling of being physically present. Virtual reality usually requires a head-mounted display; of course, a smart terminal can also provide a virtual reality experience through peripherals such as Google Cardboard and Samsung Gear VR.
A dedicated VR application usually has to be installed, and a corresponding Bluetooth peripheral is needed to control the device. Because the applications on a smart terminal, such as the player or the gallery, are designed for touch-screen operation, controlling them with a Bluetooth peripheral sometimes requires moving left or right many times before a given application icon can be selected; moreover, some interfaces provide no visual state for an icon that has received focus, so the user cannot tell where the focus currently is and is unable to operate.
Technical Problem
The embodiments of the present application provide a smart terminal, a sensing and control method therefor, and a device having a storage function. With this sensing and control method, the display content in a virtual reality scene can be operated more intuitively and flexibly, improving the convenience of user operation in the virtual reality scene.
Technical Solution
To solve the above technical problem, the present application adopts the following technical solutions:
In a first aspect, an embodiment of the present application provides a virtual reality-based sensing and control method, the method comprising: displaying a virtual reality scene by using a smart terminal worn on a user's head; acquiring pose data of the smart terminal; determining, according to the pose data, an intersection of the user's line of sight and the virtual reality scene; and performing a corresponding operation on the display content at the position of the intersection in the virtual reality scene.
The step of determining, according to the pose data, the intersection of the user's line of sight and the virtual reality scene comprises:
determining a transformation relationship between the user's line of sight and a preset reference direction of the smart terminal when the smart terminal is worn on the user's head;
determining a pose description of the preset reference direction of the smart terminal according to the pose data;
determining a pose description of the user's line of sight according to the pose description of the preset reference direction and the transformation relationship.
Alternatively, the step of determining, according to the pose data, the intersection of the user's line of sight and the virtual reality scene comprises:
mapping the virtual reality scene into a space model, wherein the space model is established in a world coordinate system;
calculating, according to the pose data of the smart terminal, a pose description of the user's line of sight in the world coordinate system;
determining, according to the pose description, the intersection of the user's line of sight and the virtual reality scene in the space model.
The step of performing a corresponding operation on the display content at the position of the intersection in the virtual reality scene comprises:
determining whether the intersection coincides with a graphic control of the virtual reality scene;
if it coincides with the graphic control, further determining whether a trigger signal is received;
if the trigger signal is received, sending a touch operation signal to the graphic control.
After the step of further determining whether a trigger signal is received, the method further comprises:
if the trigger signal is not received, sending a floating operation signal to the graphic control.
The smart terminal is disposed in VR glasses, the VR glasses are provided with at least one physical button, the physical button is disposed on a touch screen of the smart terminal, and when the physical button of the VR glasses is pressed, the physical button presses against the touch screen of the smart terminal;
the step of determining whether a trigger signal is received further comprises:
determining whether the touch screen of the smart terminal detects the trigger signal generated by the pressing of the physical button.
The pose data comprises at least one of, or a combination of, position data and posture data.
The step of acquiring the pose data of the smart terminal further comprises:
adjusting the virtual reality scene according to the pose data of the smart terminal.
In a second aspect, an embodiment of the present application provides a smart terminal, the smart terminal comprising a processor and a memory coupled to the processor; the memory stores program instructions to be executed by the processor and intermediate data generated when the processor executes the program instructions; when executing the program instructions, the processor can implement the following steps:
displaying a virtual reality scene by using the smart terminal worn on a user's head;
acquiring pose data of the smart terminal;
determining, according to the pose data, an intersection of the user's line of sight and the virtual reality scene;
performing a corresponding operation on the display content at the position of the intersection in the virtual reality scene.
The step of determining, according to the pose data, the intersection of the user's line of sight and the virtual reality scene comprises:
mapping the virtual reality scene into a space model, wherein the space model is established in a world coordinate system;
calculating, according to the pose data of the smart terminal, a pose description of the user's line of sight in the world coordinate system;
determining, according to the pose description, the intersection of the user's line of sight and the virtual reality scene in the space model.
The step of performing a corresponding operation on the display content at the position of the intersection in the virtual reality scene comprises:
determining whether the intersection coincides with a graphic control of the virtual reality scene;
if it coincides with the graphic control, further determining whether a trigger signal is received;
if the trigger signal is received, sending a touch operation signal to the graphic control.
After the step of further determining whether a trigger signal is received, the following is further included:
if the trigger signal is not received, sending a floating operation signal to the graphic control.
The smart terminal is disposed in VR glasses, the VR glasses are provided with at least one physical button, the physical button is disposed on a touch screen of the smart terminal, and when the physical button of the VR glasses is pressed, the physical button presses against the touch screen of the smart terminal;
the step of determining whether a trigger signal is received further comprises:
determining whether the touch screen of the smart terminal detects the trigger signal generated by the pressing of the physical button.
In a third aspect, an embodiment of the present application provides a device having a storage function, on which program instructions are stored, the program instructions being executable to implement the following steps:
displaying a virtual reality scene by using a smart terminal worn on a user's head;
acquiring pose data of the smart terminal;
determining, according to the pose data, an intersection of the user's line of sight and the virtual reality scene;
performing a corresponding operation on the display content at the position of the intersection in the virtual reality scene.
The step of determining, according to the pose data, the intersection of the user's line of sight and the virtual reality scene comprises:
determining a transformation relationship between the user's line of sight and a preset reference direction of the smart terminal when the smart terminal is worn on the user's head;
determining a pose description of the preset reference direction of the smart terminal according to the pose data;
determining a pose description of the user's line of sight according to the pose description of the preset reference direction and the transformation relationship.
Alternatively, the step of determining, according to the pose data, the intersection of the user's line of sight and the virtual reality scene comprises:
mapping the virtual reality scene into a space model, wherein the space model is established in a world coordinate system;
calculating, according to the pose data of the smart terminal, a pose description of the user's line of sight in the world coordinate system;
determining, according to the pose description, the intersection of the user's line of sight and the virtual reality scene in the space model.
The step of performing a corresponding operation on the display content at the position of the intersection in the virtual reality scene comprises:
determining whether the intersection coincides with a graphic control of the virtual reality scene;
if it coincides with the graphic control, further determining whether a trigger signal is received;
if the trigger signal is received, sending a touch operation signal to the graphic control.
After the step of further determining whether a trigger signal is received, the following is further included:
if the trigger signal is not received, sending a floating operation signal to the graphic control.
The smart terminal is disposed in VR glasses, the VR glasses are provided with at least one physical button, the physical button is disposed on a touch screen of the smart terminal, and when the physical button of the VR glasses is pressed, the physical button presses against the touch screen of the smart terminal;
the step of determining whether a trigger signal is received further comprises:
determining whether the touch screen of the smart terminal detects the trigger signal generated by the pressing of the physical button.
The step of acquiring the pose data of the smart terminal further comprises:
adjusting the virtual reality scene according to the pose data of the smart terminal.
Beneficial Effects
The virtual reality-based sensing and control method of the present application includes: displaying a virtual reality scene by using a smart terminal worn on the user's head; acquiring pose data of the smart terminal; determining the intersection of the user's line of sight and the virtual reality scene according to the pose data; and performing a corresponding operation on the display content at the intersection position in the virtual reality scene. The method determines the direction of the user's line of sight from the pose data of the smart terminal, calculates the intersection position of that line of sight with the virtual reality scene, and performs a corresponding operation on the display content at the intersection position. Combined with the motion characteristics of the human head, the display content in the virtual reality scene can be operated more intuitively and flexibly, improving the convenience of user operation in the virtual reality scene.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an embodiment of the virtual reality-based sensing and control method provided by the present application;
FIG. 2 is a schematic flowchart of a specific implementation of the virtual reality-based sensing and control method provided by the present application;
FIG. 3 is a schematic structural diagram of an embodiment of the smart terminal provided by the present application;
FIG. 4 is a schematic structural diagram of an embodiment of the device having a storage function provided by the present application.
Embodiments of the Invention
The present application provides a smart terminal, a sensing and control method therefor, and a device having a storage function. To make the purpose, technical solution, and technical effects of the present application clearer, the application is described in further detail below; it should be understood that the specific embodiments described here are only used to explain the application and are not intended to limit it.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an embodiment of the virtual reality-based sensing and control method of the present application. In this embodiment, the sensing and control method includes:
Step 101: Display a virtual reality scene by using a smart terminal worn on the user's head.
In a specific application scenario, the smart terminal is disposed in VR glasses, and the user can experience the virtual reality scene after putting on the VR glasses.
The smart terminal may be a smartphone.
In this embodiment, the virtual reality scene is displayed by the smart terminal worn on the user's head.
Step 102: Acquire pose data of the smart terminal.
In this embodiment, when the user's line of sight changes, the user's head moves accordingly in the direction of the line of sight, which in turn causes the smart terminal worn on the user's head to move synchronously; for example, when the user's head rotates or translates, the smart terminal also rotates or translates synchronously.
Therefore, the user's line of sight can be determined from the pose data of the smart terminal. The smart terminal acquires its own pose data, where the pose data is at least one of, or a combination of, position data and posture data.
Specifically, the smart terminal acquires its pose data through a position sensor and/or a motion sensor. The motion sensor includes a gyroscope, an accelerometer and a gravity sensor and is mainly used to monitor the motion of the smart terminal, for example tilting and rocking; the position sensor includes a geomagnetic field sensor and is mainly used to monitor the position of the smart terminal, that is, the position of the smart terminal relative to the world coordinate system.
In a specific application scenario, after the user's viewing angle changes, the virtual reality scene displayed by the smart terminal also changes accordingly to enhance the user's virtual reality experience. Specifically, the virtual reality scene is adjusted according to the pose data of the smart terminal; for example, when the user's perspective moves to the right, the virtual reality scene correspondingly moves to the left, and when the user's perspective moves to the left, the virtual reality scene correspondingly moves to the right.
Step 103: Determine the intersection of the user's line of sight and the virtual reality scene according to the pose data.
In one embodiment, the smart terminal determines the transformation relationship between the user's line of sight and the preset reference direction of the smart terminal when the smart terminal is worn on the user's head, determines a pose description of the preset reference direction of the smart terminal according to the pose data of the smart terminal, determines a pose description of the user's line of sight according to the pose description of the preset reference direction and the transformation relationship, and determines the intersection of the user's line of sight and the virtual reality scene according to the pose description of the user's line of sight.
The preset reference direction of the smart terminal is a predefined direction and can be chosen according to the actual situation. In one embodiment, the direction of the smart terminal's display screen is taken as the preset reference direction; of course, the direction perpendicular to the display screen may also be used.
After the preset reference direction is determined, the smart terminal determines the pose description of the preset reference direction according to the pose data of the smart terminal, where the pose description of the preset reference direction represents the amount of translation or rotation of the preset reference direction. For example, when the preset reference direction is the direction of the smart terminal's display screen, the pose description represents the translation or rotation of that direction. Specifically, the smart terminal integrates the readings of the acceleration sensor or angular velocity sensor over time to obtain the pose description of the preset reference direction.
From the pose description of the preset reference direction and the transformation relationship between the preset reference direction and the user's line of sight, the smart terminal can then determine the pose description of the user's line of sight, that is, the direction of the user's line of sight.
In another embodiment, the smart terminal maps the virtual reality scene into a space model established in the world coordinate system, calculates the pose description of the user's line of sight in the world coordinate system according to the pose data of the smart terminal, and determines the intersection of the user's line of sight and the virtual reality scene in the space model based on that pose description.
Specifically, the smart terminal initializes its sensors. After receiving a signal that the display content is to be refreshed, the smart terminal starts drawing the display interface, reads the initial sensor data, and maps the virtual reality scene into the space model established in the world coordinate system. It then adjusts the display content of the virtual reality scene based on the sensor data and renders the content in 3D.
In this embodiment, the smart terminal calculates the pose description of the user's line of sight in the world coordinate system from the rotation matrix and the pose data of the smart terminal. The pose description of the user's line of sight in the world coordinate system reflects the direction of the line of sight relative to the earth, i.e., the real environment. In one embodiment, the smart terminal runs the Android system, and the pose description of the user's line of sight in the world coordinate system can be determined with the SensorManager.getRotationMatrix function.
Step 104: Perform a corresponding operation on the display content at the intersection position in the virtual reality scene.
In this embodiment, the smart terminal performs a corresponding operation on the display content at the intersection position in the virtual reality scene.
Specifically, the smart terminal determines, according to the pose description of the user's line of sight, whether there is an intersection between the user's line of sight and the content displayed in the virtual reality scene, and determines whether the intersection coincides with a graphic control of the virtual reality scene. If it coincides with the graphic control, the terminal further determines whether a trigger signal is received; if a trigger signal is received, a touch operation signal is sent to the graphic control, and if no trigger signal is received, a floating operation signal is sent to the graphic control.
The graphic control may be an icon corresponding to an application.
In a specific application scenario, the smart terminal is disposed in VR glasses provided with at least one physical button that rests on the touch screen of the smart terminal. When the physical button of the VR glasses is pressed, it presses against the touch screen; the smart terminal determines whether the touch screen has detected the trigger signal generated by the pressing of the physical button. If the trigger signal is received, a touch operation signal is sent to the graphic control; otherwise, a floating operation signal is sent to the graphic control.
The aforementioned touch operation and floating operation are briefly explained below.
The user mainly operates applications by touching the screen of the smart terminal, and the smart terminal has a complete mechanism to ensure that operation events are delivered to the corresponding components. Each component can obtain the operation events on the screen by registering a callback function and then perform the corresponding action. In this embodiment, the graphic control to be operated is selected via the intersection of the user's line of sight with the virtual reality scene, and the corresponding operation is determined according to the state of the physical button of the VR glasses.
For example, when the smart terminal runs the Android system, an operation event is encapsulated in a MotionEvent object, which describes the action code of the screen operation and a series of coordinate values. The action code indicates the state change caused by a position on the display screen being pressed or released, and the coordinate values describe the position change and other motion information.
In this embodiment, if the physical button of the VR glasses is pressed, this represents the corresponding position on the display screen being pressed or released, and the smart terminal executes a touch event: it determines the coordinate values of the graphic control coinciding with the intersection of the user's line of sight and the virtual reality scene and performs a touch operation on that control, such as opening or closing it.
If the physical button of the VR glasses is not pressed, no position on the display screen is pressed or released, and the smart terminal performs a floating operation instead: it determines the coordinate values of the coinciding graphic control and displays the control in the Hover (floating) state.
To illustrate the above sensing and control method more intuitively, please refer to FIG. 2, which is a schematic flowchart of a specific implementation of the virtual reality-based sensing and control method of the present application.
Step 201: Display a virtual reality scene by using a smart terminal worn on the user's head.
This step is the same as step 101 in FIG. 1; for details, refer to step 101 and the related description, which are not repeated here.
Step 202: Acquire pose data of the smart terminal.
This step is the same as step 102 in FIG. 1; for details, refer to step 102 and the related description, which are not repeated here.
Step 203: Determine the intersection of the user's line of sight and the virtual reality scene according to the pose data, and determine whether the intersection coincides with a graphic control.
In one embodiment, the smart terminal determines the transformation relationship between the user's line of sight and the preset reference direction of the smart terminal when the smart terminal is worn on the user's head, determines the pose description of the preset reference direction according to the pose data, determines the pose description of the user's line of sight from that pose description and the transformation relationship, and determines the intersection of the user's line of sight and the virtual reality scene according to the pose description of the user's line of sight.
In another embodiment, the smart terminal maps the virtual reality scene into a space model established in the world coordinate system, calculates the pose description of the user's line of sight in the world coordinate system according to the pose data, and determines the intersection of the user's line of sight and the virtual reality scene in the space model based on that pose description.
When there is no intersection between the user's line of sight and the virtual reality scene, step 202 is performed.
When there is an intersection between the user's line of sight and the virtual reality scene, it is determined whether the intersection coincides with a graphic control of the virtual reality scene. If the intersection does not coincide with any graphic control of the virtual reality scene, step 202 is performed; if it coincides with a graphic control of the virtual reality scene, step 204 is performed.
Step 204: Determine whether the touch screen of the smart terminal detects the trigger signal generated by the pressing of the physical button.
In this embodiment, the smart terminal is disposed in VR glasses provided with at least one physical button that rests on the touch screen of the smart terminal. When the physical button of the VR glasses is pressed, it presses against the touch screen; the smart terminal determines whether the touch screen has detected the trigger signal generated by the pressing of the physical button. If the trigger signal is received, step 206 is performed; if not, step 205 is performed.
Step 205: Send a floating operation signal to the graphic control.
Step 206: Send a touch operation signal to the graphic control.
Steps 203 to 206 correspond to steps 103 and 104 in FIG. 1; for details, refer to steps 103 and 104 and the related descriptions, which are not repeated here.
Different from the prior art, the virtual reality-based sensing and control method of the present application includes: displaying a virtual reality scene by using a smart terminal worn on the user's head; acquiring pose data of the smart terminal; determining the intersection of the user's line of sight and the virtual reality scene according to the pose data; and performing a corresponding operation on the display content at the intersection position in the virtual reality scene. The method determines the direction of the user's line of sight from the pose data of the smart terminal, calculates the intersection position of that line of sight with the virtual reality scene, and performs a corresponding operation on the display content at the intersection position. Combined with the motion characteristics of the human head, the display content in the virtual reality scene can be operated more intuitively and flexibly, improving the convenience of user operation in the virtual reality scene.
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of an embodiment of the smart terminal of the present application. In this embodiment, the smart terminal 30 includes a processor 31 and a memory 32 coupled to the processor 31.
The smart terminal is a smartphone.
The memory 32 stores program instructions to be executed by the processor 31 and intermediate data generated when the processor 31 executes the program instructions. By executing the program instructions, the processor 31 can implement the virtual reality-based sensing and control method of the present application.
The sensing and control method includes:
In a specific application scenario, the smart terminal 30 is disposed in VR glasses, and the user can experience the virtual reality scene after putting on the VR glasses.
The smart terminal 30 may be a smartphone.
In this embodiment, the processor 31 displays the virtual reality scene by using the smart terminal 30 worn on the user's head.
In this embodiment, when the user's line of sight changes, the user's head moves accordingly in the direction of the line of sight, which in turn causes the smart terminal 30 worn on the user's head to move synchronously; for example, when the user's head rotates or translates, the smart terminal 30 also rotates or translates synchronously.
Therefore, the user's line of sight can be determined from the pose data of the smart terminal 30. The processor 31 acquires the pose data of the smart terminal 30, where the pose data is at least one of, or a combination of, position data and posture data.
Specifically, the processor 31 acquires the pose data of the smart terminal 30 through a position sensor and/or a motion sensor. The motion sensor includes a gyroscope, an accelerometer and a gravity sensor and is mainly used to monitor the motion of the smart terminal 30, for example tilting and rocking; the position sensor includes a geomagnetic field sensor and is mainly used to monitor the position of the smart terminal 30, that is, the position of the smart terminal 30 relative to the world coordinate system.
In a specific application scenario, after the user's viewing angle changes, the virtual reality scene displayed by the processor 31 also changes accordingly to enhance the user's virtual reality experience. Specifically, the processor 31 adjusts the virtual reality scene according to the pose data of the smart terminal 30; for example, when the user's perspective moves to the right, the virtual reality scene correspondingly moves to the left, and when the user's perspective moves to the left, the virtual reality scene correspondingly moves to the right.
In one embodiment, the processor 31 determines the transformation relationship between the user's line of sight and the preset reference direction of the smart terminal 30 when the smart terminal 30 is worn on the user's head, determines a pose description of the preset reference direction of the smart terminal 30 according to the pose data of the smart terminal 30, determines a pose description of the user's line of sight according to the pose description of the preset reference direction and the transformation relationship, and determines the intersection of the user's line of sight and the virtual reality scene according to the pose description of the user's line of sight.
The preset reference direction of the smart terminal 30 is a predefined direction and can be chosen according to the actual situation. In one embodiment, the direction of the display screen of the smart terminal 30 is taken as the preset reference direction; of course, the direction perpendicular to the display screen may also be used.
After the preset reference direction is determined, the processor 31 determines the pose description of the preset reference direction of the smart terminal 30 according to the pose data of the smart terminal 30, where the pose description of the preset reference direction represents the amount of translation or rotation of the preset reference direction. For example, when the preset reference direction is the direction of the display screen of the smart terminal 30, the pose description represents the translation or rotation of that direction. Specifically, the processor 31 integrates the readings of the acceleration sensor or angular velocity sensor over time to obtain the pose description of the preset reference direction of the smart terminal 30.
From the pose description of the preset reference direction and the transformation relationship between the preset reference direction of the smart terminal 30 and the user's line of sight, the processor 31 can then determine the pose description of the user's line of sight, that is, the direction of the user's line of sight.
In another embodiment, the processor 31 maps the virtual reality scene into a space model established in the world coordinate system, calculates the pose description of the user's line of sight in the world coordinate system according to the pose data of the smart terminal 30, and determines the intersection of the user's line of sight and the virtual reality scene in the space model based on that pose description.
Specifically, the processor 31 initializes the sensors of the smart terminal 30. After receiving a signal that the display content is to be refreshed, the processor 31 starts drawing the display interface, reads the initial sensor data, and maps the virtual reality scene into the space model established in the world coordinate system. It then adjusts the display content of the virtual reality scene based on the sensor data and renders the content in 3D.
In this embodiment, the processor 31 calculates the pose description of the user's line of sight in the world coordinate system from the rotation matrix and the pose data of the smart terminal 30. The pose description of the user's line of sight in the world coordinate system reflects the direction of the line of sight relative to the earth, i.e., the real environment. In one embodiment, the smart terminal 30 runs the Android system, and the processor 31 can determine the pose description of the user's line of sight in the world coordinate system with the SensorManager.getRotationMatrix function.
In this embodiment, the processor 31 performs a corresponding operation on the display content at the intersection position in the virtual reality scene.
Specifically, the processor 31 determines, according to the pose description of the user's line of sight, whether there is an intersection between the user's line of sight and the content displayed in the virtual reality scene, and determines whether the intersection coincides with a graphic control of the virtual reality scene. If it coincides with the graphic control, it further determines whether a trigger signal is received; if a trigger signal is received, a touch operation signal is sent to the graphic control, and if no trigger signal is received, a floating operation signal is sent to the graphic control.
The graphic control may be an icon corresponding to an application.
In a specific application scenario, the smart terminal 30 is disposed in VR glasses provided with at least one physical button that rests on the touch screen of the smart terminal 30. When the physical button of the VR glasses is pressed, it presses against the touch screen; the processor 31 determines whether the touch screen of the smart terminal 30 has detected the trigger signal generated by the pressing of the physical button. If the trigger signal is received, a touch operation signal is sent to the graphic control; otherwise, a floating operation signal is sent to the graphic control.
Different from the prior art, the virtual reality-based sensing and control method of the present application includes: displaying a virtual reality scene by using a smart terminal worn on the user's head; acquiring pose data of the smart terminal; determining the intersection of the user's line of sight and the virtual reality scene according to the pose data; and performing a corresponding operation on the display content at the intersection position in the virtual reality scene. The method determines the direction of the user's line of sight from the pose data of the smart terminal, calculates the intersection position of that line of sight with the virtual reality scene, and performs a corresponding operation on the display content at the intersection position. Combined with the motion characteristics of the human head, the display content in the virtual reality scene can be operated more intuitively and flexibly, improving the convenience of user operation in the virtual reality scene.
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of an embodiment of the device having a storage function of the present application. In this embodiment, the device 40 having a storage function stores program instructions 41, and the program instructions 41 can be executed to implement the virtual reality-based sensing and control method of the present application.
The sensing and control method has been described in detail above; for details, refer to FIG. 1, FIG. 2 and the related descriptions, which are not repeated here.
Different from the prior art, the virtual reality-based sensing and control method of the present application includes: displaying a virtual reality scene by using a smart terminal worn on the user's head; acquiring pose data of the smart terminal; determining the intersection of the user's line of sight and the virtual reality scene according to the pose data; and performing a corresponding operation on the display content at the intersection position in the virtual reality scene. The method determines the direction of the user's line of sight from the pose data of the smart terminal, calculates the intersection position of that line of sight with the virtual reality scene, and performs a corresponding operation on the display content at the intersection position. Combined with the motion characteristics of the human head, the display content in the virtual reality scene can be operated more intuitively and flexibly, improving the convenience of user operation in the virtual reality scene.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division into modules or units is only a logical functional division, and in actual implementation there may be other divisions: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage device.
Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage device, which includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods in the embodiments of the present application. The aforementioned storage device includes various devices capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
The above are only embodiments of the present application and do not thereby limit the patent scope of the present application; any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (20)

  1. A virtual reality-based sensing and control method, wherein the sensing and control method comprises:
    displaying a virtual reality scene by using a smart terminal worn on a user's head;
    acquiring pose data of the smart terminal;
    determining, according to the pose data, an intersection of the user's line of sight and the virtual reality scene;
    performing a corresponding operation on display content at the position of the intersection in the virtual reality scene.
  2. The sensing and control method according to claim 1, wherein the step of determining, according to the pose data, the intersection of the user's line of sight and the virtual reality scene comprises:
    determining a transformation relationship between the user's line of sight and a preset reference direction of the smart terminal when the smart terminal is worn on the user's head;
    determining a pose description of the preset reference direction of the smart terminal according to the pose data;
    determining a pose description of the user's line of sight according to the pose description of the preset reference direction and the transformation relationship.
  3. The sensing and control method according to claim 1, wherein the step of determining, according to the pose data, the intersection of the user's line of sight and the virtual reality scene comprises:
    mapping the virtual reality scene into a space model, wherein the space model is established in a world coordinate system;
    calculating, according to the pose data of the smart terminal, a pose description of the user's line of sight in the world coordinate system;
    determining, according to the pose description, the intersection of the user's line of sight and the virtual reality scene in the space model.
  4. The sensing and control method according to claim 1, wherein the step of performing a corresponding operation on the display content at the position of the intersection in the virtual reality scene comprises:
    determining whether the intersection coincides with a graphic control of the virtual reality scene;
    if it coincides with the graphic control, further determining whether a trigger signal is received;
    if the trigger signal is received, sending a touch operation signal to the graphic control.
  5. The sensing and control method according to claim 4, wherein after the step of further determining whether a trigger signal is received, the method further comprises:
    if the trigger signal is not received, sending a floating operation signal to the graphic control.
  6. The sensing and control method according to claim 4, wherein the smart terminal is disposed in VR glasses, the VR glasses are provided with at least one physical button, the physical button is disposed on a touch screen of the smart terminal, and when the physical button of the VR glasses is pressed, the physical button presses against the touch screen of the smart terminal;
    the step of determining whether a trigger signal is received further comprises:
    determining whether the touch screen of the smart terminal detects the trigger signal generated by the pressing of the physical button.
  7. The sensing and control method according to claim 1, wherein the pose data comprises at least one of, or a combination of, position data and posture data.
  8. The sensing and control method according to claim 1, wherein the step of acquiring the pose data of the smart terminal further comprises:
    adjusting the virtual reality scene according to the pose data of the smart terminal.
  9. A smart terminal, wherein the smart terminal comprises a processor and a memory coupled to the processor;
    the memory stores program instructions to be executed by the processor and intermediate data generated when the processor executes the program instructions;
    when executing the program instructions, the processor can implement the following steps:
    displaying a virtual reality scene by using the smart terminal worn on a user's head;
    acquiring pose data of the smart terminal;
    determining, according to the pose data, an intersection of the user's line of sight and the virtual reality scene;
    performing a corresponding operation on display content at the position of the intersection in the virtual reality scene.
  10. The smart terminal according to claim 9, wherein the step of determining, according to the pose data, the intersection of the user's line of sight and the virtual reality scene comprises:
    mapping the virtual reality scene into a space model, wherein the space model is established in a world coordinate system;
    calculating, according to the pose data of the smart terminal, a pose description of the user's line of sight in the world coordinate system;
    determining, according to the pose description, the intersection of the user's line of sight and the virtual reality scene in the space model.
  11. The smart terminal according to claim 9, wherein the step of performing a corresponding operation on the display content at the position of the intersection in the virtual reality scene comprises:
    determining whether the intersection coincides with a graphic control of the virtual reality scene;
    if it coincides with the graphic control, further determining whether a trigger signal is received;
    if the trigger signal is received, sending a touch operation signal to the graphic control.
  12. The smart terminal according to claim 11, wherein after the step of further determining whether a trigger signal is received, the following is further included:
    if the trigger signal is not received, sending a floating operation signal to the graphic control.
  13. The smart terminal according to claim 11, wherein the smart terminal is disposed in VR glasses, the VR glasses are provided with at least one physical button, the physical button is disposed on a touch screen of the smart terminal, and when the physical button of the VR glasses is pressed, the physical button presses against the touch screen of the smart terminal;
    the step of determining whether a trigger signal is received further comprises:
    determining whether the touch screen of the smart terminal detects the trigger signal generated by the pressing of the physical button.
  14. A device having a storage function, wherein program instructions are stored thereon, and the program instructions can be executed to implement the following steps:
    displaying a virtual reality scene by using a smart terminal worn on a user's head;
    acquiring pose data of the smart terminal;
    determining, according to the pose data, an intersection of the user's line of sight and the virtual reality scene;
    performing a corresponding operation on display content at the position of the intersection in the virtual reality scene.
  15. The device having a storage function according to claim 14, wherein the step of determining, according to the pose data, the intersection of the user's line of sight and the virtual reality scene comprises:
    determining a transformation relationship between the user's line of sight and a preset reference direction of the smart terminal when the smart terminal is worn on the user's head;
    determining a pose description of the preset reference direction of the smart terminal according to the pose data;
    determining a pose description of the user's line of sight according to the pose description of the preset reference direction and the transformation relationship.
  16. The device having a storage function according to claim 14, wherein the step of determining, according to the pose data, the intersection of the user's line of sight and the virtual reality scene comprises:
    mapping the virtual reality scene into a space model, wherein the space model is established in a world coordinate system;
    calculating, according to the pose data of the smart terminal, a pose description of the user's line of sight in the world coordinate system;
    determining, according to the pose description, the intersection of the user's line of sight and the virtual reality scene in the space model.
  17. The device having a storage function according to claim 14, wherein the step of performing a corresponding operation on the display content at the position of the intersection in the virtual reality scene comprises:
    determining whether the intersection coincides with a graphic control of the virtual reality scene;
    if it coincides with the graphic control, further determining whether a trigger signal is received;
    if the trigger signal is received, sending a touch operation signal to the graphic control.
  18. The device having a storage function according to claim 17, wherein after the step of further determining whether a trigger signal is received, the following is further included:
    if the trigger signal is not received, sending a floating operation signal to the graphic control.
  19. The device having a storage function according to claim 17, wherein the smart terminal is disposed in VR glasses, the VR glasses are provided with at least one physical button, the physical button is disposed on a touch screen of the smart terminal, and when the physical button of the VR glasses is pressed, the physical button presses against the touch screen of the smart terminal;
    the step of determining whether a trigger signal is received further comprises:
    determining whether the touch screen of the smart terminal detects the trigger signal generated by the pressing of the physical button.
  20. The device having a storage function according to claim 14, wherein the step of acquiring the pose data of the smart terminal further comprises:
    adjusting the virtual reality scene according to the pose data of the smart terminal.
PCT/CN2019/076648 2018-03-01 2019-03-01 Smart terminal, sensing and control method therefor, and device having storage function WO2019166005A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/976,773 US20210041942A1 (en) 2018-03-01 2019-03-01 Sensing and control method based on virtual reality, smart terminal, and device having storage function
EP19761491.0A EP3761154A4 (en) 2018-03-01 2019-03-01 INTELLIGENT TERMINAL DEVICE, SENSOR CONTROL PROCESS FOR IT AND DEVICE WITH MEMORY FUNCTION

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810170954.5A 2018-03-01 2018-03-01 Smart terminal, sensing and control method therefor, and device having storage function
CN201810170954.5 2018-03-01

Publications (1)

Publication Number Publication Date
WO2019166005A1 true WO2019166005A1 (zh) 2019-09-06

Family

ID=63658355

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/076648 WO2019166005A1 (zh) 2018-03-01 2019-03-01 Smart terminal, sensing and control method therefor, and device having storage function

Country Status (4)

Country Link
US (1) US20210041942A1 (zh)
EP (1) EP3761154A4 (zh)
CN (1) CN108614637A (zh)
WO (1) WO2019166005A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113608616A (zh) * 2021-08-10 2021-11-05 深圳市慧鲤科技有限公司 Virtual content display method and apparatus, electronic device, and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108614637A (zh) * 2018-03-01 2018-10-02 惠州Tcl移动通信有限公司 Smart terminal, sensing and control method therefor, and device having storage function
CN110308794A (zh) * 2019-07-04 2019-10-08 郑州大学 Virtual reality helmet with two display modes and method for controlling the display modes
CN111651069A (zh) * 2020-06-11 2020-09-11 浙江商汤科技开发有限公司 Virtual sand table display method and apparatus, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5876607B1 (ja) * 2015-06-12 2016-03-02 株式会社コロプラ Floating graphical user interface
CN105912110A (zh) * 2016-04-06 2016-08-31 北京锤子数码科技有限公司 Method, apparatus and system for target selection in a virtual reality space
CN106681506A (zh) * 2016-12-26 2017-05-17 惠州Tcl移动通信有限公司 Interaction method for a non-VR application in a terminal device, and terminal device
CN107728776A (zh) * 2016-08-11 2018-02-23 成都五维译鼎科技有限公司 Information collection method, apparatus, terminal and system, and user terminal
CN108614637A (zh) * 2018-03-01 2018-10-02 惠州Tcl移动通信有限公司 Smart terminal, sensing and control method therefor, and device having storage function

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10078377B2 (en) * 2016-06-09 2018-09-18 Microsoft Technology Licensing, Llc Six DOF mixed reality input by fusing inertial handheld controller with hand tracking

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5876607B1 (ja) * 2015-06-12 2016-03-02 株式会社コロプラ Floating graphical user interface
CN105912110A (zh) * 2016-04-06 2016-08-31 北京锤子数码科技有限公司 Method, apparatus and system for target selection in a virtual reality space
CN107728776A (zh) * 2016-08-11 2018-02-23 成都五维译鼎科技有限公司 Information collection method, apparatus, terminal and system, and user terminal
CN106681506A (zh) * 2016-12-26 2017-05-17 惠州Tcl移动通信有限公司 Interaction method for a non-VR application in a terminal device, and terminal device
CN108614637A (zh) * 2018-03-01 2018-10-02 惠州Tcl移动通信有限公司 Smart terminal, sensing and control method therefor, and device having storage function

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3761154A4

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113608616A (zh) * 2021-08-10 2021-11-05 深圳市慧鲤科技有限公司 Virtual content display method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
EP3761154A4 (en) 2022-01-05
US20210041942A1 (en) 2021-02-11
CN108614637A (zh) 2018-10-02
EP3761154A1 (en) 2021-01-06

Similar Documents

Publication Publication Date Title
US11221730B2 (en) Input device for VR/AR applications
US20220121344A1 (en) Methods for interacting with virtual controls and/or an affordance for moving virtual objects in virtual environments
EP3250983B1 (en) Method and system for receiving gesture input via virtual control objects
CN107771309B (zh) 处理三维用户输入的方法
CN108475120B (zh) 用混合现实系统的远程设备进行对象运动跟踪的方法及混合现实系统
CN107810465B (zh) 用于产生绘制表面的系统和方法
CN110633008B (zh) 用户交互解释器
WO2019166005A1 (zh) 智能终端及其感控方法、具有存储功能的装置
US11379033B2 (en) Augmented devices
EP3814876B1 (en) Placement and manipulation of objects in augmented reality environment
KR102021851B1 (ko) 가상현실 환경에서의 사용자와 객체 간 상호 작용 처리 방법
JP2014531693A (ja) 動き制御されたリストスクローリング
WO2019095360A1 (zh) 虚拟场景中菜单处理方法、装置及存储介质
TW202138971A (zh) 互動方法、裝置、互動系統、電子設備及存儲介質
US20230100689A1 (en) Methods for interacting with an electronic device
TW202328872A (zh) 元宇宙內容模態映射
CN112534390A (zh) 用于提供虚拟输入工具的电子装置及其方法
WO2016102948A1 (en) Coherent touchless interaction with stereoscopic 3d images
CN118012265A (zh) 人机交互方法、装置、设备和介质
CN117695648A (zh) 虚拟角色的移动和视角控制方法、装置、电子设备和介质
WO2024091371A1 (en) Artificial reality input for two-dimensional virtual objects
CN118022307A (zh) 调整虚拟对象位置的方法、装置、设备、介质和程序产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19761491

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2019761491

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2019761491

Country of ref document: EP

Effective date: 20201001