WO2018196552A1 - Hand display method and apparatus for use in a virtual reality scene - Google Patents

Hand display method and apparatus for use in a virtual reality scene

Info

Publication number
WO2018196552A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
hand
hand object
menu
item
Prior art date
Application number
PCT/CN2018/081258
Other languages
English (en)
French (fr)
Inventor
沈超
王学强
王洪浩
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201710278577.2A (CN107132917B)
Priority claimed from CN201710292385.7A (CN107168530A)
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2018196552A1
Priority to US16/509,038 (US11194400B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/211Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/212Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25Output arrangements for video game devices
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/428Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525Changing parameters of virtual cameras
    • A63F13/5255Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082Virtual reality

Definitions

  • the embodiments of the present application relate to the field of virtual reality (VR), and in particular, to a hand display method and apparatus for use in a virtual reality scene.
  • in a related VR (virtual reality) system, the VR handle is provided with buttons corresponding to the user's fingers, and a virtual hand is provided in the virtual environment; the position of the virtual hand moves along with the movement of the VR handle.
  • when a finger presses a button, the corresponding finger of the virtual hand in the virtual environment is curled up; when the finger releases the button, the finger of the virtual hand in the virtual environment is raised.
  • when the virtual hand in the virtual scene is in contact with a virtual item, if the thumb and the index finger press the corresponding buttons at the same time, the virtual hand can grasp the virtual item in the virtual environment in its palm.
  • the above interaction mode is a near-field interaction mode: the virtual item can be grasped only after the virtual hand has been moved, by moving the VR handle, to a position in contact with the virtual item.
  • the embodiments of the present application provide a hand display method and device for use in a virtual reality scene, which can solve the problem that, in the VR system, a virtual item can be grasped only when the virtual hand is in contact with the virtual item, so that capturing a distant virtual item is inefficient.
  • the technical solution is as follows:
  • a hand display method for use in a virtual reality scene is provided, applied to a virtual reality host, the method comprising:
  • displaying a first hand object, the first hand object including a ray extending along the front of a finger and displayed in a hidden manner, the first hand object referring to a pose animation when the virtual hand is not holding a virtual item and is not indicating a virtual item to be held, the virtual item being a virtual item that the virtual hand can pick up, hold, and put down;
  • displaying a second hand object when it is determined, according to an input signal sent by an input device, that the ray intersects the virtual item, the second hand object including a ray extending along the front of the finger and displayed explicitly; and
  • displaying a third hand object when a selection instruction is received, the third hand object being a hand object holding the virtual item.
  • an object processing method in a virtual scene is provided, applied to a virtual reality host, the method including:
  • detecting a second operation performed on the first target object, wherein the second operation is used to indicate moving the second target object to the location of a target menu object in the at least one first menu object in the virtual scene;
  • performing a target processing operation in the virtual scene in response to the second operation, wherein the target processing operation is a processing operation corresponding to the target menu object, and each first menu object in the at least one first menu object corresponds to a processing operation.
  • the method further includes:
  • the at least one first menu object is deleted in the virtual scene in response to the third operation.
  • before the performing the target processing operation in the virtual scene in response to the second operation, the method further includes:
  • a flag is set for the target menu object in the virtual scene in response to the second operation, wherein the flag is used to indicate that the second target object moves to a location where the target menu object is located.
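  • For illustration only, the following Python sketch shows one way the operation flow described above could be organized (a first operation spawns the first menu objects around the virtual object corresponding to the input device, a second operation flags the target menu object and runs its processing operation, and a third operation removes the menu); the class, method, and helper names are assumptions, not identifiers from this application.

```python
class HandMenuController:
    """Minimal sketch of the first/second/third operation flow; `scene` is an
    assumed wrapper exposing spawn_menu_around, menu_under, and remove."""

    def __init__(self, scene):
        self.scene = scene
        self.menu_objects = []          # the "at least one first menu object"

    def on_first_operation(self, second_target_object):
        # Generate the menu objects around the virtual object ("second target
        # object") that corresponds to the real input device.
        self.menu_objects = self.scene.spawn_menu_around(second_target_object)

    def on_second_operation(self, second_target_object):
        # Find the menu object the second target object was moved onto,
        # set a flag on it, then perform its processing operation.
        target = self.scene.menu_under(second_target_object, self.menu_objects)
        if target is not None:
            target.flagged = True            # marks the target menu object
            target.processing_operation()    # one operation per menu object

    def on_third_operation(self):
        # Delete every first menu object from the virtual scene.
        for menu_object in self.menu_objects:
            self.scene.remove(menu_object)
        self.menu_objects = []
```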
  • the generating, in response to the first operation, the at least one first menu object corresponding to the second target object in the virtual scene comprises:
  • acquiring a current target scene in the virtual scene, and generating, around the second target object in the virtual scene and according to a predetermined correspondence between virtual scenes and menu objects, the at least one first menu object corresponding to the target scene.
  • the generating, in response to the first operation, the at least one first menu object corresponding to the second target object in the virtual scene includes at least one of the following:
  • generating the at least one first menu object at a predetermined interval on a predetermined circumference, wherein the predetermined circumference is a circumference centered on the position of the second target object and having a predetermined distance as its radius; and
  • generating the at least one first menu object in a predetermined arrangement order in a predetermined direction of the second target object, wherein the predetermined direction comprises at least one of: above, below, left, and right,
  • and the predetermined arrangement order includes at least one of the following: a linear arrangement order and a curved arrangement order.
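  • As an illustration of the placement alternatives just listed, the following sketch computes candidate positions for the first menu objects, either at equal intervals on a circumference centered on the second target object's position or in a linear arrangement along a predetermined direction; the plane of the circle, the spacing, and the function names are assumptions rather than details fixed by this application.

```python
import math

def positions_on_circumference(center, radius, count):
    """Equally spaced positions on a circle centered on the second target
    object's position (x, y, z); the circle is assumed to lie in the
    horizontal plane."""
    cx, cy, cz = center
    return [
        (cx + radius * math.cos(2 * math.pi * i / count),
         cy,
         cz + radius * math.sin(2 * math.pi * i / count))
        for i in range(count)
    ]

def positions_in_direction(center, direction, spacing, count):
    """Linear arrangement of menu objects along a predetermined direction,
    e.g. direction=(0, 1, 0) for "above" the second target object."""
    cx, cy, cz = center
    dx, dy, dz = direction
    return [(cx + dx * spacing * (i + 1),
             cy + dy * spacing * (i + 1),
             cz + dz * spacing * (i + 1)) for i in range(count)]
```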
  • the performing the target processing operation in the virtual scene in response to the second operation comprises at least one of the following:
  • a hand display device for use in a virtual reality scene, the device comprising:
  • a first display module configured to display a first hand object
  • the first hand object including a ray extending along the front of a finger and displayed in a hidden manner, the first hand object being a gesture animation when the virtual hand is not holding a virtual item and is not indicating a virtual item to be held, the virtual item being a virtual item that the virtual hand can pick up, hold, and put down;
  • a second display module configured to display a second hand object when it is determined, according to an input signal sent by the input device, that the ray intersects the virtual item, where the second hand object includes a ray extending along the front of the finger and displayed explicitly;
  • a third display module configured to display a third hand object when the selection instruction is received, where the third hand object is a hand object that holds the virtual item.
  • an object processing apparatus in a virtual scene comprising:
  • a first detecting unit configured to detect a first operation performed on the first target object in the real scene
  • a first response unit configured to generate at least one first menu object corresponding to the second target object in the virtual scene in response to the first operation, wherein the second target object is a virtual object corresponding to the first target object in the virtual scene;
  • a second detecting unit configured to detect a second operation performed on the first target object, wherein the second operation is used to indicate that the second target object is moved to the location of a target menu object in the at least one first menu object in the virtual scene;
  • a second response unit configured to perform a target processing operation in the virtual scene in response to the second operation, wherein the target processing operation is a processing operation corresponding to the target menu object, and each first menu object in the at least one first menu object corresponds to a processing operation.
  • the device further includes:
  • a third detecting unit configured to detect a third operation performed on the first target object after the generating, by the first operation, the at least one first menu object corresponding to the second target object in the virtual scene;
  • a third response unit configured to delete the at least one first menu object in the virtual scene in response to the third operation.
  • the device further includes:
  • a fourth response unit configured to set a flag for the target menu object in the virtual scene in response to the second operation, before the performing a target processing operation in the virtual scene in response to the second operation
  • the flag is used to indicate that the second target object moves to a location where the target menu object is located.
  • the first response unit includes:
  • An acquiring module configured to acquire a current target scene in the virtual scene when the first operation is detected
  • a first generating module configured to generate, according to a correspondence between a predetermined virtual scene and a menu object, the at least one first menu object corresponding to the target scene, around the second target object in the virtual scene .
  • the first response unit comprises at least one of the following modules:
  • a second generating module configured to generate the at least one first menu object at a predetermined interval on a predetermined circumference, wherein the predetermined circumference is a circumference centered on the position of the second target object and having a predetermined distance as its radius;
  • a third generating module configured to generate the at least one first menu object in a predetermined arrangement order in a predetermined direction of the second target object, wherein the predetermined direction comprises at least one of: above, below, left, and right, and the predetermined arrangement order includes at least one of the following: a linear arrangement order and a curved arrangement order.
  • the second response unit comprises at least one of the following modules:
  • a fourth generation module configured to generate at least one second menu object in the virtual scene, where the at least one second menu object is a drop-down menu object of the target menu object;
  • a switching module configured to switch the first scene in the virtual scene to the second scene
  • a setting module configured to set an attribute of the operation object in the virtual scene as a target attribute
  • control module configured to control an operation object in the virtual scene to perform a target task.
  • a virtual reality host is provided, including a processor and a memory, the memory storing at least one instruction, the at least one instruction being loaded and executed by the processor to implement the hand display method for use in a virtual reality scene provided by the above aspect, or the object processing method in a virtual scene provided by the above aspect.
  • a computer readable storage medium is provided, having stored therein at least one instruction, the at least one instruction being loaded and executed by a processor to implement the hand display method for use in a virtual reality scene provided by the above aspect, or the object processing method in a virtual scene provided by the above aspect.
  • FIG. 1A is a schematic structural diagram of a hand display system for use in a virtual reality scenario according to an embodiment of the present application
  • FIG. 1B is a schematic structural diagram of an input device used in a virtual reality scenario according to an embodiment of the present application
  • FIG. 2 is a flowchart of a hand display method for a virtual reality scene provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of a first hand object in a virtual reality scenario provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a second hand object and a virtual item in a virtual reality scene provided by an embodiment of the present application
  • FIG. 5 is a schematic diagram of switching a second hand object to a third hand object in a virtual reality scene according to an embodiment of the present application
  • FIG. 6 is a schematic diagram of a third hand object in a virtual reality scenario provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a third hand object in a virtual reality scenario provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a third hand object in a virtual reality scenario according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a third hand object in a virtual reality scenario provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of switching a third hand object to a first hand object in a virtual reality scene according to an embodiment of the present application
  • FIG. 11 is a schematic diagram of switching a third hand object to a first hand object in a virtual reality scene according to an embodiment of the present application
  • FIG. 12 is a schematic diagram of a fourth hand object in a virtual reality scenario according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a fifth hand object in a virtual reality scenario according to an embodiment of the present application.
  • FIG. 14 is a schematic diagram of switching a first hand object to a third hand object in a virtual reality scene according to an embodiment of the present application.
  • FIG. 15 is a flowchart of a hand display method for a virtual reality scene according to another embodiment of the present application.
  • FIG. 16 is a schematic diagram of a hardware environment of an object processing method in a virtual scene according to an embodiment of the present application
  • FIG. 17 is a flowchart of an object processing method in an optional virtual scenario according to an embodiment of the present application.
  • FIG. 18 is a schematic diagram of an optional hand menu in a virtual reality environment according to an embodiment of the present application.
  • FIG. 19 is a schematic diagram of another optional hand menu in a virtual reality environment according to an embodiment of the present application.
  • FIG. 20 is a schematic diagram of an optional hand menu in a virtual reality environment according to an embodiment of the present application.
  • FIG. 21 is a schematic diagram of an optional menu control logic in accordance with an embodiment of the present application.
  • FIG. 22 is a block diagram of a hand type display device for use in a virtual reality scene according to an embodiment of the present application
  • FIG. 23 is a schematic diagram of an object processing apparatus in an optional virtual scenario according to an embodiment of the present application.
  • FIG. 24 is a schematic diagram of an object processing apparatus in another optional virtual scenario according to an embodiment of the present application.
  • FIG. 25 is a schematic diagram of an object processing apparatus in another optional virtual scene according to an embodiment of the present application.
  • FIG. 26 is a schematic diagram of an object processing apparatus in another optional virtual scene according to an embodiment of the present application.
  • FIG. 27 is a schematic diagram of an object processing apparatus in another optional virtual scene according to an embodiment of the present application.
  • FIG. 28 is a schematic diagram of an object processing apparatus in another optional virtual scene according to an embodiment of the present application.
  • FIG. 29 is a block diagram of a VR system provided by an embodiment of the present application.
  • FIG. 30 is a structural block diagram of a terminal according to an embodiment of the present application.
  • VR (virtual reality), also known as virtual environment technology, uses computer simulation to generate a three-dimensional virtual world and provides the user with simulations of vision and other senses, so that the user feels immersed in the scene and can observe objects in the three-dimensional space in a timely and unrestricted manner.
  • the development kit includes a head-mounted display, two single-handheld controllers, and a positioning system that simultaneously tracks the display and controller in space.
  • Oculus is a US virtual reality technology company founded by Palmer Luckey and Brendan Iribe. Their first product, the Oculus Rift, is a realistic virtual reality head-mounted display.
  • Oculus Touch is the motion capture handle of the Oculus Rift, used in conjunction with the space positioning system; the Oculus Touch uses a bracelet-like design that allows the camera to track the user's hand, and its sensors can also track finger movements while providing the user with a comfortable grip.
  • FIG. 1A is a schematic structural diagram of a VR system according to an embodiment of the present application.
  • the VR system includes a head mounted display 120, a virtual reality host 140, and an input device 160.
  • the head mounted display 120 is a display worn on the user's head to display images.
  • the head mounted display 120 generally includes a wearing portion and a display portion, the wearing portion including temples and an elastic band for wearing the head mounted display 120 on the user's head, and the display portion including a left eye display and a right eye display.
  • the head mounted display 120 is capable of displaying different images on the left eye display and the right eye display to simulate a three dimensional virtual environment for the user.
  • the head mounted display 120 is provided with a motion sensor for capturing the user's head motion, so that the virtual reality host 140 changes the picture displayed in the head mounted display 120 according to the user's head motion.
  • the head mounted display 120 is electrically coupled to the virtual reality host 140 via a flexible circuit board or hardware interface or data line or wireless network.
  • the virtual reality host 140 is configured to model a three-dimensional virtual environment, generate a three-dimensional display image corresponding to the three-dimensional virtual environment, generate a virtual object in the three-dimensional virtual environment, and the like.
  • the virtual reality host 140 can also model a two-dimensional virtual environment, generate a two-dimensional display image corresponding to the two-dimensional virtual environment, and generate a virtual object in the two-dimensional virtual environment; or, the virtual reality host 140 can model the three-dimensional virtual environment.
  • the two-dimensional display screen corresponding to the three-dimensional virtual environment and the two-dimensional projection image of the virtual object in the three-dimensional virtual environment are generated according to the user's perspective position, which is not limited in this embodiment.
  • the virtual reality host 140 may be integrated in the interior of the head mounted display 120 or integrated in other devices than the head mounted display 120, which is not limited in this embodiment.
  • the virtual reality host 140 is integrated into other devices different from the head mounted display 120 as an example for description.
  • the other device may be a desktop computer or a server, etc., which is not limited in this embodiment. That is, the virtual reality host 140 may be part of the head mounted display 120 (software, hardware, or a combination of hardware and software) in actual implementation; or it may be a terminal; or it may be a server.
  • the virtual reality host 140 receives an input signal from the input device 160 and generates a display screen of the head mounted display 120 based on the input signal.
  • the virtual reality host 140 is typically implemented by electronic components such as a processor, a memory, and an image processor disposed on a circuit board.
  • the virtual reality host 140 further includes an image capture device for capturing the user's head motion and changing the picture displayed in the head mounted display 120 according to the user's head motion.
  • the virtual reality host 140 is connected to the input device 160 via a cable, Bluetooth connection, or Wi-Fi (Wireless-Fidelity) connection.
  • the input device 160 can be at least one input peripheral of a somatosensory glove, a somatosensory handle, a remote control, a treadmill, a mouse, a keyboard, and a human eye focusing device.
  • the input device 160 may also be referred to by a different name, such as a first target object, a first target device, and the like.
  • the input device 160 is used as a device that requires two-handed manipulation by the user.
  • the input device 160 is a somatosensory glove or a somatosensory handle.
  • the user's two-handed input device 160 generally includes a pair of VR handles 161, 162.
  • the left and right hands respectively control one of a pair of VR handles, such as: the left hand controls the VR handle 161, and the right hand controls VR handle 162.
  • a physical button 163 and a motion sensor 164 are disposed in the VR handle 161 or 162.
  • motion sensor 164 is disposed inside the VR handle.
  • the physical button 163 is used to receive an operation command triggered by the user.
  • the position of the physical button 163 disposed on each VR handle is determined according to the position of the finger when the user holds the VR handle. For example, in FIG. 1B, the position where the user's right thumb is located is provided with a physical button 1631, and the position where the index finger is located is provided with a physical button 1632.
  • a plurality of physical buttons 163 may be disposed in the area where the same finger acts, for example, three physical buttons 163 are disposed in the active area of the thumb, and the three physical buttons 163 are all controlled by the thumb; alternatively, only one physical button 163 may be disposed in the area where multiple fingers act, for example, only one physical button 163 is disposed in the area where the ring finger and the little finger act, and the physical button 163 may be controlled by either the ring finger or the little finger. This embodiment does not limit the number of physical buttons disposed on each VR handle.
  • the physical button 163 implements at least one of a selection function, a view menu bar function, and a pressure detection function.
  • Each function can be implemented by one physical button or by multiple physical buttons.
  • the VR handle 161 is provided with a joystick, a trigger button, a menu bar button and a pressure detection button.
  • when the menu bar button is pressed, the services provided by the VR system are displayed (implementing the function of viewing the menu bar), for example, game options or various movie options; when the trigger button is pressed, the game selected by the user is displayed (implementing a selection function through a single physical button), for example, a table tennis game, in which case a virtual environment for playing table tennis is displayed;
  • the virtual hand selects the virtual table tennis ball in the virtual environment (implementing the selection function through two physical buttons); when the pressure detection button is pressed, the strength with which the virtual hand grabs the virtual table tennis ball is adjusted according to the pressure data detected by the pressure detection button (implementing the pressure detection function).
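  • A hedged sketch of the pressure-based grip adjustment mentioned above: the application only states that the grip strength is adjusted according to the pressure data detected by the pressure detection button, so the linear mapping and parameter names below are illustrative assumptions.

```python
def grip_strength_from_pressure(pressure, max_pressure=1.0):
    """Map the detected button pressure to a normalized grip strength in
    [0, 1] used when the virtual hand grabs the virtual table tennis ball."""
    pressure = max(0.0, min(pressure, max_pressure))
    return pressure / max_pressure
```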
  • the physical button 163 can also be used to implement other functions, such as a system button for implementing the function of activating and/or deactivating the input device 160, and a grip button for implementing the function of detecting whether the user is holding the input device 160, which are not listed here one by one in this embodiment.
  • the functions of the physical buttons 163 provided on different VR handles may be the same or different, which is not limited in this embodiment.
  • the physical button 1632 for implementing the selection function is disposed on the VR handle 161, and the physical button 1632 is not disposed on the VR handle 162.
  • buttons 163 may be implemented as a virtual button implemented by using a touch screen, which is not limited in this embodiment.
  • the function implemented by the physical button 163 can also be implemented by a physical button disposed on the somatosensory glove, or by controlling the somatosensory glove to form a preset functional gesture, which is not limited in this embodiment.
  • Motion sensor 164 is used to acquire the spatial pose of input device 160.
  • the motion sensor 164 may be any one of a gravity acceleration sensor 1641, a gyro sensor 1642, or a distance sensor 1643.
  • the gravity acceleration sensor 1641 can detect the magnitude of the acceleration in each direction (generally three axes), and can detect the magnitude and direction of gravity at rest, so that the virtual reality host 140 can determine, according to the data output by the gravity acceleration sensor 1641, the posture, moving direction, and moving distance of the user's real hand, and the head mounted display 120 moves the virtual hand, in the displayed virtual environment, according to the moving direction and moving distance determined by the virtual reality host 140.
  • the gyro sensor 1642 can detect the magnitude of the angular velocity in each direction, and detect the rotation motion of the input device 160.
  • the virtual reality host 140 can determine the rotation direction and the rotation angle of the real hand of the user according to the data output by the gyro sensor.
  • the head mounted display 120 rotates the virtual hand in accordance with the rotation direction and the rotation angle determined by the virtual reality host 140 in the displayed virtual environment.
  • the virtual reality host 140 may also determine the rotation direction and the rotation angle of the virtual item selected by the virtual hand according to the data output by the gyro sensor.
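  • The following sketch illustrates, under simplifying assumptions, how the sensor data described above could drive the virtual hand: accelerometer data is integrated into a displacement and gyroscope data into a rotation per frame. Real systems add gravity compensation and sensor fusion, which are omitted here; the `hand` structure and its field names are assumptions.

```python
def update_virtual_hand(hand, accel_sample, gyro_sample, dt):
    """Per-frame virtual-hand update from handle sensors.
    accel_sample: gravity-compensated linear acceleration per axis (m/s^2).
    gyro_sample:  angular velocity per axis (rad/s).
    hand: assumed object with position, velocity, rotation lists of length 3."""
    for axis in range(3):
        # Integrate acceleration twice to obtain the hand displacement.
        hand.velocity[axis] += accel_sample[axis] * dt
        hand.position[axis] += hand.velocity[axis] * dt
        # Integrate angular velocity to obtain the rotation angle.
        hand.rotation[axis] += gyro_sample[axis] * dt
```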
  • the distance sensor 1643 can determine the distance of the user's finger from the input device 160.
  • the virtual reality host 140 can determine the user's hand shape according to the data output by the distance sensor 1643, so that the head mounted display 120 displays, in the displayed virtual environment, the hand object of the virtual hand according to the hand shape determined by the virtual reality host 140. For example, if the virtual reality host 140 determines, according to the distance sensor, that the user's hand shape is a "thumbs up", the head mounted display 120 displays the hand object of the virtual hand as a "thumbs up".
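  • As a sketch of how a hand shape such as "thumbs up" might be derived from distance-sensor data, the rule and threshold below are illustrative assumptions; the application only states that the hand shape is determined from the distance between each finger and the input device.

```python
def classify_hand_shape(finger_distances, extended_threshold=0.03):
    """finger_distances: dict mapping finger name to its distance (in meters)
    from the input device; a finger farther than the threshold is treated as
    extended."""
    extended = {name: d > extended_threshold for name, d in finger_distances.items()}
    others = ("index", "middle", "ring", "pinky")
    if extended["thumb"] and not any(extended[f] for f in others):
        return "thumbs_up"
    if all(extended.values()):
        return "open_hand"
    return "fist"
```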
  • the number of the types of the motion sensors 164 may be one or more, which is not limited in this embodiment.
  • the hand object is a display object for presenting the posture shape of the virtual hand.
  • the virtual reality host 140 creates a pre-defined virtual hand pose animation by setting a skeleton of the hand on a complete mesh model.
  • the posture animation of the virtual hand may be a two-dimensional posture animation created according to a three-dimensional mesh model, a three-dimensional posture animation created according to a three-dimensional mesh model, or a two-dimensional posture animation created according to a two-dimensional hand model, which is not limited in this embodiment.
  • the hand object may be composed of a frame of fixed (still) gesture animation, or may be composed of multiple frames of motion animation with dynamic effects.
  • the input device 160 may further include other components, such as a processor 165 for controlling each component, a communication component 166 for communicating with the virtual reality host 140, and the like, which are not limited in this embodiment.
  • FIG. 2 is a flowchart of a hand display method for a virtual reality scene provided by an embodiment of the present application. This embodiment is illustrated by using the hand display method for the virtual reality scene in the VR system shown in FIG. 1 .
  • the method can include the following steps:
  • Step 201: Display a first hand object, the first hand object including a ray extending along the front of a finger and displayed in a hidden manner.
  • the first hand object refers to a gesture animation of the virtual hand in an idle state, that is, the gesture animation when the virtual hand does not hold a virtual item and does not indicate a virtual item to be held.
  • the first hand object is represented by a posture animation in which the five fingers are open and each finger is slightly and naturally curved; or by a fist-clenching gesture animation; this embodiment does not limit the specific pose animation of the first hand object. Assuming that the first hand object is as shown by 31 in FIG. 3, it can be seen from FIG. 3 that the first hand object 31 is a gesture animation in which the five fingers of the virtual hand are open and each finger is slightly curved.
  • the hidden display rays that are emitted along the front of the finger are used to indicate the virtual item indicated by the virtual finger in the virtual scene.
  • the finger that emits a ray may be any one of any virtual hand in the virtual scene, which is not limited in this embodiment.
  • the finger that emits the ray is the index finger of the virtual right hand.
  • rays may be emitted along the front of one finger of each of a pair of virtual hands; or a ray may be emitted along the front of one finger of the right virtual hand only; or rays may be emitted along the front of a plurality of fingers of a pair of virtual hands, which is not limited in this embodiment.
  • a virtual item refers to an item that can interact with a virtual hand in a virtual environment.
  • the virtual item is usually a virtual item that the virtual hand can pick up, hold, and put down, such as a virtual weapon, a virtual fruit, a virtual tableware, and the like.
  • Hidden display means that the ray is logically present, but is not displayed to the user via the head mounted display.
  • optionally, the ray may also be displayed explicitly when the first hand object is displayed.
  • Step 202: When it is determined, according to the input signal sent by the input device, that the ray intersects the virtual item, display a second hand object, the second hand object including a ray extending along the front of the finger and displayed explicitly.
  • after displaying the first hand object through the head mounted display, the virtual reality host needs to detect, according to the input signal sent by the input device, whether the ray intersects the virtual item; if they intersect, it indicates that a virtual item that can interact with the virtual hand exists in front of the first hand object, and the second hand object is displayed; if they do not intersect, no virtual item that can interact with the virtual hand exists in front of the first hand object, and at this time the first hand object remains unchanged, and the virtual reality host continues to receive input signals sent by the input device.
  • the virtual reality host detects whether the ray intersects the virtual item when receiving the input signal sent by the input device; or the virtual reality host detects, at predetermined time intervals, whether the ray intersects the virtual item, where the predetermined time interval is longer than the time interval at which the input device sends input signals. This embodiment does not limit the timing at which the virtual reality host detects whether the ray intersects the virtual item.
  • the virtual reality host detects whether the ray intersects the virtual item as follows: the virtual reality host receives the input signal sent by the input device; determines the hand position of the first hand object in the virtual environment according to the input signal; determines the ray position of the ray in the virtual environment according to the hand position; detects whether the ray position overlaps with the item position of the virtual item in the virtual environment; and if the ray position overlaps with the item position, determines that the ray intersects the virtual item.
  • the input signal is a signal collected according to the motion of the real hand corresponding to the first hand object in the real environment, such as a sensor signal collected by a motion sensor in the VR handle; the input signal is used to represent at least one of a moving direction of the real hand, a moving distance, a rotational angular velocity, a rotational direction, and a distance between each finger of the real hand and the input device.
  • the hand position refers to the corresponding coordinate set of the virtual hand in the virtual environment;
  • the ray position refers to the coordinate set corresponding to the hidden display ray in the virtual environment;
  • the item coordinate set refers to the corresponding coordinate set of the virtual item in the virtual environment.
  • when the virtual environment is a three-dimensional environment, the coordinate set is a three-dimensional coordinate set; when the virtual environment is a two-dimensional environment, the coordinate set is a two-dimensional coordinate set.
  • detecting whether the ray position overlaps with the item position of the virtual item in the virtual environment means detecting whether there is an intersection between the coordinate set corresponding to the ray position and the coordinate set corresponding to the item position.
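  • The application describes the overlap test abstractly as an intersection of coordinate sets; one common concrete realization, shown below as a sketch under that assumption, approximates the virtual item by an axis-aligned bounding box and runs a standard slab test against the ray extending along the front of the finger.

```python
def ray_intersects_item(ray_origin, ray_direction, box_min, box_max, max_length=10.0):
    """Slab test of a finite ray against an axis-aligned bounding box.
    All arguments are 3-tuples except max_length (the configurable ray length)."""
    t_near, t_far = 0.0, max_length
    for axis in range(3):
        o, d = ray_origin[axis], ray_direction[axis]
        lo, hi = box_min[axis], box_max[axis]
        if abs(d) < 1e-9:
            if o < lo or o > hi:      # ray parallel to this slab and outside it
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
        if t_near > t_far:
            return False
    return True
```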
  • the second hand object refers to a gesture animation of the virtual hand when indicating a certain virtual item.
  • the second hand object may be represented by a gesture animation in which the index finger is straightened and the other fingers are clenched; or by a gesture animation in which the index finger and the thumb are straightened and the other fingers are clenched (a gun-shaped gesture); this embodiment does not limit the specific pose animation of the second hand object. Assuming that the second hand object is as shown by 41 in FIG. 4, it can be seen from FIG. 4 that the second hand object 41 is a gesture animation in which the index finger of the virtual hand is straightened and the other fingers are clenched.
  • the second hand object includes rays that extend along the front of the finger and are displayed explicitly.
  • the explicitly displayed ray refers to a ray extending along the front of the finger that is displayed in a manner visible to the user when the second hand object is displayed.
  • the explicitly displayed ray extending from the finger of the second hand object may be the explicit display form of the hidden ray of the first hand object extending along the front of the finger, that is, the ray in the second hand object is the same ray as the ray in the first hand object; or the ray in the second hand object may be regenerated by the VR host, which is not limited in this embodiment.
  • the finger extending from the ray in the second hand object is the same as the finger extending the ray in the first hand object.
  • the ray extending from the second hand object also intersects the virtual item.
  • the ray 42 extending from the second hand object 41 intersects the virtual item 43.
  • the head mounted display not only switches from the first hand object to the second hand object, but also displays the virtual item in a preset display mode, the preset display mode being different from the original display mode of the virtual item; that is, the preset display mode is used to highlight the virtual item.
  • the preset display mode includes: an enlarged display, a display in a predetermined color, a contour line display in a predetermined form, and the like, which is not limited in this embodiment. For example, in Figure 4, the virtual item 43 is shown in bold outline lines.
  • the ray extending along the front of the finger may be hidden, which is not limited in this embodiment.
  • the length of the above rays may be infinitely long, 10 meters, 2 meters, 1 meter or other length.
  • the length of the ray is configurable.
  • Step 203: When a selection instruction is received, display the third hand object.
  • the selection instruction is generated by the input device according to the received user-triggered selection operation.
  • the selection operation may be an operation triggered by a user to set a physical button and/or a virtual button on the input device, or may be an operation of the user inputting a voice message, or may be an operation triggered by the finger bending data collected by the somatosensory glove. This embodiment does not limit this.
  • the third hand object is a hand object that holds a virtual item.
  • the third hand object does not include rays that extend along the front of the finger.
  • for example, the virtual reality host displays the second hand object 51 before the selection instruction is received; the ray 52 extending from the second hand object 51 intersects the virtual item "star"; when the selection instruction is received, the third hand object 53 is displayed, holding the "star" in the hand.
  • in a real environment, the gesture with which a user grabs an item may differ for items of different shapes; for example, if the item grabbed by the user is a pistol model, the gesture of grabbing the pistol model is a gun-holding gesture; if the item grabbed by the user is a pen, the gesture of grabbing the pen is a pen-holding gesture. Therefore, the head mounted display displays, according to the type of the virtual item, a third hand object corresponding to that type of virtual item.
  • the third hand object corresponding to the spherical type displayed by the head mounted display is as shown by 61 in FIG. 6; according to FIG. 6, the third hand object 61 is a posture animation in which the five fingers are bent along the surface of the ball.
  • the third hand object corresponding to the gun type displayed by the head mounted display is as shown by 71 in FIG. 7; according to FIG. 7, the third hand object 71 is a posture animation in which the index finger rests on the trigger and the other fingers hold the gun body.
  • the third hand object corresponding to the pen type displayed by the head mounted display is as shown by 81 in FIG. 8; according to FIG. 8, the third hand object 81 is a posture animation in which the index finger, the thumb, and the middle finger pinch the pen barrel and the other fingers are bent.
  • the third hand object corresponding to the rod type displayed by the head mounted display is as shown by 91 in FIG. 9; according to FIG. 9, the third hand object 91 is a posture animation in which each finger is bent around the virtual item.
  • the third hand object can also be a gesture animation corresponding to other types of virtual items, and the embodiment is not enumerated here.
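  • A minimal sketch of the type-dependent display described above: the item type is looked up in a table of prestored grip pose animations, following the FIG. 6 to FIG. 9 examples. The keys and animation names are assumptions, not identifiers from this application.

```python
THIRD_HAND_OBJECTS = {
    "sphere": "grip_ball_five_fingers_bent",      # cf. FIG. 6
    "gun":    "grip_gun_index_on_trigger",        # cf. FIG. 7
    "pen":    "grip_pen_three_finger_pinch",      # cf. FIG. 8
    "rod":    "grip_rod_all_fingers_wrapped",     # cf. FIG. 9
}

def third_hand_object_for(item_type, default="grip_generic"):
    """Return the prestored third hand object for a virtual item type."""
    return THIRD_HAND_OBJECTS.get(item_type, default)
```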
  • in summary, according to the hand display method for the virtual reality scene provided by this embodiment, when the hidden ray of the first hand object extending along the front of the finger intersects a virtual item in the virtual environment, a second hand object is displayed and the ray extending along the front of the finger in the second hand object is displayed, so that the user can know which virtual item is indicated by the second hand object; when the selection instruction is received, the third hand object is displayed, so that the user can know that the virtual item has been captured in the virtual environment. This can solve the problem that, in the VR system, a virtual item can be grasped only when the virtual hand is in contact with it, which makes capturing a distant virtual item inefficient; since the virtual hand can capture virtual items through the ray extending along the front of the finger, the function of capturing distant virtual items in the VR system is realized.
  • in addition, since the virtual reality host determines whether to switch hand objects according to whether the ray and the virtual item intersect, without requiring an operation instruction to be input through the input device, the hand display method in this embodiment can be applied to most types of input devices, which expands the application range of the hand display method.
  • optionally, after step 203, when the virtual reality host receives a placement instruction sent by the input device, the first hand object is displayed through the head mounted display.
  • the placement instruction is generated by the input device according to the received user-triggered placement operation.
  • the operation mode of the placement operation is different from the operation mode of the selection operation.
  • the placement operation may be an operation of a physical button and/or a virtual button set by the user on the input device, or an operation of inputting a voice message by the user. This example does not limit this.
  • optionally, the placement instruction is used to instruct the virtual reality host to place the virtual item held by the third hand object at the position where the virtual item is vertically projected in the virtual environment; that is, the placement process displayed by the head mounted display is a process in which the virtual item falls vertically.
  • for example, the virtual reality host displays the third hand object 1001 before the placement instruction is received, and the third hand object 1001 holds the "star" in the hand; when the virtual reality host receives the placement instruction, the "star" falls vertically to the position 1002, and the first hand object 1003 is displayed.
  • alternatively, the placement instruction is used to instruct the virtual reality host to place the virtual item held by the third hand object at the position it occupied before being selected; that is, the placement process displayed by the head mounted display is a process of throwing the virtual item back to its original position.
  • for example, the virtual reality host displays the third hand object 1101 before receiving the placement instruction, and the third hand object 1101 holds the "star" in the hand; when the virtual reality host receives the placement instruction, the "star" is thrown back to the original position 1102, and the first hand object 1103 is displayed.
  • the input device may receive a user-triggered user operation on a predetermined button, the user operation including one of a pressing operation and a releasing operation.
  • the predetermined button is a physical button or a virtual button that is disposed on the input device, which is not limited in this implementation.
  • when the user operation is a pressing operation, a pressing instruction generated according to the pressing operation is sent to the virtual reality host, the pressing instruction carrying the identifier of the predetermined button.
  • when the virtual reality host receives the pressing instruction corresponding to the predetermined button, the fourth hand object is displayed, and the fourth hand object is a posture animation in which the finger corresponding to the predetermined button is in a bent state.
  • the fourth hand object corresponding to the identifier of each predetermined button is prestored in the virtual reality host, so that, according to the identifier of the predetermined button carried in the received pressing instruction, the fourth hand object corresponding to the identifier of the predetermined button can be determined and displayed.
  • for example, if the index finger of the user presses the predetermined button 122 corresponding to the position of the index finger on the input device 121, the input device generates a pressing instruction and sends it to the virtual reality host 123, the pressing instruction carrying the identifier of the predetermined button 122; after receiving the pressing instruction corresponding to the predetermined button 122, the virtual reality host 123 displays, through the head mounted display, the fourth hand object 124 corresponding to the identifier of the predetermined button 122.
  • while the head mounted display shows the fourth hand object, when the user performs a release operation on the predetermined button, a release instruction generated according to the release operation is sent to the virtual reality host, the release instruction carrying the identifier of the predetermined button.
  • when the virtual reality host receives the release instruction corresponding to the predetermined button, the fifth hand object is displayed, and the fifth hand object is a hand object in which the finger corresponding to the predetermined button is in a stretched state.
  • the fifth hand object corresponding to the identifier of each predetermined button is prestored in the virtual reality host, so that, according to the identifier of the predetermined button carried in the received release instruction, the fifth hand object corresponding to the identifier of the predetermined button can be determined and displayed.
  • for example, when the predetermined button 132 is released, the input device generates a release instruction and sends it to the virtual reality host 133, the release instruction carrying the identifier of the predetermined button 132; after receiving the release instruction corresponding to the predetermined button 132, the virtual reality host 133 displays, through the head mounted display, the fifth hand object 134 corresponding to the identifier of the predetermined button 132.
  • the fifth hand object may be the same as the first hand type object, or may be different from the first hand type object, which is not limited in this embodiment.
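  • The prestored lookup described above can be sketched as a table keyed by the predetermined button identifier, with one entry for the pressed state (fourth hand object) and one for the released state (fifth hand object); the identifiers and animation names below are assumptions.

```python
HAND_OBJECTS_BY_BUTTON = {
    "index_button": {"pressed": "fourth_index_bent", "released": "fifth_index_stretched"},
    "thumb_button": {"pressed": "fourth_thumb_bent", "released": "fifth_thumb_stretched"},
}

def on_button_instruction(display, button_id, state):
    """state is 'pressed' or 'released'; display the matching hand object."""
    entry = HAND_OBJECTS_BY_BUTTON.get(button_id)
    if entry is not None:
        display.show_hand_object(entry[state])
```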
  • the virtual reality host further needs to detect, according to the input signal sent by the input device, whether the first hand object intersects the virtual item. If they intersect, the first hand object is in contact with a virtual item that can interact with the virtual hand; at this time, the virtual reality host controls the head mounted display to directly display the third hand object, skipping the process of displaying the second hand object. If they do not intersect, the first hand object is not in contact with any virtual item that can interact with the virtual hand; at this time, the first hand object remains unchanged, and the virtual reality host continues to receive the input signal sent by the input device; or, step 202 is performed.
  • the virtual reality host may detect whether the first hand object intersects the virtual item when receiving the input signal sent by the input device; or the virtual reality host may detect, at predetermined time intervals, whether the first hand object intersects the virtual item, where the predetermined time interval is longer than the time interval at which the input device sends the input signal. This embodiment does not limit the timing at which the virtual reality host detects whether the first hand object intersects the virtual item.
  • the virtual reality host detects whether the first hand object intersects the virtual item as follows: receiving an input signal sent by the input device, where the input signal is a signal collected according to the motion of the real hand corresponding to the first hand object in the real environment; determining, according to the input signal, the hand position of the first hand object in the virtual environment; detecting whether the hand position overlaps with the item position of the virtual item in the virtual environment; and if the hand position overlaps with the item position, determining that the first hand object intersects the virtual item.
  • Detecting whether the hand position overlaps with the item position of the virtual item in the virtual environment means: detecting whether there is an intersection between the coordinate set corresponding to the hand position and the coordinate set corresponding to the item position.
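  • Since the direct-contact test above is defined as an intersection of coordinate sets, the sketch below quantizes both coordinate sets to a grid so that continuous positions can be compared as sets; the grid cell size is an assumption.

```python
def positions_overlap(hand_coords, item_coords, cell=0.01):
    """hand_coords and item_coords are iterables of (x, y, z) points; they
    overlap when at least one quantized point is shared."""
    def quantize(points):
        return {tuple(round(c / cell) for c in p) for p in points}
    return not quantize(hand_coords).isdisjoint(quantize(item_coords))
```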
  • the head mounted display displays the third hand object 1403.
  • after the VR system starts up and displays the first hand object, there are three hand object switching logics: first, when the virtual hand touches a virtual item in the virtual environment, the first hand object switches to the third hand object; second, when the ray intersects a virtual item in the virtual environment, the first hand object switches to the second hand object; third, when the virtual hand neither touches nor intersects a virtual item in the virtual environment, the first hand object remains unchanged.
  • the head-mounted display needs to determine whether to switch the hand-shaped object through the virtual reality host before displaying the virtual hand in each frame of the virtual environment.
  • the virtual reality host is pre-set with three determination priorities of the switching logic, and sequentially determines whether to switch the hand object according to the order of priority from high to low. This embodiment does not limit the manner in which the priority is set.
  • FIG. 15 is a flowchart of a hand display method for a virtual reality scene provided by another embodiment of the present application. This embodiment is illustrated by using the hand display method in the VR system shown in FIG. 1. Assume that the preset priority in the virtual reality host is: the first switching logic has a higher priority than the second, which has a higher priority than the third. After step 201, the method further includes the following steps:
  • step 1501, it is detected whether the current hand object intersects with the virtual item. If yes, step 1502 is performed; if not, step 1505 is performed.
  • the virtual reality host detects whether the coordinates of the current hand object in the three-dimensional virtual space intersect with the coordinates of the virtual item in the three-dimensional virtual space; if there is an intersection, the current hand object intersects the virtual item; if there is no intersection, the current hand object does not intersect the virtual item.
  • the current hand object may be the default first hand object, or may be the hand object in the previous frame of the virtual environment, which is not limited in this embodiment.
  • Step 1502 it is detected whether a selection instruction is received, and if yes, step 1503 is performed; if not, step 1504 is performed.
  • Step 1503 displaying a third hand object.
  • Step 1504 it is detected whether a placement instruction is received, and if yes, step 1514 is performed; if not, step 1505 is performed.
  • step 1505 if the current hand object is the first hand object or the second hand object, it is detected whether the ray extending along the front of the finger in the current hand object intersects with the virtual item, and if yes, step 1506 is performed; If no, go to step 1510.
  • the virtual reality host detects whether the coordinates of the ray in the first hand object or the second hand object in the three-dimensional virtual space intersect with the coordinates of the virtual item in the three-dimensional virtual space; if there is an intersection, the ray extending forward along the finger of the current hand object intersects the virtual item; if there is no intersection, the ray extending forward along the finger of the current hand object does not intersect the virtual item.
  • if the current hand object is the first hand object, the ray is hidden; if the current hand object is the second hand object, the ray is explicitly displayed.
  • Step 1506 detecting whether a selection instruction is received, and if yes, executing step 1507; if not, executing step 1508.
  • step 1507 the third hand object is displayed, and step 1509 is performed.
  • Step 1508 displaying a second hand object.
  • Step 1509 it is detected whether a placement instruction is received, and if yes, step 1514 is performed; if not, step 1510 is performed.
  • step 1510 it is detected whether a press command is received, and if so, step 1511 is performed; if not, step 1512 is performed.
  • step 1511 the fourth hand object is displayed.
  • step 1512 it is detected whether a release command is received, and if so, step 1514 is performed; if not, step 1513 is performed.
  • step 1513 the fifth hand object is displayed.
  • step 1514, the first hand object is displayed.
  • the virtual reality host needs to perform the above steps when the head mounted display displays each frame of the virtual environment.
  • the head-mounted display can also directly perform step 1501 without performing step 201, which is not limited in this embodiment.
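  • Read as ordinary control flow, the per-frame decision of FIG. 15 amounts to a priority-ordered check. The sketch below is only a hedged, simplified reading of steps 1501 to 1514 in Python; the boolean arguments stand in for the collision tests and instruction checks performed by the virtual reality host, and the placement and release branches are condensed:

```python
def hand_object_for_frame(hand_hits_item, ray_hits_item,
                          select, place, press, release):
    """Pick the hand object to display for one frame of the virtual environment.

    Contact switching is checked first, then ray switching, then the plain
    button press/release states; "first" is the default hand object.
    """
    if hand_hits_item:                      # step 1501: virtual hand touches the item
        if select:
            return "third"                  # step 1503: grip the item
        if place:
            return "first"                  # step 1514: put the item down
    if ray_hits_item:                       # step 1505: ray along the finger hits the item
        if select:
            return "third"                  # step 1507: grip via the ray
        return "second"                     # step 1508: show the ray explicitly
    if press:
        return "fourth"                     # step 1511: finger bent
    if release:
        return "fifth"                      # step 1513: finger stretched
    return "first"                          # nothing to switch this frame

print(hand_object_for_frame(False, True, False, False, False, False))  # "second"
```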
  • an object processing method in a virtual scene is further provided.
  • the method can be applied to a hardware environment composed of a server 1602 and a terminal 1604 as shown in FIG. 16.
  • the server 1602 is connected to the terminal 1604 through a network.
  • the network includes but is not limited to a wide area network, a metropolitan area network, or a local area network.
  • the terminal 1604 is not limited to a PC, a mobile phone, a tablet, or the like.
  • the object processing method in the virtual scenario of the embodiment of the present application may be executed by the server 1602, may be executed by the terminal 1604, or may be jointly executed by the server 1602 and the terminal 1604.
  • the object processing method in the virtual scenario of the embodiment of the present application, when executed by the terminal 1604, may also be performed by a client installed on the terminal 1604.
  • server 1602 and terminal 1604 may be collectively referred to as a virtual reality host.
  • FIG. 17 is a flowchart of an object processing method in an optional virtual scenario according to an embodiment of the present application. As shown in FIG. 17, the method may include the following steps:
  • Step S1702 detecting a first operation performed on the first target object in the real scene
  • Step S1704 generating, according to the first operation, at least one first menu object corresponding to the second target object in the virtual scene, where the second target object is a virtual object corresponding to the first target object in the virtual scene;
  • Step S1706 detecting a second operation performed on the first target object, where the second operation is used to indicate that the second target object is moved to a position where the target menu object in the at least one first menu object is located in the virtual scene;
  • Step S1708 performing a target processing operation in the virtual scene in response to the second operation, wherein the target processing operation is a processing operation corresponding to the target menu object, and each of the at least one first menu object corresponds to a processing operation .
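  • Purely as an illustration of how steps S1702 to S1708 fit together, the sketch below renders them as a small, scripted event loop in Python; every event string and menu entry is a hypothetical placeholder, not the method's actual API:

```python
def object_processing(events):
    """Hedged rendering of S1702 (detect first operation), S1704 (generate menu
    objects), S1706 (detect second operation) and S1708 (perform operation)."""
    menu_objects = []
    for event in events:
        if event == "first_operation":                         # S1702
            menu_objects = ["magazine", "rifle", "grenade"]    # S1704
        elif event.startswith("second_operation:"):            # S1706
            target = event.split(":", 1)[1]
            if target in menu_objects:
                print(f"performing target operation for {target}")   # S1708
                menu_objects = []          # the menu disappears after triggering

object_processing(["first_operation", "second_operation:magazine"])
```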
  • Through steps S1702 to S1708, a first operation performed on the first target object is detected; according to the detected first operation, a plurality of first menu objects are generated around the second target object corresponding to the first target object in the virtual scene; a second operation performed on the first target object is detected; and, according to the detected second operation, the second target object in the virtual scene is moved to the position of the target menu object among the first menu objects and the target processing operation is performed in the virtual scene. Operations are thus performed directly with 3D space coordinates, without simulating a mouse that converts 3D space coordinates into 2D screen positions, which solves the technical problem in the related art that locating menu options of a 2D menu panel by emitting rays makes the menu selection operation in the virtual scene complicated, and achieves the technical effect of making menu selection in the virtual scene easier.
  • the first target object may be a device object for controlling the virtual scene in the real scene, for example, the first target object may be a game controller, a remote controller, or the like in the real scene.
  • the first target object may also be referred to as an input device.
  • the user may perform a first operation on the first target object, wherein the first operation may include, but is not limited to, clicking, long pressing, gesture, shaking, and the like.
  • the embodiment of the present application may obtain a control instruction corresponding to the first operation by detecting a first operation performed by the user on the first target object in the real scene, where the control instruction may be used to control the virtual scenario.
  • the user can press a button of the game controller to control the generation of menu options in the virtual game screen.
  • the user can press a button on the remote control to control the playback of the virtual video screen.
  • the embodiment of the present application can detect, in real time, the first operation performed on the first target object in the real scene, so that the first operation can be responded to in a timely manner, the object in the virtual scene can be processed more promptly, and the user experience of the virtual scene is improved.
  • alternatively, the first operation performed on the first target object in the real scene may be detected at intervals, so that the processing resources of the virtual reality host can be saved.
  • the second target object may be a virtual object corresponding to the first target object in the real scene in the virtual scene.
  • the second target object may be a game handle in the virtual scene.
  • the position of the gamepad in the virtual scene may correspond to the position of the gamepad in the real scene.
  • when the gamepad in the real scene moves, the gamepad in the virtual scene may also move with it, and the moving direction and moving distance are the same as those of the game controller in the real scene.
  • the second target object may be a virtual hand in the virtual scene.
  • the position of the virtual hand in the virtual scene may correspond to the position of the game handle in the real scene.
  • when the game handle in the real scene moves, the virtual hand in the virtual scene may also move with it, and the moving direction and moving distance are the same as those of the game controller in the real scene.
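  • A minimal sketch of that position mapping (illustrative Python; it assumes the input device reports 3D controller positions every frame, which is not specified here):

```python
class SecondTargetObject:
    """Virtual object (e.g. a virtual gamepad or virtual hand) that mirrors
    the motion of the real game controller."""

    def __init__(self, position):
        self.position = list(position)            # position in the virtual scene

    def follow(self, real_prev, real_now):
        # Same moving direction and moving distance as the real controller:
        # add the real-world displacement to the virtual position.
        delta = [n - p for p, n in zip(real_prev, real_now)]
        self.position = [c + d for c, d in zip(self.position, delta)]
        return self.position

virtual_hand = SecondTargetObject((0.0, 1.2, 0.5))
print(virtual_hand.follow((0.0, 0.0, 0.0), (0.1, 0.0, -0.05)))  # [0.1, 1.2, 0.45]
```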
  • the embodiment of the present application may respond to the first operation, where the response process may include generating at least one first menu object corresponding to the second target object in the virtual scene.
  • the first menu object may be a virtual menu for controlling the virtual scene. For example, after the user presses the menu control button of the game controller in the real scene, the game controller in the virtual scene also performs the operation of pressing the menu control button, and a menu object can then be generated in the virtual scene for the user to select, so as to achieve the function corresponding to the menu object.
  • the function corresponding to the first menu object is not specifically limited.
  • the first menu object may generate a pull-down menu correspondingly, or may perform a certain action correspondingly, or complete a certain task.
  • the step S1704 in response to the first operation, generating the at least one first menu object corresponding to the second target object in the virtual scene may include the following steps:
  • Step S17042 Acquire a current target scene in the virtual scene when the first operation is detected.
  • Step S17044 Generate at least one first menu object corresponding to the target scene around the second target object in the virtual scene according to the correspondence between the predetermined virtual scene and the menu object.
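  • Steps S17042 and S17044 boil down to a lookup from the current scene type to a predefined set of menu objects; a hedged sketch with hypothetical scene names and menu entries:

```python
# Predetermined correspondence between virtual scenes and menu objects
# (the scene names and entries below are illustrative only).
SCENE_MENUS = {
    "shooting": ["rifle", "pistol", "magazine", "grenade"],
    "fighting": ["punch", "kick", "block", "special"],
}

def first_menu_objects_for(target_scene):
    """S17042/S17044: menu objects to generate around the second target object."""
    return SCENE_MENUS.get(target_scene, [])

print(first_menu_objects_for("shooting"))   # ['rifle', 'pistol', 'magazine', 'grenade']
```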
  • the process of responding to the first operation may include: acquiring the current target scene in the virtual scene when the first operation is received, and then determining, according to the predetermined correspondence between target scenes and menu objects, the menu object corresponding to the target scene; that is, the virtual scene is the target scene at the time the first operation is detected, and the menu object corresponding to the target scene is determined to be the first menu object according to the predetermined correspondence.
  • a first menu object corresponding to the target scene may be generated around the second target object for selection by the user in the virtual scene, so that the user may select a corresponding menu option according to the generated first menu object.
  • menu objects corresponding to different target scenes may be generated in different target scenes.
  • in a shooting game, the generated menu object may be a weapon and equipment selection menu; in a fighting game, the generated menu object may be a skill selection menu.
  • the menu objects corresponding to other target scenes are not illustrated here.
  • the manner of generating the at least one first menu object in the vicinity of the second target object in the virtual scene in response to the first operation is not specifically limited in the embodiment of the present application, and the arrangement of the first menu objects around the second target object may include at least one of the following:
  • generating at least one first menu object at predetermined intervals on a predetermined circumference, where the predetermined circumference is a circle centered at the position of the second target object with the predetermined distance as its radius; the predetermined distance and the predetermined interval may be set according to actual needs, and are not specifically limited herein.
  • generating at least one first menu object in a predetermined arrangement order in a predetermined direction of the second target object, where the predetermined direction includes at least one of upper, lower, left, and right, and the predetermined arrangement order includes at least one of a linear arrangement order, a curve arrangement order, and the like.
  • a plurality of first menu objects may be uniformly arranged around the second target object in the virtual scene according to the predetermined circumference, where the predetermined distance between each first menu object and the second target object is the radius of the predetermined circumference, and adjacent first menu objects are arranged at a predetermined interval.
  • the first menu object may be arranged in a straight line or a curved shape in a direction above, below, to the left, and to the right of the second target object.
  • this scheme arranges a plurality of first menu objects around the second target object, so that while observing the second target object, the user can conveniently control the second target object to move in the direction of the first menu object to be selected, thereby completing the selection of the menu object.
  • the first menu objects may be arranged around the second target object in the form of 3D spheres, or in the form of 3D cubes, or in other different styles, which are not illustrated one by one here.
  • the first menu object is arranged around the second target object in a circular form, and may also be arranged in other forms, and is not illustrated here.
  • the embodiment of the present application may also detect, in real time, the second operation performed by the user on the first target object in the real scene, where The second operation may include, but is not limited to, operations such as moving, sliding, and the like.
  • the embodiment of the present application may further detect, in a real scene, the second operation performed by the user on the first target object.
  • the second operation may be another operation performed after the user performs the first operation on the gamepad; for example, when the first operation is pressing and holding a gamepad button, the second operation is moving the gamepad while keeping the button pressed.
  • the embodiment of the present application may respond to the second operation, which may specifically include controlling the second target object in the virtual scene to move according to the moving direction and moving distance of the first target object in the real scene, so that the second target object can be moved to the position where the target menu object is located in the virtual scene, where the target menu object may be one of the at least one first menu object around the second target object in the virtual scene.
  • the user can control the second target object in the virtual scene to move to a certain target menu object of the plurality of first menu objects under the control of the second operation by performing a second operation on the first target object.
  • in a case where the user needs to select a certain target menu object, the user moves the game handle held in the real scene, and the game handle in the virtual scene also moves in the corresponding direction; by controlling the moving direction of the game controller in the real scene, the user moves the game handle in the virtual scene to the target menu object to be selected, thereby completing the selection of the target menu object.
  • the embodiment of the present application may, in response to the second operation, control the second target object in the virtual scene to move to the target menu object, and trigger the target processing operation corresponding to the target menu object when the second target object reaches it. Each of the at least one first menu object arranged around the second target object in the virtual scene corresponds to a processing operation, and the processing operation corresponding to the target menu object is the target processing operation.
  • the processing operations corresponding to the first menu object may include, but are not limited to, generating a drop-down menu object of the first menu object, implementing a certain function, and the like.
  • for example, the virtual scene is a shooting game environment in which the user controls the virtual game controller to move to the position of the magazine target menu object; in response to the user selecting the magazine target menu object, the target processing operation corresponding to that menu object is performed, controlling the game character in the virtual scene to replace the weapon magazine.
  • the target processing operation may be of multiple types, for example, executing a function corresponding to the menu object, or triggering generation of a pull-down menu of the menu option; the target processing operation also has multiple implementation manners, which are not illustrated one by one here.
  • step S1708 performing the target processing operation in the virtual scenario in response to the second operation may include at least one of the following:
  • At least one second menu object is generated in the virtual scene, wherein the at least one second menu object is a drop down menu object of the target menu object.
  • Switching the first scene in the virtual scene to the second scene such as switching of the game scene.
  • Controlling an operation object in the virtual scene to perform a target task, for example, a game character performing a monster-fighting task.
  • the target processing operation is not limited to the foregoing operations, and the target processing operations may also include other operations, which are not illustrated herein.
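  • Because each first menu object corresponds to exactly one processing operation, the dispatch can be sketched as a simple table; the operation names below are hypothetical and only mirror the kinds of target processing operation listed above:

```python
def open_drop_down(state, item):
    """Generate second menu objects: the drop-down menu of the target object."""
    state["menus"].append([f"{item}-option-{i}" for i in range(3)])

def switch_scene(state, item):
    """Switch the first scene in the virtual scene to a second scene."""
    state["scene"] = item

def run_target_task(state, item):
    """Control the operation object in the virtual scene to perform a task."""
    state["tasks"].append(item)

# Hypothetical correspondence between target menu objects and their operations.
TARGET_OPERATIONS = {"magazine": open_drop_down,
                     "arena": switch_scene,
                     "hunt": run_target_task}

def perform_target_operation(state, target_menu_object):
    TARGET_OPERATIONS[target_menu_object](state, target_menu_object)

state = {"menus": [], "scene": "lobby", "tasks": []}
perform_target_operation(state, "hunt")
print(state)   # {'menus': [], 'scene': 'lobby', 'tasks': ['hunt']}
```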
  • the target processing operation is performed in the virtual scene in response to the second operation, and different target processing operations may be selected according to the user's needs and the functions represented by different first menu objects, so that the menu in the virtual scene can meet a variety of usage needs.
  • the embodiment may further include: detecting a third operation performed on the first target object; and deleting the at least one first menu object in the virtual scene in response to the third operation.
  • the user may also control the first target object to perform a third operation, where the third operation may include, but is not limited to, releasing a button, clicking, long pressing, or moving.
  • the embodiment of the present application may control the second target object in the virtual scene to perform the third operation correspondingly and delete the first menu objects in the virtual scene, that is, the menu content is cancelled in the virtual scene.
  • if the user controls the gamepad in the real scene to perform a third operation, such as pressing a button on the gamepad, shaking a joystick on the gamepad, or releasing a button on the gamepad, the gamepad in the virtual scene also performs a corresponding operation, and the plurality of first menu objects are deleted in the virtual scene, so that the menu content is cancelled in the virtual scene.
  • the embodiment may further include: setting a flag for the target menu object in the virtual scene in response to the second operation, where The tag is used to indicate that the second target object is moved to the location where the target menu object is located.
  • the mark can be set on the target menu object, so that in the process of moving the second target object in the virtual scene to the position of the target menu object, the user can see the target clearly and, guided by the mark, can move to it more easily.
  • the user can control the game handle in the virtual scene to point to a target menu object, and the target menu object will be magnified, illuminated, flashed, or rotated under the effect of the mark.
  • the target menu object will have other expressions under the action of the mark, which will not be exemplified here.
  • the present application also provides an embodiment, which provides an interaction scheme of a hand menu in a virtual reality environment.
  • the application scenario described in this application is in a VR environment.
  • the menu is created using a 2D interface.
  • the reason for this is that, when the final display is a 2D display, the menu is not treated as content that should exist in the game scene but as a connection medium between the user and the game content; creating the menu with a 2D interface allows the 2D menu panel to directly face the display direction of the display (that is, the direction of the player's camera in the virtual world), so that the user can select more quickly and conveniently without affecting the game logic of the virtual world, making the menu relatively independent.
  • when the user interacts with the host, the operation is no longer mapped into the 3D space from a position change on a 2D interface via mouse and keyboard; instead, the user's position in the real 3D space is obtained directly and corresponds directly to a 3D position in the virtual space, so the 2D screen space to 3D virtual space correspondence of the original mouse operation no longer exists.
  • This application primarily describes the performance and logic implementation.
  • each option of the menu appears as a 3D object around the hand according to a certain arrangement algorithm; the user then triggers a menu option through a predefined behavior, the selected menu item triggers the corresponding function, and the entire menu disappears after the trigger.
  • the present application provides a hand 3D object menu in a VR environment.
  • the hand 3D object menu provided by the present application can be used as a shortcut menu in a VR application; in particular, in a specific game scene, each menu option can be truly present in the game scene, so that the menu does not break the user's immersive game experience and the user can quickly select the corresponding function.
  • FIG. 18 is a schematic diagram of an optional hand menu in a virtual reality environment according to an embodiment of the present application.
  • a self-made handle model is shown, which exists in a virtual 3D space of a game environment.
  • the handle in real space corresponds one-to-one in position with the handle in the virtual space.
  • the user controls the handles in the virtual space by controlling the handles in the real space in the hand.
  • the gamepad can be a Vive handle, an Oculus Touch handle, or a corresponding two-handed VR handle.
  • with the handle at any position in the virtual space, when the user presses the menu button of the physical handle (for example, the Menu button on the Vive handle, or the A/X button on the Oculus Touch handle), the menu objects that need to pop up in the virtual space are spawned at the position of the handle and then moved to target positions calculated in advance.
  • the target location may be a relative location around the location of the handle.
  • FIG. 19 is a schematic diagram of another optional hand menu in a virtual reality environment according to an embodiment of the present application.
  • the target positions may take the position of the handle in the virtual space at the instant the button is pressed as the origin; with this origin as the center, a circular plane is established, and a plurality of selection objects are arranged at equal intervals around the circular plane, where the normal vector of the circular plane faces the camera of the virtual world, that is, the normal vector of the circular plane is oriented toward the direction the user is viewing.
  • the handle of the virtual space is a 3D model, and each menu option appearing around the handle is also a 3D model.
  • the handle position of the virtual space corresponds to the handle position of the real space, and the position of the menu option in the virtual space also corresponds to the position of the real space.
  • the user is required to keep the menu key pressed, and if the user releases the menu key, the entire menu interaction process ends.
  • the position of the handle in the virtual space is in one-to-one correspondence with the position of the handle in the real space at any time. Therefore, after these virtual menu options appear, the system is set so that when the user's handle touches one of these virtual menu objects, the function corresponding to the label of that menu option is triggered.
  • the menu function is triggered when the movement position of the handle collides with the menu option.
  • FIG. 20 is a schematic diagram of an optional hand menu in a virtual reality environment according to an embodiment of the present application.
  • it may also be designed such that the size of a menu option becomes larger as the distance between the handle and the menu option becomes larger, and becomes smaller as the distance between the handle and the menu option becomes smaller, so that the user can be guided by this prompt to touch these menu options.
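  • One hedged way to realise that size prompt is to scale each menu option with its distance to the handle, clamped to a fixed range; the distances and scale limits in the Python sketch below are illustrative assumptions:

```python
import math

def option_scale(handle_pos, option_pos, near=0.05, far=0.4,
                 min_scale=0.5, max_scale=1.5):
    """Menu option scale grows with handle distance and shrinks as the handle
    approaches the option, which visually guides the user toward touching it."""
    dist = math.dist(handle_pos, option_pos)
    t = min(max((dist - near) / (far - near), 0.0), 1.0)   # 0 = very near, 1 = far
    return min_scale + t * (max_scale - min_scale)

print(round(option_scale((0, 0, 0), (0.05, 0, 0)), 2))  # near -> 0.5 (small)
print(round(option_scale((0, 0, 0), (0.40, 0, 0)), 2))  # far  -> 1.5 (large)
```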
  • the function corresponding to the menu option is triggered; the function may be the preset game logic together with closing the menu, or may be opening the next-level menu.
  • FIG. 21 is a schematic diagram of an optional menu control logic according to an embodiment of the present application.
  • [Single Layer Menu Logic] is entered; the corresponding menu logic is executed according to [Single Layer Menu Logic], and [Feedback] is performed on the result of the execution. Then, according to the result of [Feedback], either [Menu Disappear] is executed or [Secondary Menu] is enabled. If [Secondary Menu] is executed, the flow returns to [Single Layer Menu Logic] to execute the secondary menu logic.
  • the manner of opening a menu interaction may include [Menu Trigger] and [Secondary Menu]: [Menu Trigger] indicates that the user presses the menu button; [Secondary Menu] indicates that the menu option triggered by the user further opens new secondary menu options, in which case the previous menu options disappear to complete the previous menu interaction, while generating the new menu options opens the current menu interaction.
  • the manner of closing a menu interaction may include [Menu Trigger] and [Menu Disappear]: [Menu Trigger] indicates that the user moves the handle to touch one of the menu options; in this case, whether the next-level menu is opened or the preset logic of the menu option is executed, the first step is to destroy all the menu options of this interaction.
  • [Menu Disappear] means that the user did not collide with any of the options in this menu interaction, but released the menu button, which will trigger the end of the current [single layer menu logic] behavior.
  • the [single layer menu logic] includes an initialization phase and an execution phase.
  • determining the content of the hand menu to be popped up according to the current game environment may be implemented by storing a variable in the hand menu to identify the type of the current menu, and predefining, for each type, the hand menu that needs to pop up. When the user presses the menu button, the current game environment is checked, and the value of this menu type variable is determined and used as the parameter for creating the current menu options, thereby implementing [Menu Content Generation].
  • the second step of initialization is [determine the menu space position], and the implementation method may be based on the current [handle position], and all the menu options are arranged by the algorithm around the handle position at the moment of pressing the menu.
  • the algorithm may be: taking the position of the handle in the virtual space at the instant the button is pressed as the origin; using the origin as the center of a circle with a radius of 20 cm, establishing a circular plane; and arranging a plurality of selection objects at equal intervals around the circular plane, where the normal vector of the circular plane faces the camera of the virtual world, that is, the normal vector of the circular plane faces the direction the user is viewing. The collision body and location of each menu option are also stored.
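  • A hedged sketch of that arrangement step in Python: the 20 cm radius and the equal angular spacing follow the description above, while the choice of basis vectors for the camera-facing plane is an illustrative assumption:

```python
import math

def arrange_menu_options(origin, n_options, radius=0.2,
                         right=(1.0, 0.0, 0.0), up=(0.0, 1.0, 0.0)):
    """Place n_options at equal intervals on a circle of `radius` metres around
    `origin`, the handle position at the instant the menu button was pressed.

    `right` and `up` are unit vectors spanning the circular plane; choosing
    them perpendicular to the camera's view direction makes the plane's
    normal vector face the camera of the virtual world.
    """
    positions = []
    for i in range(n_options):
        angle = 2.0 * math.pi * i / n_options
        positions.append(tuple(
            origin[axis]
            + radius * math.cos(angle) * right[axis]
            + radius * math.sin(angle) * up[axis]
            for axis in range(3)))
    return positions

for pos in arrange_menu_options((0.0, 1.2, 0.5), 4):
    print(tuple(round(c, 2) for c in pos))
```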
  • the logic to be executed for each frame of the virtual space is: obtain the [handle position] of the current virtual handle, and then determine whether the [handle position] of the current virtual handle satisfies the end condition; if the end condition is not met, this step is repeated.
  • the end condition may include the following situations: the user releases the menu button; or the [handle position] of the current virtual handle collides with any one of the menu options. As soon as either condition is met, the current menu interaction is ended and the [single layer menu logic] is completed.
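  • The execution phase can then be summarised as a per-frame check against those two end conditions; a minimal, hedged Python sketch with a hypothetical distance-based hit test standing in for the stored collision bodies:

```python
import math

def menu_frame(handle_pos, option_positions, menu_button_down, hit_radius=0.05):
    """One frame of the [single layer menu logic] execution phase.

    Returns ("closed", None) if the menu key was released without a hit,
    ("hit", index) if the handle collides with a menu option, or
    ("continue", None) if neither end condition is met yet.
    """
    if not menu_button_down:                      # end condition: menu key released
        return ("closed", None)
    for i, pos in enumerate(option_positions):    # end condition: handle hits an option
        if math.dist(handle_pos, pos) <= hit_radius:
            return ("hit", i)
    return ("continue", None)

options = [(0.2, 1.2, 0.5), (-0.2, 1.2, 0.5)]
print(menu_frame((0.19, 1.2, 0.5), options, menu_button_down=True))   # ('hit', 0)
print(menu_frame((0.0, 1.0, 0.5), options, menu_button_down=False))   # ('closed', None)
```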
  • the above embodiment of the present application allows the user to touch virtual objects through the virtual hand and receive visual, auditory, and tactile feedback, so that the user's experience in the VR environment feels more real.
  • the user can quickly select a menu in the VR environment.
  • the solution provided by the present application can be supported.
  • the menu button can be adjusted at will.
  • the algorithm for the location of the menu option may be various, and the algorithm in the above embodiment may be used to more closely match the user's selection habit.
  • the scheme in which the button is always pressed to display the menu may be implemented in other manners. For example, pressing the button starts to perform menu interaction, and pressing the button again ends the menu interaction.
  • the menu reaction animation during the movement of the handle is also diverse, and will not be repeated here.
  • the method according to the above embodiments can be implemented by means of software plus a necessary general hardware platform, and of course by hardware, but in many cases the former is a better implementation.
  • the part of the technical solution of the present application that is essential, or that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), including a plurality of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the object processing method in a virtual scene according to the embodiments of the present application.
  • FIG. 22 shows a block diagram of a hand type display device for use in a virtual reality scene according to an embodiment of the present application.
  • the hand-type display device used in the virtual reality scene can be implemented as a whole or a part of the VR device by software, hardware or a combination of the two.
  • the hand display device used in the virtual reality scene includes: a first display module 2210, a second display module 2220, and a third display module 2230.
  • the first display module 2210 is configured to implement the display function related to step 201;
  • the second display module 2220 is configured to implement the display function related to step 202;
  • the third display module 2230 is configured to implement the display function related to step 203.
  • the device further includes: a first receiving module, a first determining module, a second determining module, a first detecting module, and a third determining module.
  • a first receiving module configured to receive an input signal sent by the input device, where the input signal is a signal collected according to a motion situation of a real hand corresponding to the first hand object in a real environment;
  • a first determining module configured to determine, according to the input signal, a hand shape position of the first hand object in the virtual environment
  • a second determining module configured to determine a ray position of the ray in the virtual environment according to the hand position
  • a first detecting module configured to detect whether a ray position overlaps with an item position of the virtual item in the virtual environment
  • a third determining module configured to determine that the ray intersects the virtual item when the ray position overlaps with the item position.
  • the third display module 2230 is further configured to display the third hand object when the first hand object intersects the virtual item and receives the selection instruction.
  • the device further includes: a second receiving module, a fourth determining module, a second detecting module, and a fifth determining module.
  • a second receiving module configured to receive an input signal sent by the input device, where the input signal is a signal collected according to a motion situation of a real hand corresponding to the first hand object in a real environment;
  • a fourth determining module configured to determine, according to the input signal, a hand shape position of the first hand object in the virtual environment
  • a second detecting module configured to detect whether the hand position overlaps with the position of the item of the virtual item in the virtual environment
  • the fifth determining module is configured to determine that the first hand object intersects the virtual item when the hand position overlaps with the item position.
  • the third display module 2230 is further configured to: display, according to the type of the virtual item, a third hand object corresponding to the type.
  • the device further includes: a fourth display module.
  • a fourth display module configured to display the virtual item in a preset display manner when determining that the ray intersects the virtual item according to the input signal sent by the input device.
  • the first display module 2210 is further configured to: when the placement instruction is received, display the first hand object.
  • the device further includes: a fifth display module.
  • the fifth display module is configured to display a fourth hand object when the pressing instruction corresponding to the predetermined button is received, where the fourth hand object is a hand object in which the finger corresponding to the predetermined button is in a curved state.
  • the device further includes: a sixth display module.
  • the sixth display module is configured to display a fifth hand object when the release command corresponding to the predetermined button is received, and the fifth hand object is a hand object in which the finger corresponding to the predetermined button is in a stretch state.
  • FIG. 23 is a schematic diagram of an object processing apparatus in an optional virtual scenario according to an embodiment of the present application. As shown in FIG. 23, the apparatus may include:
  • a first detecting unit 231, configured to detect a first operation performed on the first target object in the real scene;
  • a first response unit 233, configured to generate, in response to the first operation, at least one first menu object corresponding to the second target object in the virtual scene, where the second target object is a virtual object corresponding to the first target object in the virtual scene;
  • a second detecting unit 235, configured to detect a second operation performed on the first target object, where the second operation is used to indicate that the second target object is moved, in the virtual scene, to the position of the target menu object in the at least one first menu object;
  • a second response unit 237, configured to perform the target processing operation in the virtual scene in response to the second operation, where the target processing operation is the processing operation corresponding to the target menu object, and each of the at least one first menu object corresponds to a processing operation.
  • first detecting unit 231 in this embodiment may be used to perform step S1702 in the embodiment of the present application.
  • the first response unit 233 in this embodiment may be used to perform step S1704 in the embodiment of the present application.
  • the second detecting unit 235 in this embodiment may be used to perform step S1706 in the embodiment of the present application.
  • the second response unit 237 in this embodiment may be used to perform step S1708 in the embodiment of the present application.
  • the foregoing modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the contents disclosed in the foregoing embodiments. It should be noted that the foregoing module may be implemented in a hardware environment as shown in FIG. 16 as a part of the device, and may be implemented by software or by hardware.
  • the embodiment may further include: a third detecting unit 238, configured to detect a third operation performed on the first target object after the at least one first menu object corresponding to the second target object is generated in the virtual scene in response to the first operation; and a third response unit 239, configured to delete the at least one first menu object in the virtual scene in response to the third operation.
  • the embodiment may further include: a fourth response unit 236, configured to set, in response to the second operation and before the target processing operation is performed in the virtual scene, a mark for the target menu object in the virtual scene, where the mark is used to indicate that the second target object is moved to the location where the target menu object is located.
  • the first response unit 233 may include: an obtaining module 2331, configured to acquire the current target scene in the virtual scene when the first operation is detected; and a first generating module 2332, configured to generate, according to the correspondence between the predetermined virtual scene and the menu object, at least one first menu object corresponding to the target scene around the second target object in the virtual scene.
  • the first response unit 233 may include at least one of the following modules: a second generation module 2333, configured to generate at least one first menu object at predetermined intervals on a predetermined circumference, where the predetermined circumference is a circle centered at the position of the second target object with a predetermined distance as its radius; and a third generation module 2334, configured to generate at least one first menu object in a predetermined arrangement order in a predetermined direction of the second target object, where the predetermined direction includes at least one of upper, lower, left, and right, and the predetermined arrangement order includes at least one of a linear arrangement order and a curve arrangement order.
  • the second response unit 237 may include at least one of the following modules: a fourth generation module 2351, configured to generate at least one second menu object in the virtual scene, where the at least one second menu object is a drop-down menu object of the target menu object; a switching module 2352, configured to switch the first scene in the virtual scene to a second scene; a setting module 2353, configured to set an attribute of the operation object in the virtual scene as the target attribute; and a control module 2354, configured to control the operation object in the virtual scene to perform the target task.
  • through the above modules, a first operation performed on the first target object is detected; a plurality of first menu objects are generated, according to the detected first operation, around the second target object corresponding to the first target object in the virtual scene; a second operation performed on the first target object is detected; and, according to the detected second operation, the second target object in the virtual scene is moved to the position of the target menu object among the first menu objects and the target processing operation is performed in the virtual scene. Operations are thus performed directly with 3D space coordinates, without simulating a mouse that converts 3D space coordinates into 2D screen positions, which solves the technical problem in the related art that locating menu options of a 2D menu panel in the virtual scene by emitting rays complicates the menu selection operation, thereby achieving the technical effect of making menu selection in the virtual scene easier.
  • FIG. 29 is a schematic structural diagram of a VR system according to an embodiment of the present application.
  • the VR system includes a head mounted display 120, a virtual reality host 140, and an input device 160.
  • the head mounted display 120 is a display for wearing an image display on the user's head.
  • the head mounted display 120 is electrically coupled to the virtual reality host 140 via a flexible circuit board or hardware interface.
  • the virtual reality host 140 is typically integrated within the head mounted display 120.
  • the virtual reality host 140 includes a processor 142 and a memory 144.
  • Memory 144 may be a volatile or nonvolatile, removable or non-removable medium, such as a RAM or ROM, implemented by any method or technology for storing information such as computer readable instructions, data structures, program modules, or other data.
  • the memory 144 stores one or more program instructions, including instructions for implementing the hand display method for the virtual reality scene provided by the foregoing method embodiments, or instructions for implementing the object processing method in the virtual scene provided by the foregoing method embodiments.
  • the processor 142 is configured to execute the instructions in the memory 144 to implement the hand display method for the virtual reality scene provided by the foregoing method embodiments, or to implement the object processing method in the virtual scene provided by the foregoing method embodiments.
  • the virtual reality host 140 is connected to the input device 160 via a cable, Bluetooth connection, or Wi-Fi (Wireless-Fidelity) connection.
  • the input device 160 is an input peripheral such as a somatosensory glove, a somatosensory handle, a remote controller, a treadmill, a mouse, a keyboard, and a human eye focusing device.
  • the present application provides a computer readable storage medium having at least one instruction stored therein, the at least one instruction being loaded and executed by the processor to implement the hand display method for use in a virtual reality scene provided by the foregoing method embodiments.
  • the present application also provides a computer program product that, when executed on a computer, causes the computer to execute the hand display method for use in a virtual reality scenario provided by the various method embodiments described above.
  • a terminal for implementing an object processing method in the above virtual scenario is further provided.
  • FIG. 30 is a structural block diagram of a terminal according to an embodiment of the present application.
  • the terminal may include: one or more (only one shown in the figure) processor 3001, memory 3003, and transmission device 3005.
  • the terminal may further include an input and output device 3007.
  • the memory 3003 can be used to store the software program and the module, such as the object processing method and the program instruction/module corresponding to the device in the virtual scene in the embodiment of the present application; or the hand display method and device used in the virtual reality scene Corresponding program instructions/modules.
  • the processor 3001 executes various functional applications and data processing by executing the software programs and modules stored in the memory 3003, that is, implementing the object processing method in the virtual scene described above, or the hand display method for use in a virtual reality scene.
  • Memory 3003 can include high speed random access memory, and can also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, memory 3003 can further include memory remotely located relative to processor 3001, which can be connected to the terminal over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the above-described transmission device 3005 is for receiving or transmitting data via a network.
  • Specific examples of the above network may include a wired network and a wireless network.
  • the transmission device 3005 includes a Network Interface Controller (NIC) that can be connected to other network devices and routers via a network cable to communicate with the Internet or a local area network.
  • the transmission device 3005 is a Radio Frequency (RF) module for communicating with the Internet wirelessly.
  • the memory 3003 is used to store an application.
  • the processor 3001 may invoke an application stored in the memory 3003 to perform the following steps: detecting a first operation performed on the first target object in the real scene; generating, in response to the first operation, at least one first menu object corresponding to the second target object in the virtual scene, where the second target object is a virtual object corresponding to the first target object in the virtual scene; detecting a second operation performed on the first target object, where the second operation is used to indicate that the second target object is moved, in the virtual scene, to the position of the target menu object in the at least one first menu object; and performing a target processing operation in the virtual scene in response to the second operation, where the target processing operation is the processing operation corresponding to the target menu object, and each of the at least one first menu object corresponds to a processing operation.
  • the processor 3001 is further configured to perform the steps of: detecting a third operation performed on the first target object; deleting the at least one first menu object in the virtual scene in response to the third operation.
  • the processor 3001 is further configured to perform the step of setting a mark for the target menu object in the virtual scene in response to the second operation, wherein the mark is used to indicate that the second target object moves to a position where the target menu object is located.
  • the processor 3001 is further configured to: obtain the current target scene in the virtual scene when the first operation is detected; and generate, around the second target object in the virtual scene, at least one first menu object corresponding to the target scene according to the correspondence between the predetermined virtual scene and the menu object.
  • the processor 3001 is further configured to: generate at least one first menu object at predetermined intervals on a predetermined circumference, where the predetermined circumference is a circle centered at the position of the second target object with a predetermined distance as its radius; and generate at least one first menu object in a predetermined arrangement order in a predetermined direction of the second target object, where the predetermined direction includes at least one of upper, lower, left, and right, and the predetermined arrangement order includes at least one of a linear arrangement order and a curve arrangement order.
  • the processor 3001 is further configured to: generate at least one second menu object in the virtual scene, where the at least one second menu object is a drop-down menu object of the target menu object; and switch the first scene in the virtual scene to a second scenario; setting an attribute of the operation object in the virtual scene as the target attribute; and controlling the operation object in the virtual scene to perform the target task.
  • An embodiment of the present application provides an object processing solution in a virtual scene. A first operation performed on the first target object is detected; a plurality of first menu objects are then generated, according to the detected first operation, around the second target object corresponding to the first target object in the virtual scene; a second operation performed on the first target object is detected; and, according to the detected second operation, the second target object in the virtual scene is moved to the position of the target menu object among the first menu objects and the target processing operation is performed in the virtual scene. Operations are thus performed directly with 3D space coordinates, without simulating a mouse that converts 3D space coordinates into 2D screen positions, which solves the technical problem in the related art that locating menu options of a 2D menu panel by emitting rays complicates the menu selection operation in the virtual scene, thereby achieving the technical effect of making menu selection in the virtual scene easier.
  • the structure shown in FIG. 30 is only illustrative, and the terminal can be a smart phone (such as an Android mobile phone or an iOS mobile phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or other terminal equipment.
  • Fig. 30 does not limit the structure of the above electronic device.
  • the terminal may also include more or fewer components (such as a network interface, display device, etc.) than shown in FIG. 30, or have a different configuration than that shown in FIG.
  • Embodiments of the present application also provide a storage medium.
  • the foregoing storage medium may be used to store program code for executing an object processing method in a virtual scene.
  • At least one instruction is stored in the storage medium, and the at least one instruction is loaded and executed by the processor to implement an object processing method in a virtual scene provided by each of the foregoing method embodiments.
  • the present application also provides a computer program product, which, when run on a computer, causes the computer to execute the object processing method in the virtual scene provided by the foregoing various method embodiments.
  • the foregoing storage medium may be located on at least one of the plurality of network devices in the network shown in FIG. 16.
  • the storage medium is arranged to store program code for performing the following steps: detecting a first operation performed on the first target object in the real scene; generating, in response to the first operation, at least one first menu object corresponding to the second target object in the virtual scene; detecting a second operation performed on the first target object; and performing a target processing operation in the virtual scene in response to the second operation.
  • the storage medium is further arranged to store program code for performing the steps of: detecting a third operation performed on the first target object; deleting the at least one first menu object in the virtual scene in response to the third operation.
  • the storage medium is further configured to store program code for performing a step of: setting a flag for the target menu object in the virtual scene in response to the second operation, wherein the flag is for indicating that the second target object is moved to the target menu The location of the object.
  • the storage medium is further configured to store program code for performing the following steps: acquiring the current target scene in the virtual scene when the first operation is detected; and generating, around the second target object in the virtual scene, at least one first menu object corresponding to the target scene according to the correspondence between the predetermined virtual scene and the menu object.
  • the storage medium is further configured to store program code for: generating at least one first menu object at predetermined intervals on a predetermined circumference, where the predetermined circumference is a circle centered at the position of the second target object with a predetermined distance as its radius; and generating at least one first menu object in a predetermined arrangement order in a predetermined direction of the second target object, where the predetermined direction includes at least one of upper, lower, left, and right, and the predetermined arrangement order includes at least one of a linear arrangement order and a curve arrangement order.
  • the storage medium is further configured to store program code for: generating at least one second menu object in the virtual scene, wherein the at least one second menu object is a drop-down menu object of the target menu object;
  • the first scene in the virtual scene is switched to the second scene;
  • the attribute of the operation object in the virtual scene is set as the target attribute; and the operation object in the virtual scene is controlled to execute the target task.
  • For specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments of the object processing method in a virtual scene, and details are not described herein again.
  • The foregoing storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • If the integrated unit in the foregoing embodiments is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in the above computer-readable storage medium.
  • Based on such an understanding, the essence of the technical solution of the present application, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium.
  • The software product includes a number of instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • The disclosed client may be implemented in other manners.
  • The device embodiments described above are merely illustrative.
  • The division of the units is only a logical function division, and there may be other division manners in actual implementation.
  • Multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
  • A person skilled in the art may understand that all or part of the steps of the foregoing embodiments may be implemented by hardware, or by a program instructing related hardware, and the program may be stored in a computer-readable storage medium.
  • The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Cardiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

本申请实施例公开了一种用于虚拟现实场景中的手型显示方法及装置,属于虚拟现实领域。所述方法包括:显示第一手型对象,所述第一手型对象包括有沿手指前方延伸且隐藏显示的射线;当所述射线与虚拟物品相交时,显示第二手型对象,所述第二手型对象包括有沿手指前方延伸且显式显示的射线;当接收到选择指令时,显示第三手型对象,所述第三手型对象是握持所述虚拟物品的手型对象;可以解决在VR系统中只能通过虚拟手与虚拟物品的接触,才能抓取虚拟物品,导致抓取远距离的虚拟物品的效率不高的问题;由于虚拟手可以通过沿手指前方延伸的射线抓取虚拟物品,可以实现VR系统中抓取远距离的虚拟物品的功能。

Description

用于虚拟现实场景中的手型显示方法及装置
本申请实施例要求于2017年04月25日提交中国国家知识产权局、申请号为201710278577.2、发明名称为“用于虚拟现实场景中的手型显示方法及装置”的中国专利申请和2017年04月26日提交中国国家知识产权局、申请号为201710292385.7、发明名称为“虚拟场景中的对象处理方法和装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请实施例中。
技术领域
本申请实施例涉及虚拟现实(Virtual Reality,VR)领域,特别涉及一种用于虚拟现实场景中的手型显示方法及装置。
背景技术
在VR系统提供的虚拟环境中,大多都需要用户的双手操作VR手柄来与虚拟物品产生交互。
在一种典型的VR系统中,VR手柄上提供有与手指对应的按键,在虚拟环境中为用户提供有虚拟手,虚拟手的位置会跟随VR手柄的移动而移动。当手指按下按键时,虚拟环境中虚拟手的手指收起呈弯曲状态;当手指松开按键时,虚拟环境中虚拟手的手指翘起呈舒展状态。当虚拟场景中的虚拟手与虚拟物品相接触时,若拇指和食指同时按压各自对应的按键时,则虚拟手可以将虚拟环境中的虚拟物品抓取在手心中。
上述交互方式是一种近场交互方式,当用户需要通过虚拟手抓取虚拟环境中距离较远的虚拟物品时,只能通过移动VR手柄将虚拟手移动至与该虚拟物品相接触的位置,从而实现抓取该虚拟物品。
发明内容
本申请实施例提供了一种用于虚拟现实场景中的手型显示方法及装置,可以解决在VR系统中只能通过虚拟手与虚拟物品的接触,才能抓取虚拟物品的问题。所述技术方案如下:
一方面,提供了一种用于虚拟现实场景中的手型显示方法,用于虚拟现实主机中,所述方法包括:
显示第一手型对象,所述第一手型对象包括有沿手指前方延伸且隐藏显示的射线,所述第一手型对象是指虚拟手未握持虚拟物品或未指示待握持的虚拟物品时的姿势动画,所述虚拟物品是所述虚拟手能拿起、握持、放下的虚拟物品;
当根据输入设备发送的输入信号确定所述射线与所述虚拟物品相交时,显示第二手型对象,所述第二手型对象包括有沿手指前方延伸且显式显示的射线;
当接收到选择指令时,显示第三手型对象,所述第三手型对象是握持所述虚拟物品的手型对象。
另一方面,提供了一种虚拟场景中的对象处理方法,用于虚拟现实主机中,所述方法包括:
检测对真实场景中的第一目标对象执行的第一操作;
响应于所述第一操作在虚拟场景中生成第二目标对象对应的至少一个第一菜单对象,其中,所述第二目标对象为所述第一目标对象在所述虚拟场景中所对应的虚拟对象;
检测对所述第一目标对象执行的第二操作,其中,所述第二操作用于指示在所述虚拟场景中将所述第二目标对象移动至所述至少一个第一菜单对象中的目标菜单对象所在的位置;
响应于所述第二操作在所述虚拟场景中执行目标处理操作,其中,所述目标处理操作为所述目标菜单对象对应的处理操作,所述至少一个第一菜单对象中的每个第一菜单对象对应一种处理操作。
可选地,在所述响应于所述第一操作在虚拟场景中生成第二目标对象对应的至少一个第一菜单对象之后,所述方法还包括:
检测对所述第一目标对象执行的第三操作;
响应于所述第三操作在所述虚拟场景中删除所述至少一个第一菜单对象。
可选地,在所述响应于所述第二操作在所述虚拟场景中执行目标处理操作之前,所述方法还包括:
响应于所述第二操作在所述虚拟场景中为所述目标菜单对象设置标记,其中,所述标记用于指示所述第二目标对象移动至目标菜单对象所在的位置。
可选地,所述响应于所述第一操作在虚拟场景中生成第二目标对象对应的至少一个第一菜单对象包括:
可选地,获取检测到所述第一操作时所述虚拟场景中当前的目标场景;
按照预定的虚拟场景与菜单对象的对应关系在所述虚拟场景中的所述第二目标对象的周围生成与所述目标场景相对应的所述至少一个第一菜单对象。
可选地,所述响应于所述第一操作在虚拟场景中生成第二目标对象对应的至少一个第一菜单对象包括以下至少之一:
在预定圆周上按照预定间隔生成所述至少一个第一菜单对象,其中,所述预定圆周为以所述第二目标对象所在位置为圆心,预定距离为半径所构成的圆周;
在所述第二目标对象的预定方向上按照预定排列顺序生成所述至少一个第一菜单对象,其中,所述预定方向包括以下至少之一:上方、下方、左方、右方,所述预定排列顺序包括以下至少之一:直线排列顺序、曲线排列顺序。
可选地,所述响应于所述第二操作在所述虚拟场景中执行目标处理操作包括以下至少之一:
在所述虚拟场景中生成至少一个第二菜单对象,其中,所述至少一个第二菜单对象为所述目标菜单对象的下拉菜单对象;
将所述虚拟场景中的第一场景切换至第二场景;
将所述虚拟场景中操作对象的属性设置为目标属性;
控制所述虚拟场景中的操作对象执行目标任务。
另一方面,提供了一种用于虚拟现实场景中的手型显示装置,所述装置包括:
第一显示模块,用于显示第一手型对象,所述第一手型对象包括有沿手指前方延伸且隐藏显示的射线,所述第一手型对象是指虚拟手未握持虚拟物品或未指示待握持的虚拟物 品时的姿势动画,所述虚拟物品是所述虚拟手能拿起、握持、放下的虚拟物品;
第二显示模块,用于当根据输入设备发送的输入信号确定所述射线与所述虚拟物品相交时,显示第二手型对象,所述第二手型对象包括有沿手指前方延伸且显式显示的射线;
第三显示模块,用于当接收到选择指令时,显示第三手型对象,所述第三手型对象是握持所述虚拟物品的手型对象。
另一方面,提供了一种虚拟场景中的对象处理装置,所述装置包括:
第一检测单元,用于检测对真实场景中的第一目标对象执行的第一操作;
第一响应单元,用于响应于所述第一操作在虚拟场景中生成第二目标对象对应的至少一个第一菜单对象,其中,所述第二目标对象为所述第一目标对象在所述虚拟场景中所对应的虚拟对象;
第二检测单元,用于检测对所述第一目标对象执行的第二操作,其中,所述第二操作用于指示在所述虚拟场景中将所述第二目标对象移动至所述至少一个第一菜单对象中的目标菜单对象所在的位置;
第二响应单元,用于响应于所述第二操作在所述虚拟场景中执行目标处理操作,其中,所述目标处理操作为所述目标菜单对象对应的处理操作,所述至少一个第一菜单对象中的每个第一菜单对象对应一种处理操作。
可选地,所述装置还包括:
第三检测单元,用于在所述响应于所述第一操作在虚拟场景中生成第二目标对象对应的至少一个第一菜单对象之后,检测对所述第一目标对象执行的第三操作;
第三响应单元,响应于所述第三操作在所述虚拟场景中删除所述至少一个第一菜单对象。
可选地,所述装置还包括:
第四响应单元,用于在所述响应于所述第二操作在所述虚拟场景中执行目标处理操作之前,响应于所述第二操作在所述虚拟场景中为所述目标菜单对象设置标记,其中,所述标记用于指示所述第二目标对象移动至目标菜单对象所在的位置。
可选地,所述第一响应单元包括:
获取模块,用于获取检测到所述第一操作时所述虚拟场景中当前的目标场景;
第一生成模块,用于按照预定的虚拟场景与菜单对象的对应关系在所述虚拟场景中的所述第二目标对象的周围生成与所述目标场景相对应的所述至少一个第一菜单对象。
可选地,所述第一响应单元包括以下模块中的至少之一:
第二生成模块,用于在预定圆周上按照预定间隔生成所述至少一个第一菜单对象,其中,所述预定圆周为以所述第二目标对象所在位置为圆心,预定距离为半径所构成的圆周;
第三生成模块,用于在所述第二目标对象的预定方向上按照预定排列顺序生成所述至少一个第一菜单对象,其中,所述预定方向包括以下至少之一:上方、下方、左方、右方,所述预定排列顺序包括以下至少之一:直线排列顺序、曲线排列顺序。
可选地,所述第二响应单元包括以下模块中的至少之一:
第四生成模块,用于在所述虚拟场景中生成至少一个第二菜单对象,其中,所述至少一个第二菜单对象为所述目标菜单对象的下拉菜单对象;
切换模块,用于将所述虚拟场景中的第一场景切换至第二场景;
设置模块,用于将所述虚拟场景中操作对象的属性设置为目标属性;
控制模块,用于控制所述虚拟场景中的操作对象执行目标任务。
另一个方面,提供了一种虚拟现实主机,所述虚拟现实主机包括处理器和存储器,所述存储器中存储有至少一条指令,所述至少一条指令由所述处理器加载并执行以实现上述方面所提供的用于虚拟现实场景中的手型显示方法;或者,实现上述方面所提供的虚拟场景中的对象处理方法。
另一个方面,提供了一种计算机可读存储介质,所述存储介质中存储有至少一条指令,所述至少一条指令由所述处理器加载并执行以实现上述方面所提供的用于虚拟现实场景中的手型显示方法;或者,实现上述方面所提供的虚拟场景中的对象处理方法。
本申请实施例提供的技术方案带来的有益效果是:
通过当第一手型对象中沿手指前方延伸且隐藏显示的射线与虚拟环境中的虚拟物品相交时,显示第二手型对象,并显示该第二手型对象中沿手指前方延伸的射线,使得用户可以获知第二手型对象指示的虚拟物品是哪一个;当接收到选择指令时,显示第三手型对象,使得用户获知在虚拟环境中已抓取到该虚拟物品;可以解决在VR系统中只能通过虚拟手与虚拟物品的接触,才能抓取虚拟物品的问题;由于虚拟手可以通过沿手指前方延伸的射线远距离抓取虚拟物品,可以实现VR系统中抓取远距离的虚拟物品的功能。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1A是本申请一个实施例提供的用于虚拟现实场景中的手型显示系统的结构示意图;
图1B是本申请一个实施例提供的用于虚拟现实场景中的输入设备的结构示意图;
图2是本申请一个实施例提供的用于虚拟现实场景中的手型显示方法的流程图;
图3是本申请一个实施例提供的虚拟现实场景中的第一手型对象的示意图;
图4是本申请一个实施例提供的虚拟现实场景中的第二手型对象和虚拟物品的示意图;
图5是本申请一个实施例提供的虚拟现实场景中的第二手型对象切换至第三手型对象的示意图;
图6是本申请一个实施例提供的虚拟现实场景中的第三手型对象的示意图;
图7是本申请一个实施例提供的虚拟现实场景中的第三手型对象的示意图;
图8是本申请一个实施例提供的虚拟现实场景中的第三手型对象的示意图;
图9是本申请一个实施例提供的虚拟现实场景中的第三手型对象的示意图;
图10是本申请一个实施例提供的虚拟现实场景中的第三手型对象切换至第一手型对象的示意图;
图11是本申请一个实施例提供的虚拟现实场景中的第三手型对象切换至第一手型对象的示意图;
图12是本申请一个实施例提供的虚拟现实场景中的第四手型对象的示意图;
图13是本申请一个实施例提供的虚拟现实场景中的第五手型对象的示意图;
图14是本申请一个实施例提供的虚拟现实场景中第一手型对象切换至第三手型对象的示意图;
图15是本申请另一个实施例提供的用于虚拟现实场景中的手型显示方法的流程图;
图16是根据本申请实施例的虚拟场景中的对象处理方法的硬件环境的示意图;
图17是根据本申请实施例的一种可选的虚拟场景中的对象处理方法的流程图;
图18是根据本申请实施例的一种可选的虚拟现实环境下手部菜单的示意图;
图19是根据本申请实施例的另一种可选的虚拟现实环境下手部菜单的示意图;
图20是根据本申请实施例的一种可选的虚拟现实环境下手部菜单的示意图;
图21是根据本申请实施例的一种可选的菜单控制逻辑的示意图;
图22是本申请一个实施例提供的用于虚拟现实场景中的手型显示装置的框图;
图23是根据本申请实施例的一种可选的虚拟场景中的对象处理装置的示意图;
图24是根据本申请实施例的另一种可选的虚拟场景中的对象处理装置的示意图;
图25是根据本申请实施例的另一种可选的虚拟场景中的对象处理装置的示意图;
图26是根据本申请实施例的另一种可选的虚拟场景中的对象处理装置的示意图;
图27是根据本申请实施例的另一种可选的虚拟场景中的对象处理装置的示意图;
图28是根据本申请实施例的另一种可选的虚拟场景中的对象处理装置的示意图;
图29是本申请一个实施例提供的VR系统的框图;
图30是根据本申请实施例的一种终端的结构框图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
首先,在对本申请实施例进行描述的过程中出现的部分名词或者术语适用于如下解释:
VR:虚拟实境(英语:virtual reality,缩写为VR),简称虚拟技术,也称虚拟环境,是利用电脑模拟产生一个三维空间的虚拟世界,提供用户关于视觉等感官的模拟,让用户感觉仿佛身历其境,可以及时、没有限制地观察三维空间内的事物。
Steam:是美国维尔福于2003年9月12日推出的数字发行、数字版权管理及社交系统,它用于数字软件及游戏的发行销售与后续更新,支持Windows、OS X和Linux等操作系统,目前是全球最大的PC数字游戏平台。
Steam VR:是一个功能完整的360°房型空间虚拟现实体验。此开发套件包含了一个头戴式显示器、两个单手持控制器、一个能于空间内同时追踪显示器与控制器的定位系统。
Oculus:是一间美国虚拟实境科技公司,由帕尔默·拉奇与布伦丹·艾瑞比(Brendan Iribe)成立。他们的首件产品Oculus Rift是一款逼真的虚拟实境头戴式显示器。
Oculus Touch:是Oculus Rift的动作捕捉手柄,配合空间定位系统使用,Oculus Touch采用了类似手环的设计,允许摄像机对用户的手部进行追踪,传感器也可以追踪手指运动,同时还为用户带来便利的抓握方式。
请参考图1A,其示出了本申请一个实施例提供的VR系统的结构示意图。该VR系统包括:头戴式显示器120、虚拟现实主机140和输入设备160。
头戴式显示器120是用于佩戴在用户头部进行图像显示的显示器。头戴式显示器120通常包括佩戴部和显示部,佩戴部包括用于将头戴式显示器120佩戴在用户头部的眼镜腿及弹性带,显示部包括左眼显示屏和右眼显示屏。头戴式显示器120能够在左眼显示屏和右眼显示屏显示不同的图像,从而为用户模拟出三维虚拟环境。
可选地,头戴式显示器120上设置有运动传感器,用于捕捉用户的头部动作,以使得虚拟现实主机140改变头戴式显示器120中的虚拟头部的显示画面。
头戴式显示器120通过柔性电路板或硬件接口或数据线或无线网络,与虚拟现实主机140电性相连。
虚拟现实主机140用于建模三维虚拟环境、生成三维虚拟环境所对应的三维显示画面、生成三维虚拟环境中的虚拟物体等。当然,虚拟现实主机140也可以建模二维虚拟环境、生成二维虚拟环境所对应的二维显示画面、生成二维虚拟环境中的虚拟物体;或者,虚拟现实主机140可以建模三维虚拟环境、根据用户的视角位置生成该三维虚拟环境所对应的二维显示画面、生成三维虚拟环境中虚拟物体的二维投影画面等,本实施例对此不作限定。
可选地,虚拟现实主机140可以集成在头戴式显示器120的内部,也可以集成在与头戴式显示器120不同的其它设备中,本实施例对此不作限定。本实施例中,以虚拟现实主机140集成在与头戴式显示器120不同的其它设备中为例进行说明。其中,其它设备可以为台式计算机或服务器等,本实施例对此不作限定。也即,虚拟现实主机140在实际实现时,既可以是头戴式显示器120中的部分组件(软件、硬件或者软硬件结合的组件);或者,也可以是终端;或者,也可以是服务器。
虚拟现实主机140接收输入设备160的输入信号，并根据该输入信号生成头戴式显示器120的显示画面。虚拟现实主机140通常由设置在电路板上的处理器、存储器、图像处理器等电子器件实现。可选地，虚拟现实主机140还包括图像采集装置，用于捕捉用户的头部动作，并根据用户的头部动作改变头戴式显示器120中的虚拟头部的显示画面。
虚拟现实主机140通过线缆、蓝牙连接或Wi-Fi(Wireless-Fidelity,无线保真)连接与输入设备160相连。
输入设备160可以是体感手套、体感手柄、遥控器、跑步机、鼠标、键盘、人眼聚焦设备中的至少一种输入外设。
当然,输入设备160也可以称为其他名称,比如:第一目标对象、第一目标设备等,本实施对此不作限定。
可选地,本实施例中,以输入设备160为需要用户双手操控的设备为例进行说明,比如,输入设备160为体感手套或者体感手柄。
请参考图1B,用户双手操控的输入设备160通常包括一对VR手柄161、162,在现实环境中的左手和右手分别操控一对VR手柄中的一个,比如:左手操控VR手柄161,右手操控VR手柄162。VR手柄161或162中设置有物理按键163和运动传感器164。通常,运动传感器164设置在VR手柄的内部。
物理按键163用于接收用户触发的操作指令。物理按键163设置在每个VR手柄上的位置是根据用户握持该VR手柄时手指所处的位置确定的。比如:图1B中,用户的右手拇指所处的位置设置有物理按键1631、食指所处的位置设置有物理按键1632。
可选地,对于同一根手指作用的区域可以设置有多个物理按键163,比如:在拇指的作 用区域设置了3个物理按键163,这3个物理按键163均由拇指操控;或者,多根手指作用的区域中可以仅设置有一个物理按键163,比如:在无名指和小拇指作用的区域中仅设置了一个物理按键163,这个物理按键163可以由无名指控制,也可以由小拇指控制。本实施例不对每个VR手柄上设置的物理按键的个数作限定。
可选地,物理按键163实现选择功能、查看菜单栏功能和压力检测功能中的至少一种。每种功能可以是由一个物理按键实现的,也可以是由多个物理按键实现的。
比如:VR手柄161上设置有摇杆、扳机键、菜单栏按键和压力检测按键,当菜单栏按键被按下时,显示VR系统提供的各项服务(实现查看菜单栏功能),比如:各种游戏选项或者各种电影选项等;当扳机键被按下时,显示用户选择的游戏(通过单个物理按键实现选择功能),比如:打乒乓球,则显示打乒乓球的虚拟环境;在该虚拟环境中,当摇杆和扳机键同时被按下时,虚拟手选择了虚拟环境中的虚拟乒乓球(通过两个物理按键实现选择功能);当压力检测按键被按下时,根据压力检测按键检测到的压力数据调节虚拟手抓取虚拟乒乓球的力度(实现压力检测功能)。
当然,物理按键163还可以用于实现其它功能,比如:用于实现启动和/或关闭输入设备160的功能的系统按键、用于实现检测用户是否正在握持输入设备160的功能的握持键等,本实施例在此不再一一列出。
可选地,不同VR手柄上设置的物理按键163的作用可以相同,也可以不同,本实施例对此不作限定。比如:VR手柄161上设置有用于实现选择功能的物理按键1632,而VR手柄162上没有设置该物理按键1632。
可选地,上述物理按键163中的部分或全部可实现为通过触摸屏实现的虚拟按键,本实施例对此不作限定。
可选地,当输入设备160为体感手套时,上述物理按键163实现的功能,可以通过设置在体感手套上的物理按键实现,也可以通过操控体感手套形成预设的功能手势来实现,本实施例对此不作限定。
运动传感器164用于采集输入设备160的空间姿态。运动传感器164可以是重力加速度传感器1641、陀螺仪传感器1642或距离传感器1643中的任意一种。
作为运动传感器164的一种,重力加速度传感器1641可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,这样,虚拟现实主机140根据重力加速度传感器1641输出的数据就可以判断出用户的真实手的姿态、移动方向以及移动距离,从而使得头戴式显示器120在显示的虚拟环境中根据虚拟现实主机140判断出的移动方向和移动距离移动虚拟手。
陀螺仪传感器1642可检测各个方向上角速度的大小,检测出输入设备160的旋转动作,这样,虚拟现实主机140根据陀螺仪传感器输出的数据就可以判断出用户的真实手的旋转方向和旋转角度,从而使得头戴式显示器120在显示的虚拟环境中根据虚拟现实主机140判断出的旋转方向和旋转角度旋转虚拟手。可选地,虚拟现实主机140根据陀螺仪传感器输出的数据也可以判断出虚拟手选择的虚拟物品的旋转方向和旋转角度。
距离传感器1643可判断用户的手指距离输入设备160的距离,这样,虚拟现实主机140根据距离传感器1643输出的数据就可以判断出用户的手型对象,从而使得头戴式显示器120在显示的虚拟环境中根据虚拟现实主机140判断出的手型对象显示虚拟手的手型对象。比 如:虚拟现实主机140根据距离传感器判断出的用户的手型对象为“竖起大拇指”,那么,头戴式显示器120显示虚拟手的手型对象也为“竖起大拇指”。
可选地,上述各个类型的运动传感器164的数量可以为一个,也可以为多个,本实施例对此不作限定。
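作为理解上述传感器数据如何作用于虚拟手的一个参考，下面给出一段示意性的 Python 伪码（其中的函数、参数与数据结构均为便于说明而假设的，并非本申请或某个具体 VR SDK 的实际实现），展示根据加速度、角速度与手指距离更新虚拟手的位置、旋转与手型的基本思路：

```python
import numpy as np

def update_virtual_hand(hand, accel, gyro, finger_dists, dt, bend_threshold=0.01):
    """根据运动传感器数据更新虚拟手（示意性实现，非真实 SDK 接口）。

    hand: 包含 position（米）、rotation（欧拉角，弧度）、fingers（弯曲标志列表）的字典
    accel: 三轴加速度 (m/s^2)，假设已去除重力分量
    gyro: 三轴角速度 (rad/s)
    finger_dists: 各手指到输入设备表面的距离（米）
    dt: 距上一帧的时间间隔（秒）
    """
    # 由加速度积分近似得到位移，用于移动虚拟手（实际系统通常还会结合定位系统修正漂移）
    hand["velocity"] = hand.get("velocity", np.zeros(3)) + np.asarray(accel, dtype=float) * dt
    hand["position"] = np.asarray(hand["position"], dtype=float) + hand["velocity"] * dt

    # 由角速度积分得到旋转增量，用于旋转虚拟手
    hand["rotation"] = np.asarray(hand["rotation"], dtype=float) + np.asarray(gyro, dtype=float) * dt

    # 手指距离小于阈值时视为“弯曲”，否则视为“舒展”
    hand["fingers"] = [d < bend_threshold for d in finger_dists]
    return hand
```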
其中,手型对象是用于呈现虚拟手的姿态形状的显示对象。在不同的手型对象中,存在至少一个手指形状和/或手掌形状是不同的。虚拟现实主机140通过在一个完整的网格模型上设置手的骨架,通过该骨架来制作出的预定义的虚拟手的姿势动画。该虚拟手的姿势动画可以是根据三维的网格模型制作出的二维形式的姿势动画,也可以是根据三维的网格模型制作出的三维形式的姿势动画,还可以是根据二维的手部模型制作出的二维形式的姿势动画,本实施例对此不作限定。
可选地,手型对象由一帧固定不动的(静止的)姿势动画构成,也可以由多帧具有动态效果的姿势动画构成。
可选地,输入设备160还可以包括其它组件,比如:用于控制各个组件的处理器165、用于与虚拟现实主机140进行通信的通信组件166等,本实施例对此不作限定。
请参考图2,其示出了本申请一个实施例提供的用于虚拟现实场景中的手型显示方法的流程图。本实施例以该用于虚拟现实场景中的手型显示方法应用于图1所示的VR系统中来举例说明。该方法可以包括如下步骤:
步骤201,显示第一手型对象,第一手型对象包括有沿手指前方延伸且隐藏显示的射线。
第一手型对象是指虚拟手在空闲状态下的姿势动画。即,当虚拟手未握持虚拟物品,或者,虚拟手未指示待握持的虚拟物品时的姿势动画。
可选地,第一手型对象通过五指张开,且各个手指略微自然弯曲的姿势动画来表示;或者,通过握拳的姿势动画来表示,本实施例不对第一手型对象的具体姿势动画作限定。假设第一手型对象如图3中的31所示,根据图3可知,第一手型对象31是虚拟手的五指张开,且各个手指略微自然弯曲的姿势动画。
本实施例中,沿手指前方射出的隐藏显示的射线用于指示虚拟手指在虚拟场景中指示的虚拟物品。
发出射线的手指可以为虚拟场景中的任意一只虚拟手的任意一根手指,本实施例对此不作限定。比如:发出射线的手指为虚拟右手的食指。可选地,该射线可以分别沿着一双虚拟手的一根手指的前方分别射出;或者,该射线可以分别沿着右虚拟手的一根手指的前方分别射出;或者,该射线可以分别沿着一双虚拟手的多根手指的前方分别射出,本实施例对此不作限定。
虚拟物品是指在虚拟环境中能够与虚拟手进行交互的物品,虚拟物品通常是虚拟手能够拿起、握持、放下的虚拟物品,比如:虚拟武器、虚拟水果、虚拟餐具等。
隐藏显示是指该射线在逻辑上是存在的,只是没有通过头戴式显示器显示给用户。
可选地,该射线在显示第一手型对象时进行显式显示。
步骤202,当根据输入设备发送的输入信号确定射线与虚拟物品相交时,显示第二手型对象,第二手型对象包括有沿手指前方延伸且显式显示的射线。
虚拟现实主机在通过头戴式显示器显示了第一手型对象后,需要根据输入设备发送的 输入信号检测射线是否与虚拟物品相交;若相交,则说明第一手型对象前方存在能够与虚拟手进行交互的虚拟物品,显示第二手型对象;若不相交,则说明第一手型对象前方不存在能够与虚拟手进行交互的虚拟物品,此时,保持第一手型对象不变,并继续接收输入设备发送的输入信号。
可选地,虚拟现实主机在接收到输入设备发送输入信号时检测射线是否与虚拟物品相交;或者,虚拟现实主机每隔预定时间间隔检测射线是否与虚拟物品相交,该预定时间间隔比输入设备发送输入信号的时间间隔长,本实施例不对虚拟现实主机检测射线是否与虚拟物品相交的时机此作限定。
虚拟现实主机检测射线是否与虚拟物品相交,包括:虚拟现实主机接收输入设备发送的输入信号;根据输入信号确定第一手型对象在虚拟环境中的手型位置;根据手型位置确定射线在虚拟环境中的射线位置;检测射线位置是否与虚拟物品在虚拟环境中的物品位置相重叠;若射线位置与物品位置相重叠,则确定射线与虚拟物品相交。
其中,输入信号是根据第一手型对象对应的真实手在现实环境中的运动情况所采集的信号,比如输入信号通过VR手柄中的运动传感器所采集的传感器信号,该输入信号用于表示真实手的移动方向、移动距离、旋转角速度、旋转方向、真实手的各个手指与输入设备之间的距离中的至少一种数据。
手型位置是指虚拟手在虚拟环境中对应的坐标集;射线位置是指隐藏显示的射线在虚拟环境中对应的坐标集;物品坐标集是指虚拟物品在虚拟环境中对应的坐标集。
可选地,当虚拟环境为三维环境时,上述坐标集为三维坐标集;当虚拟环境为二维环境时,上述坐标集为二维坐标集。
检测射线位置是否与虚拟物品在虚拟环境中的物品位置相重叠是指:检测射线位置对应的坐标集与物品位置对应的坐标集是否存在交集。
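下面用一段示意性的 Python 片段说明“射线位置与物品位置是否重叠”的一种可能实现：把虚拟物品近似为包围球，检测沿手指前方延伸的射线是否与包围球相交。该片段仅为帮助理解的假设性示例，并非本申请的实际实现：

```python
import numpy as np

def ray_intersects_item(ray_origin, ray_dir, item_center, item_radius, max_len=10.0):
    """检测从手指前方发出的射线是否与虚拟物品（近似为包围球）相交。

    ray_origin: 射线起点，即手型位置中手指指尖的三维坐标
    ray_dir:    射线方向（沿手指前方延伸）
    item_center/item_radius: 虚拟物品包围球的球心与半径
    max_len:    射线长度（可配置，例如 1 米、2 米或更长）
    """
    d = np.asarray(ray_dir, dtype=float)
    d /= np.linalg.norm(d)
    v = np.asarray(item_center, dtype=float) - np.asarray(ray_origin, dtype=float)
    t = float(np.dot(v, d))              # 球心在射线方向上的投影长度
    if t < 0 or t > max_len:             # 物品在射线起点后方，或超出射线长度
        return False
    closest = np.asarray(ray_origin, dtype=float) + t * d
    dist = np.linalg.norm(np.asarray(item_center, dtype=float) - closest)
    return dist <= item_radius           # 射线与包围球相交即认为“射线位置与物品位置重叠”
```

当该函数返回 True 时，即可认为射线与虚拟物品相交，进而由第一手型对象切换至第二手型对象并显式显示射线。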
第二手型对象是指虚拟手在指示某一虚拟物品时的姿势动画。
可选地,第二手型对象可以通过食指伸直,其它手指握拳的姿势动画来表示;或者,也可以通过食指和拇指伸直,其它手指握拳的姿势动画(枪型手势)来表示,本实施例不对第二手型对象的具体姿势动画作限定。假设第二手型对象如图4中的41所示,由图4可知,第二手型对象41是虚拟手的食指伸直,其它手指握拳的姿势动画。
第二手型对象包括沿手指前方延伸且显式显示的射线。其中,显式显示的射线是指头戴式显示器在显示第二手型对象时以用户可见的方式所显示的沿手指前方延伸的射线。
可选地,第二手型对象的手指延伸出的显式显示的射线可以是第一手型对象中沿手指前方延伸且隐藏显示的射线的显式显示形式,即,第二手型对象中的射线与第一手型对象中的射线为同一条射线;或者,第二手型对象中的射线也可以是虚拟现实主机重新生成的,本实施例对此不作限定。
可选地,第二手型对象中延伸出射线的手指与第一手型对象中延伸出射线的手指相同。
可选地,第二手型对象延伸出的射线也与虚拟物品相交,比如:图4中,第二手型对象41延伸出的射线42与虚拟物品43相交。
可选地,当射线与虚拟物品相交时,头戴式显示器不仅从第一手型对象切换至第二手型对象,还会以预设显示方式显示该虚拟物品,该预设显示方式区别于该虚拟物品的原始显示方式。也即,该预设显示方式与原始显示方式不同,用于突出该虚拟物品。其中,预 设显示方式包括:放大显示、以预定颜色显示、以预定形式的轮廓线条显示等,本实施例对此不作限定。比如:在图4中,虚拟物品43以加粗的轮廓线条显示。
可选地,当头戴式显示器显示第二手型对象时,也可以隐藏显示沿手指前方延伸的射线,本实施例对此不作限定。
可选地,上述射线的长度可以是无限长、10米、2米、1米或者其它长度。该射线的长度为可配置的。
步骤203,当接收到选择指令时,显示第三手型对象。
选择指令是输入设备根据接收到的用户触发的选择操作生成的。其中,选择操作可以是用户触发的设置在输入设备上的实体按键和/或虚拟按键的操作,也可以是用户输入语音消息的操作,还可以是体感手套所采集的手指弯曲数据所触发的操作,本实施例对此不作限定。
第三手型对象是握持虚拟物品的手型对象。可选地,第三手型对象不包括沿手指前方延伸的射线。
请参考图5，假设虚拟现实主机在接收到选择指令前，显示第二手型对象51，该第二手型对象51延伸出的射线52与虚拟物品“星星”相交，则在接收到选择指令时，显示第三手型对象53将“星星”握在手中。
可选地,由于第三手型对象反映的是虚拟手抓到虚拟物品时的姿势动画,而在实际场景中,对于不同形状的物品,用户抓取该物品时的手势会有所不同,比如:用户抓取的物品是手枪模型,则用户抓取手枪模型的手势为握抢的手势;又比如:用户抓取的物品是笔,则用户抓取笔的手势为持笔的手势,因此,在虚拟场景中,头戴式显示器会根据虚拟物品的类型,显示与该类型的虚拟物品对应的第三手型对象。
假设当虚拟现实主机识别出虚拟物品为球型类型的虚拟物品时,头戴式显示器显示的与该球形类型对应的第三手型对象如图6中的61所示,根据图6可知,第三手型对象61为五指沿着球壁弯曲的姿势动画。
当虚拟现实主机识别出虚拟物品为枪型类型的虚拟物品时，头戴式显示器显示的与该枪型类型对应的第三手型对象如图7中的71所示，根据图7可知，第三手型对象71为食指勾住扳机，其它手指握持枪体的姿势动画。
当虚拟现实主机识别出虚拟物品为笔型类型的虚拟物品时,头戴式显示器显示的与该笔型类型对应的第三手型对象如图8中的81所示,根据图8可知,第三手型对象81为食指、拇指和中指捏住笔管,其它手指弯曲的姿势动画。
当虚拟现实主机识别出虚拟物品为杆型类型的虚拟物品时,头戴式显示器显示的与该杆型类型对应的第三手型对象如图9中的91所示,根据图9可知,第三手型对象91为各个手指围绕虚拟物品弯曲的姿势动画。
当然,第三手型对象还可以为其它类型的虚拟物品对应的姿势动画,本实施例在此不再一一列举。
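上述“根据虚拟物品的类型显示对应的第三手型对象”可以理解为一张由物品类型到握持姿势动画的查找表。以下 Python 片段仅为示意，类型名与动画资源名均为假设：

```python
# 虚拟物品类型到第三手型对象（握持姿势动画）的映射，资源名仅为示意
GRIP_POSE_BY_ITEM_TYPE = {
    "ball": "pose_grip_sphere",   # 五指沿球壁弯曲
    "gun":  "pose_grip_trigger",  # 食指勾住扳机，其余手指握持枪体
    "pen":  "pose_grip_pinch",    # 食指、拇指和中指捏住笔管
    "rod":  "pose_grip_wrap",     # 各手指围绕杆体弯曲
}

def third_hand_pose(item_type, default_pose="pose_grip_generic"):
    """返回与虚拟物品类型对应的第三手型对象；未知类型时退回通用握持姿势。"""
    return GRIP_POSE_BY_ITEM_TYPE.get(item_type, default_pose)
```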
综上所述,本实施例提供的用于虚拟现实场景中的手型显示方法,通过当第一手型对象中沿手指前方延伸且隐藏显示的射线与虚拟环境中的虚拟物品相交时,显示第二手型对象,并显示该第二手型对象中沿手指前方延伸的射线,使得用户可以获知第二手型对象指示的虚拟物品是哪一个;当接收到选择指令时,显示第三手型对象,使得用户获知在虚拟 环境中已抓取到该虚拟物品;可以解决在VR系统中只能通过虚拟手与虚拟物品的接触,才能抓取虚拟物品,导致抓取远距离的虚拟物品的效率不高的问题;由于虚拟手可以通过沿手指前方延伸的射线抓取虚拟物品,可以实现VR系统中抓取远距离的虚拟物品的功能。
另外,由于虚拟现实主机无需输入设备发送操作指令,根据射线与虚拟物品是否相交就可以确定出是否需要切换不同的手型对象,使得本实施例中的手型显示方法可以适用于大部分类型的输入设备,可以扩大手型显示方法的应用范围。
另外,通过在射线与虚拟物品相交时,显式显示该射线;在显示第一手型对象时,隐藏显示该射线,可以达到既保证了用户获知第二手型对象指示了哪一个虚拟物品,又节省了显示资源的效果。
可选地,在步骤203之后,当虚拟现实主机接收到输入设备发送的放置指令时,通过头戴式显示器显示第一手型对象。
其中,放置指令是输入设备根据接收到的用户触发的放置操作生成的。其中,放置操作的操作方式与选择操作的操作方式不同,放置操作可以是用户触发的设置在输入设备上的实体按键和/或虚拟按键的操作,也可以是用户输入语音消息的操作,本实施例对此不作限定。
在一种情况下,放置指令用于指示虚拟现实主机将第三手型对象握持的虚拟物品,放置到该虚拟物品在虚拟环境中垂直投影的位置,即,头戴式显示器显示的放置过程为虚拟物品垂直下落的过程。
请参考图10，假设虚拟现实主机在接收到放置指令前，显示第三手型对象1001，该第三手型对象1001将“星星”握在手中，则在虚拟现实主机接收到放置指令时，“星星”垂直下落至位置1002，并显示第一手型对象1003。
在另一种情况下,放置指令用于指示虚拟现实主机将第三手型对象握持的虚拟物品,放置到该虚拟物品在被选择之前所处的位置,即,头戴式显示器显示的放置过程为虚拟物品被抛向原来的位置的过程。
请参考图11，假设虚拟现实主机在接收到放置指令前，显示第三手型对象1101，该第三手型对象1101将“星星”握在手中，则在虚拟现实主机接收到放置指令时，“星星”被抛向原来的位置1102，并显示第一手型对象1103。
可选地,当VR系统显示第一手型对象时,输入设备可以接收用户触发的对预定按键的用户操作,该用户操作包括按压操作和松开操作中的一种。其中,预定按键是设置于输入设备上的物理按键或者虚拟按键,本实施对此不作限定。
当输入设备接收到对预定按键的按压操作时,向虚拟现实主机发送根据该按压操作生成的按压指令,该按压指令携带该预定按键的标识。相应地,当虚拟现实主机接收到该预定按键对应的按压指令时,显示第四手型对象,第四手型对象是与预定按键对应的手指呈弯曲状态的姿势动画。
可选地,虚拟现实主机中预存有每个预定按键的标识对应的第四手型对象,这样,根据接收到的按压指令中的预定按键的标识,可以确定出预定按键的标识对应的第四手型对象,从而对该第四手型对象进行显示。
请参考图12,假设用户的食指按压了输入设备121上,与该食指的位置对应的预定按键122,则输入设备生成按压指令,并将该按压指令发送至虚拟现实主机123,该按压指令携带预定按键122的标识;虚拟现实主机123接收到预定按键122对应的按压指令后,通过头戴式显示器显示该预定按键122的标识对应的第四手型对象124。
在头戴式显示器显示了第四手型对象之后,当输入设备接收到对预定按键的松开操作时,向虚拟现实主机发送根据该松开操作生成的松开指令,该松开指令携带该预定按键的标识。相应地,当虚拟现实主机接收到预定按键对应的松开指令时,显示第五手型对象,该第五手型对象是与预定按键对应的手指呈舒展状态的手型对象。
可选地,虚拟现实主机中预存有每个预定按键的标识对应的第五手型对象,这样,根据接收到的按压指令中的预定按键的标识,可以确定出预定按键的标识对应的第五手型对象,从而对该第五手型对象进行显示。
请参考图13,假设用户的食指按压了输入设备131上,与该食指的位置对应的预定按键132之后,松开了该预定按键132;则输入设备生成松开指令,并将该松开指令发送至虚拟现实主机133,该松开指令携带预定按键132的标识;虚拟现实主机133接收到预定按键132对应的松开指令后,通过头戴式显示器显示该预定按键132的标识对应的第五手型对象134。
可选地,第五手型对象可以与第一手型对象相同,也可以与第一手型对象不同,本实施例对此不作限定。
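上述按压指令、松开指令与第四、第五手型对象之间预存的对应关系，同样可以用查找表描述。以下 Python 片段仅为示意，按键标识与姿势名称均为假设：

```python
# 预存的“预定按键标识 -> 手型对象”映射（示意）
PRESS_POSE_BY_KEY = {"trigger": "pose_index_bent", "grip": "pose_ring_pinky_bent"}
RELEASE_POSE_BY_KEY = {"trigger": "pose_index_extended", "grip": "pose_ring_pinky_extended"}

def handle_key_instruction(instruction):
    """根据输入设备发来的按压/松开指令返回应显示的手型对象。

    instruction: 形如 {"type": "press" 或 "release", "key_id": "trigger"} 的字典（结构为示意）
    """
    key = instruction["key_id"]
    if instruction["type"] == "press":
        return PRESS_POSE_BY_KEY.get(key)      # 第四手型对象：对应手指弯曲
    return RELEASE_POSE_BY_KEY.get(key)        # 第五手型对象：对应手指舒展
```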
可选地,在步骤201之后,虚拟现实主机还需要根据输入设备发送的输入信号检测第一手型对象是否与虚拟物品相交;若相交,则说明第一手型对象接触了能够与虚拟手进行交互的虚拟物品,此时,虚拟现实主机控制头戴式显示器直接显示第三手型对象,跳过显示第二手型对象的过程;若不相交,则说明第一手型对象未接触能够与虚拟手进行交互的虚拟物品,此时,保持第一手型对象不变,并继续接收输入设备发送的输入信号;或者,执行步骤202。
可选地,虚拟现实主机可以在接收到输入设备发送输入信号时检测第一手型对象是否与虚拟物品相交;或者,虚拟现实主机可以每隔预定时间间隔检测第一手型对象是否与虚拟物品相交,该预定时间间隔比输入设备发送输入信号的时间间隔长,本实施例不对虚拟现实主机检测第一手型对象是否与虚拟物品相交的时机此作限定。
虚拟现实主机检测第一手型对象是否与虚拟物品相交,包括:接收输入设备发送的输入信号,输入信号是根据第一手型对象对应的真实手在现实环境中的运动情况所采集的信号;根据输入信号确定第一手型对象在虚拟环境中的手型位置;检测手型位置是否与虚拟物品在虚拟环境中的物品位置相重叠;若手型位置与物品位置相重叠,则确定第一手型对象与虚拟物品相交。
其中,输入信号、手型位置、射线位置和物品坐标集的相关描述与步骤202中的描述相同,本实施例在此不作赘述。
检测手型位置是否与虚拟物品在虚拟环境中的物品位置相重叠是指:检测手型位置对应的坐标集与物品位置对应的坐标集是否存在交集。
请参考图14,假设第一手型对象1401接触了虚拟物品1402,则头戴式显示器显示第 三手型对象1403。
结合上述实施例可知,VR系统中开机并显示了第一手型对象开始,存在三种手型对象切换逻辑,分别为:第一种,虚拟环境中虚拟手接触虚拟物品时,由第一手型对象切换至第三手型对象;第二种,虚拟环境中射线与虚拟物品相交时,由第一手型对象切换至第二手型对象;第三种,虚拟环境中虚拟手既没有接触虚拟物品,射线也没有与虚拟物品相交时,保持第一手型对象不变。之后,头戴式显示器在显示每一帧虚拟环境中的虚拟手之前,都需要通过虚拟现实主机判断是否切换手型对象。
可选地,虚拟现实主机中预设有三种切换逻辑的判断优先级,根据优先级由高到低的顺序依次判断是否切换手型对象。本实施例不对该优先级的设置方式作限定。
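下面用一段示意性的 Python 伪码概括按上述优先级逐帧判断切换逻辑的过程（输入状态均以示意字段表示，仅为帮助理解的假设性示例，并非实际实现）：

```python
def decide_hand_pose(frame):
    """每帧按“接触 > 射线相交 > 空闲”的优先级决定应显示的手型对象（示意）。

    frame 为包含当前帧状态的字典，字段均为示意：
        hand_touches_item  虚拟手是否与虚拟物品接触
        ray_hits_item      沿手指前方延伸的射线是否与虚拟物品相交
        select / place     本帧是否收到选择指令 / 放置指令
    """
    if frame.get("hand_touches_item"):        # 第一种切换逻辑：虚拟手接触虚拟物品
        if frame.get("select"):
            return "第三手型对象"             # 直接握持，跳过第二手型对象
        if frame.get("place"):
            return "第一手型对象"             # 放下物品，回到空闲手型
    if frame.get("ray_hits_item"):            # 第二种切换逻辑：射线与虚拟物品相交
        if frame.get("select"):
            return "第三手型对象"
        if frame.get("place"):
            return "第一手型对象"
        return "第二手型对象"                 # 显式显示射线，指示待握持的虚拟物品
    return "第一手型对象"                     # 第三种切换逻辑：保持空闲手型
```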
请参考图15,其示出了本申请另一个实施例提供的用于虚拟现实场景中的手型显示方法的流程图。本实施例以该用于虚拟现实场景中的手型显示方法应用于图1所示的VR系统中来举例说明。假设虚拟现实主机中预设的优先级为:第一种切换逻辑高于第二种切换逻辑高于第三种切换逻辑。在步骤201之后,该方法还包括如下几个步骤:
步骤1501,检测当前的手型对象是否与虚拟物品相交,若是,则执行步骤1502;若否,则执行步骤1506。
虚拟现实主机检测当前的手型对象在三维虚拟空间中的坐标，与虚拟物品在三维虚拟空间中的坐标是否存在交集，若存在交集，则说明当前的手型对象与虚拟物品相交；若不存在交集，则说明当前的手型对象没有与虚拟物品相交。
当前的手型对象可以是默认的第一手型对象,也可以是上一帧虚拟环境中的手势对象,本实施例对此不作限定。
步骤1502,检测是否接收到选择指令,若是,则执行步骤1503;若否,则执行步骤1504。
步骤1503,显示第三手型对象。
步骤1504,检测是否接收到放置指令,若是,则执行步骤1514;若否,则执行步骤1505。
步骤1505,若当前的手型对象为第一手型对象或第二手型对象,则检测当前的手型对象中沿手指前方延伸的射线是否与虚拟物品相交,若是,则执行步骤1506;若否,则执行步骤1510。
虚拟现实主机检测第一手型对象或第二手型对象中的射线在三维虚拟空间中的坐标，与虚拟物品在三维虚拟空间中的坐标是否存在交集，若存在交集，则说明当前的手型对象中沿手指前方延伸的射线与虚拟物品相交；若不存在交集，则说明当前的手型对象中沿手指前方延伸的射线没有与虚拟物品相交。
若当前的手型对象为第一手型对象,则该射线隐藏显示;若当前的手型对象为第二手型对象,则该射线显式显示。
步骤1506,检测是否接收到选择指令,若是,则执行步骤1507;若否,则执行步骤1508。
步骤1507,显示第三手型对象,执行步骤1509。
步骤1508,显示第二手型对象。
步骤1509,检测是否接收到放置指令,若是,则执行步骤1514;若否,则执行步骤1510。
步骤1510,检测是否接收到按压指令,若是,则执行步骤1511;若否,则执行步骤1512。
步骤1511,显示第四手型对象。
步骤1512,检测是否接收到松开指令,若是,则执行步骤1514;若否,则执行步骤1513。
步骤1513,显示第五手型对象。
步骤1514,显示第一手型对象。
可选地,头戴式显示器显示每帧虚拟环境时,虚拟现实主机都需要执行上述步骤。
可选地,头戴式显示器也可以直接执行步骤1501,而不执行步骤201,本实施例对此不作限定。
可选地,在本实施例中,还提供有虚拟场景中的对象处理方法。该方法可以应用于如图16所示的由服务器1602和终端1604所构成的硬件环境中。如图16所示,服务器1602通过网络与终端1604进行连接,上述网络包括但不限于:广域网、城域网或局域网,终端1604并不限定于PC、手机、平板电脑等。本申请实施例的虚拟场景中的对象处理方法可以由服务器1602来执行,也可以由终端1604来执行,还可以是由服务器1602和终端1604共同执行。其中,终端1604执行本申请实施例的虚拟场景中的对象处理方法也可以是由安装在其上的客户端来执行。
可选地,服务器1602和终端1604可以统称为虚拟现实主机。
图17是根据本申请实施例的一种可选的虚拟场景中的对象处理方法的流程图,如图17所示,该方法可以包括以下步骤:
步骤S1702,检测对真实场景中的第一目标对象执行的第一操作;
步骤S1704,响应于第一操作在虚拟场景中生成第二目标对象对应的至少一个第一菜单对象,其中,第二目标对象为第一目标对象在虚拟场景中所对应的虚拟对象;
步骤S1706,检测对第一目标对象执行的第二操作,其中,第二操作用于指示在虚拟场景中将第二目标对象移动至至少一个第一菜单对象中的目标菜单对象所在的位置;
步骤S1708,响应于第二操作在虚拟场景中执行目标处理操作,其中,目标处理操作为目标菜单对象对应的处理操作,至少一个第一菜单对象中的每个第一菜单对象对应一种处理操作。
上述步骤S1702至步骤S1708,通过检测对第一目标对象执行的第一操作,然后根据检测到的第一操作,在虚拟场景中生成与第一目标对象对应的第二目标对象所对应的多个第一菜单对象,再检测对第一目标对象执行的第二操作,并根据检测到的第二操作,指示虚拟场景中的第二目标对象移动至第一菜单对象中目标菜单对象所在位置,在虚拟场景中的第二目标对象移动至目标对象所在位置的情况下,在虚拟场景中执行目标处理操作,从而无需模拟鼠标,将针对3D空间坐标转换为2D空间位置来执行操作,可以解决相关技术采用发射射线的方式定位虚拟场景中的2D菜单面板中的菜单选项,导致虚拟场景中的菜单选择操作比较复杂的技术问题,进而达到使用3D空间坐标直接执行操作,进而使得对虚拟场景中的菜单选择操作更加简便的技术效果。
在步骤S1702提供的技术方案中,第一目标对象可以为真实场景中用于对虚拟场景进行控制的设备对象,例如,第一目标对象可以是真实场景中的游戏手柄,遥控器等。
可选地,第一目标对象也可以称为输入设备。
在真实场景中用户可以对第一目标对象执行第一操作，其中，第一操作可以包括但并不限于：点击、长按、手势、摇晃等。本申请实施例可以通过检测真实场景中用户对第一目标对象所执行的第一操作，获取到与该第一操作相应的控制指令，其中，该控制指令可以用于控制虚拟场景。例如，在VR游戏应用中，用户可以按动游戏手柄的按键，实现控制虚拟游戏画面中生成菜单选项。或者，在VR视频应用中，用户可以按动遥控器中的按键来控制虚拟视频画面的播放动作。
可选地,本申请实施例可以实时检测对真实场景中的第一目标对象执行的第一操作,这样可以实现及时快速地对第一操作进行响应,进而可以使得对虚拟场景中的对象的处理操作更加及时,以提高用户对虚拟场景的使用体验。
可选地,本申请实施例可以每隔一段时长检测对真实场景中的第一目标对象执行的第一操作,这样可以实现节省虚拟现实主机的处理资源。
在步骤S1704提供的技术方案中,第二目标对象可以是真实场景中的第一目标对象在虚拟场景中对应的虚拟对象。
例如,假设第一目标对象为真实场景中的游戏手柄,则第二目标对象可以是在虚拟场景中的游戏手柄。虚拟场景中的游戏手柄在虚拟场景中的位置可以与真实场景中的游戏手柄在真实场景中的位置相对应,例如,用户控制真实场景中的游戏手柄移动时,虚拟场景中的游戏手柄也会随其移动,且移动方向和移动距离与真实场景中的游戏手柄的移动方向和移动距离相同。
又例如,假设第一目标对象为真实场景中的游戏手柄,则第二目标对象可以是在虚拟场景中的虚拟手。虚拟场景中的虚拟手在虚拟场景中的位置可以与真实场景中的游戏手柄在真实场景中的位置相对应,例如,用户控制真实场景中的游戏手柄移动时,虚拟场景中的虚拟手也会随其移动,且移动方向和移动距离与真实场景中的游戏手柄的移动方向和移动距离相同。
用户在对第一目标对象执行第一操作时,本申请实施例可以对该第一操作进行响应,响应过程可以包括在虚拟场景中生成第二目标对象对应的至少一个第一菜单对象,其中,第一菜单对象可以为用于对虚拟场景进行控制的虚拟菜单。例如,用户在现实场景中按下游戏手柄的菜单控制按键之后,在虚拟场景中的游戏手柄也相应执行按下菜单控制按键的操作,然后在虚拟场景中响应该操作可以在虚拟场景中生成菜单,供用户进行菜单对象的选择,以实现菜单对象所对应的功能。
需要说明的是,本申请实施例对第一菜单对象所对应的功能不做具体限定,第一菜单对象可以对应生成下拉菜单,也可以对应执行某个动作,或者完成某个任务等。
作为一种可选的实施例,步骤S1704响应于第一操作在虚拟场景中生成第二目标对象对应的至少一个第一菜单对象可以包括以下步骤:
步骤S17042,获取检测到第一操作时虚拟场景中当前的目标场景;
步骤S17044,按照预定的虚拟场景与菜单对象的对应关系在虚拟场景中的第二目标对象的周围生成与目标场景相对应的至少一个第一菜单对象。
采用本申请上述实施例,在检测到第一操作时,响应第一操作的过程可以包括:在接收到第一操作时,获取虚拟场景中当前的目标场景,然后可以按照预定的目标场景与菜单对象的对应关系,确定该目标场景所对应的菜单对象,也即在检测到第一操作时虚拟场景当前为目标场景,按照预定关系确定该目标场景对应的菜单对象为第一菜单对象,则在虚 拟场景中可以在第二目标对象的周围生成与目标场景相对应的第一菜单对象供用户选择,从而可以使用户根据生成的第一菜单对象选择相应的菜单选项。
可选地,在不同的目标场景中可以生成与不同目标场景相应的菜单对象,例如在射击类游戏中,生成的相应的菜单对象可以是武器装备选择菜单;在格斗类游戏中,生成的相应的菜单对象可以是技能选择菜单。对于其他目标场景所对应的菜单对象,在此处不再一一举例说明。
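上述“预定的虚拟场景与菜单对象的对应关系”可以用一张由目标场景到菜单项列表的映射表表示。以下 Python 片段仅为示意，场景名与菜单项内容均为假设：

```python
# 预定义的“目标场景 -> 第一菜单对象列表”对应关系（内容均为示意）
MENU_ITEMS_BY_SCENE = {
    "shooting": ["换弹匣", "切换武器", "投掷道具"],   # 射击类场景：武器装备相关菜单
    "fighting": ["技能一", "技能二", "连招"],        # 格斗类场景：技能选择菜单
}

def build_first_menu(current_scene):
    """检测到第一操作时，根据当前目标场景生成要在第二目标对象周围显示的第一菜单对象。"""
    return MENU_ITEMS_BY_SCENE.get(current_scene, ["系统设置", "退出"])
```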
作为一种可选的实施例,响应于第一操作在虚拟场景中的第二目标对象的周围生成至少一个第一菜单对象的排列方式本申请实施例不做具体限定,第一菜单对象在第二目标对象周围的排列方式可以包括以下至少之一:
(1)在预定圆周上按照预定间隔生成至少一个第一菜单对象,其中,预定圆周可以为以第二目标对象所在位置为圆心,预定距离为半径所构成的圆周。此处的预定距离以及预定间隔可以根据实际需求设定,此处不做具体限定。
（2）在第二目标对象的预定方向上按照预定排列顺序生成至少一个第一菜单对象，其中，预定方向包括以下至少之一：上方、下方、左方、右方等，预定排列顺序包括以下至少之一：直线排列顺序、曲线排列顺序等。
需要说明的是,上述只是列举了部分排列方式,本申请实施例还可以采用其他排列方式,例如随机排列方式,此处不再一一举例说明。
采用本申请上述实施例,响应第一操作,可以在虚拟场景中以第二目标对象为中心在第二目标对象的周围按照预定圆周均匀排布多个第一菜单对象,其中,将每个第一菜单对象与第二目标对象的预定距离作为预定圆周的半径,相邻两个第一菜单对象之间的按照预定间隔排列。或者,第一菜单对象还可以以直线形或曲线形排列在第二目标对象的上方、下方、左方、右方等方向。该方案在第二目标对象的周围排列多个第一菜单对象,可以使用户在观察到第二目标对象的情况下,方便地控制第二目标对象向所需选择的第一菜单对象的方向上移动,完成对菜单对象的选择。
可选地,第一菜单对象可以使用3D圆球的样式排列在第二目标对象的周围;第一菜单对象还可以使用3D方块的样式排列在第二目标对象周围,第一菜单对象还可以使用其他不同的样式排列在第二目标对象周围,在此处不一一举例说明。
可选地,第一菜单对象在第二目标对象周围,可以采用圆周形式排列,还可以采用其他形式排列,在此处也不再一一举例说明。
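作为排布方式（1）的一个参考，下面的 Python 片段在以第二目标对象所在位置为圆心、预定距离为半径的圆周上按等间隔计算各第一菜单对象的位置（仅为假设性示例，坐标系与参数均为示意）：

```python
import math

def circular_menu_positions(center, radius, count):
    """在以 center 为圆心、radius 为半径的圆周上等间隔排布 count 个第一菜单对象。

    center: 第二目标对象所在位置 (x, y, z)
    radius: 预定距离（即圆周半径），例如 0.2 米
    count:  第一菜单对象的个数
    返回每个菜单对象在虚拟场景中的位置列表（此处假设圆平面平行于 x-y 平面）。
    """
    cx, cy, cz = center
    positions = []
    for i in range(count):
        angle = 2 * math.pi * i / count          # 预定间隔：均分整个圆周
        positions.append((cx + radius * math.cos(angle),
                          cy + radius * math.sin(angle),
                          cz))
    return positions
```

若采用排布方式（2），只需把圆周采样替换为沿预定方向按固定步长取点即可。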
在步骤S1706提供的技术方案中,在响应于对第一目标对象执行的第一操作之后,本申请实施例还可以实时检测在真实场景中用户对第一目标对象所执行的第二操作,其中,第二操作可以包括但并不限于移动、滑动等操作。
可选地,在响应于对第一目标对象执行的第一操作之后,本申请实施例还可以每隔一段时长检测在真实场景中用户对第一目标对象所执行的第二操作。
可选地,第二操作可以是在用户对游戏手柄执行第一操作后执行的其他操作,例如,在第一操作是按住游戏手柄按键的情况下,第二操作即为按住游戏手柄移动。第二操作还有多种不同的实现方式,在此不一一赘述。
在检测到对第一目标对象所执行的第二操作之后,本申请实施例可以响应该第二操作,具体可以包括在虚拟场景中控制第二目标对象按照第一目标对象在真实场景中的移动方向 和移动距离进行移动,使得第二目标对象可以移动至虚拟场景中的目标菜单对象所在的位置,其中,目标菜单对象可以是虚拟场景中第二目标对象周围的至少一个第一菜单对象中的任意一个菜单对象,用户可以通过对第一目标对象执行第二操作,控制虚拟场景中的第二目标对象在第二操作的控制下移动至多个第一菜单对象中的某个目标菜单对象。
例如,在用户需要选择某一个目标菜单对象的情况下,用户手持现实场景中的游戏手柄移动,则虚拟场景中的游戏手柄也向相应的方向移动,用户通过控制现实场景中游戏手柄的移动方向来控制虚拟场景中游戏手柄的移动方向,使虚拟场景中的游戏手柄移动到需要选择的目标菜单对象上,完成目标菜单对象的选择。
在步骤S1708提供的技术方案中，本申请实施例响应第二操作可以控制虚拟场景中的第二目标对象移动至目标菜单对象，并在第二目标对象移动至目标菜单对象时触发目标菜单对象对应的目标处理操作。需要说明的是，虚拟场景中排列在第二目标对象周围的至少一个第一菜单对象中的每个第一菜单对象均对应一种处理操作，其中，目标菜单对象对应的处理操作为目标处理操作。还需要说明的是，第一菜单对象所对应的处理操作可以包括但并不限于生成第一菜单对象的下拉菜单对象、实现某种功能等。
例如,在射击类游戏中,即虚拟场景为射击游戏环境,在该虚拟场景中,用户控制虚拟游戏手柄移动至换弹匣的目标菜单对象所在位置,则响应用户选择的换弹匣的目标菜单对象对应的目标处理操作,控制虚拟场景中的游戏角色更换武器弹匣,执行目标处理操作。需要说明的是,目标处理操作可以包括多种,例如:执行该菜单对象对应的功能,或者,触发生成该菜单选项的下拉菜单,目标处理操作还包括多种实现方式,在此不一一举例说明。
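上述“第二目标对象移动至目标菜单对象所在的位置”的判定，可以简化为虚拟手柄位置与各菜单对象位置之间的距离（碰撞）检测。以下 Python 片段仅为示意：

```python
import math

def pick_target_menu(controller_pos, menu_items, hit_radius=0.05):
    """返回虚拟手柄当前碰到的目标菜单对象，若没有碰到任何菜单对象则返回 None。

    controller_pos: 第二目标对象（虚拟手柄）的当前位置 (x, y, z)
    menu_items:     形如 [{"name": "换弹匣", "pos": (x, y, z)}, ...] 的列表（结构为示意）
    hit_radius:     触发选择所允许的最大距离（米）
    """
    for item in menu_items:
        if math.dist(controller_pos, item["pos"]) <= hit_radius:
            return item           # 碰到该菜单对象，即可触发其对应的目标处理操作
    return None
```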
作为一种可选的实施例,步骤S1708响应于第二操作在虚拟场景中执行目标处理操作可以包括以下至少之一:
在虚拟场景中生成至少一个第二菜单对象,其中,至少一个第二菜单对象为目标菜单对象的下拉菜单对象。
将虚拟场景中的第一场景切换至第二场景,例如游戏场景的切换。
将虚拟场景中操作对象的属性设置为目标属性,例如游戏人物皮肤、武器装备、技能的更新。
控制虚拟场景中的操作对象执行目标任务,例如,游戏人物执行打怪任务。
需要说明的是,目标处理操作并不仅限于上述操作,目标处理操作还可以包括其他操作,此处不再一一举例说明。采用本申请上述实施例,响应于第二操作在虚拟场景中执行目标处理操作,可以根据用户的需求以及不同的第一菜单对象所代表的功能,选择不同的目标处理操作,可以使虚拟场景中菜单能够满足多种使用需求。
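选中目标菜单对象后，可以按菜单项预先登记的操作类型对目标处理操作进行分发。以下 Python 片段给出一种示意性的分发方式，操作类型与数据结构均为假设：

```python
def execute_target_operation(menu_item, scene):
    """根据目标菜单对象登记的操作类型，在虚拟场景中执行对应的目标处理操作（示意）。

    menu_item: 形如 {"name": "武器", "op": "submenu", "payload": ...} 的字典（结构为示意）
    scene:     表示虚拟场景状态的字典（结构为示意）
    """
    op = menu_item.get("op")
    if op == "submenu":                       # 生成目标菜单对象的下拉菜单（第二菜单对象）
        scene["open_menu"] = menu_item["payload"]
    elif op == "switch_scene":                # 将第一场景切换至第二场景
        scene["current"] = menu_item["payload"]
    elif op == "set_attribute":               # 将操作对象的属性设置为目标属性（如皮肤、装备）
        scene["player"].update(menu_item["payload"])
    elif op == "run_task":                    # 控制操作对象执行目标任务（如打怪任务）
        scene["tasks"].append(menu_item["payload"])
    return scene
```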
作为一种可选的实施例,在响应于第一操作在虚拟场景中生成第二目标对象对应的至少一个第一菜单对象之后,该实施例还可以包括:检测对第一目标对象执行的第三操作;响应于第三操作在虚拟场景中删除至少一个第一菜单对象。
采用本申请上述实施例,在虚拟场景中的第二目标对象周围生成第一菜单对象后,用户还可以控制第一目标对象执行第三操作,其中,第三操作可以包括但并不限于松开按键、点击、长按或者移动等操作。在检测到对第一目标对象所执行的第三操作时,本申请实施例可以控制虚拟场景中的第二目标对象相应执行该第三操作,在虚拟场景中删除第一菜单 对象,也即在虚拟场景中取消显示菜单内容。
例如，在虚拟场景中的游戏手柄周围生成多个第一菜单对象后，用户控制现实场景中的游戏手柄执行第三操作，例如按动游戏手柄中的某个按键，或者摇动游戏手柄上的某个摇杆，或者松开游戏手柄上的按键，则在虚拟场景中的游戏手柄也执行相应的操作，在虚拟场景中删除多个第一菜单对象，使得虚拟场景中取消显示菜单内容。
作为一种可选的实施例,在响应于第二操作在虚拟场景中执行目标处理操作之前,该实施例还可以包括:响应于第二操作在虚拟场景中为目标菜单对象设置标记,其中,标记用于指示第二目标对象移动至目标菜单对象所在的位置。
采用上述实施例,可以在目标菜单对象上设置标记,使虚拟场景中的第二目标对象在移动至目标菜单对象所在位置的过程中,可以使用户明确看到该标记,并在该标记的引导下可以更加方便地进行移动。
例如,用户可以控制虚拟场景中的游戏手柄指向某一目标菜单对象,则目标菜单对象会在标记的作用下放大、亮光、闪烁、或者旋转。需要说明的是,目标菜单对象会在标记的作用下还会有其他表现形式,在此不一一举例说明。
本申请还提供了一种实施例,该实施例提供了一种虚拟现实环境下手部菜单的交互方案。
本申请所描述的应用场景是在VR环境下。对于主机游戏,以及手机游戏等所有2D显示器环境下的游戏,不管是3D还是2D场景的游戏,菜单的制作都是采用2D界面来实现的。这样做的原因是在最终显示是2D显示器的情况下,使菜单不作为游戏场景中应该存在的内容,只是作为用户和游戏内容的连接媒介,使用2D界面来制作菜单,可以直接让菜单的2D面板面向显示器显示方向(也就是虚拟世界中玩家相机方向),从而可以更加快速方便的让用户选择,也不会影响到虚拟世界的游戏逻辑,使菜单相对独立。
在VR环境下,由于用户与主机的交互方式,不再是通过鼠标、键盘这样通过2D界面的位置变化来反向映射到3D空间去操作虚拟世界,而是直接获取用户在真实3D空间的位置,根据这个位置直接对应到虚拟空间的3D位置,这样就不存在原来的通过鼠标操作,如2D屏幕空间——3D虚拟空间这样的对应关系。
使用3D的方式来展示菜单,可以将菜单作为虚拟世界的3D对象直接融入到虚拟世界场景里面去,让用户可以更加方便、直观的操作,VR游戏的代入感也会变的更强。
本申请主要描述的是手部菜单的表现和逻辑实现。当用户呼出菜单的时候，菜单每一个选项作为一个3D对象按照一定的排布算法出现在手部的周围，然后用户通过预先定义的行为去触发菜单选项，选中的菜单项触发对应的功能，并且在触发后整个菜单消失。
本申请提供了一种VR环境下手部3D对象菜单。在用户呼出菜单的情况下,菜单中每一个选项作为一个3D对象按照一定的排布算法出现在手部的周围,然后用户通过预先定义的行为去触发菜单选项,选中的菜单项触发对应的功能,并且在触发后整个菜单消失。
本申请提供的手部3D对象菜单可以作为VR应用中的快捷菜单,特别是在特定的游戏场景中,每一个菜单选项都可以真实的存在于游戏场景中,让用户不会因为菜单的出现而影响游戏的沉浸式体验,又可以使用户可以快速的选择相应的功能。
图18是根据本申请实施例的一种可选的虚拟现实环境下手部菜单的示意图,如图18所示,展示的是一个自制的手柄模型,该手柄模型存在于游戏环境的虚拟3D空间中,其中, 手柄在虚拟空间位置与手柄在真实空间位置一一对应。用户通过控制手中的真实空间中的手柄来控制虚拟空间中的手柄。
可选地,游戏手柄可以是Vive手柄、Oculus Touch手柄、或相应的双手分离式的VR手柄。
需要说明的是,手柄无论在虚拟的空间的任何位置,当用户按下实体手柄的菜单键的情况下(可以是按下Vive手柄中Menu按键的情况下,或按下Oculus Touch手柄中A/X按键的情况下),虚拟空间中需要弹出的菜单对象从手柄所在位置诞生,然后移动到预定计算得出的目标位置。
可选地,该目标位置可以是手柄所在位置周围的一个相对位置。
图19是根据本申请实施例的另一种可选的虚拟现实环境下手部菜单的示意图,如图19所示,该目标位置可以将按下按钮的瞬间手柄在虚拟空间的位置作为原始位置,并以该原始位置作为圆心,以20cm为半径,建立圆平面,并在该圆平面的周围按照等间距排列多个选择对象,其中,圆平面的法向量朝向虚拟世界的相机,即圆平面的法向量朝向用户观看的方向。
在本申请中提供的方案中,虚拟空间的手柄是一个3D模型,出现在手柄周围的每一个菜单选项也是3D模型。虚拟空间的手柄位置和真实空间的手柄位置一一对应,虚拟空间中的菜单选项所在位置也与真实空间的位置一一对应。
在本申请中提供的方案中,用户按下菜单按钮的瞬间触发多个菜单选项,使多个菜单选项出现在虚拟世界中的某个固定位置,这些3D模型表示的虚拟菜单所在的位置是不会再发生改变的,直到这一次的交互过程结束菜单消失。
可选地,在用户按下菜单键弹出菜单后,需要用户一直保持菜单键处于按住状态,若用户松开菜单键,则表示整个菜单交互过程结束。
需要说明的是,菜单在交互过程中,菜单选项的触发等其他操作行为与手柄的按键无关,菜单键控制的就是整个菜单在交互过程的出现和消失。
在本申请提供的方案中,在任意时间段内,虚拟空间中的手柄在虚拟空间中的位置都是与真实空间中的手柄在真实空间位置一一对应的。因此在这些虚拟菜单选项出现的情况下,设定用户手柄去触碰这些虚拟的菜单对象的情况,就是触发这些菜单选项的标示所对应的功能。在手柄的移动位置与菜单选项发生碰撞的情况下,触发菜单功能。
图20是根据本申请实施例的一种可选的虚拟现实环境下手部菜单的示意图,如图20所示,手柄在移动过程中,还可以设计让菜单选项的大小随着手柄与菜单选项的距离接近而变大,让菜单选项的大小随着手柄与菜单选项的距离变远而变小,从而可以根据该提示,引导用户去触碰这些菜单选项。
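上述“菜单选项随手柄距离变近而放大、变远而缩小”的提示效果，可以按距离做简单的线性插值。以下 Python 片段仅为示意，比例范围与距离阈值均为假设：

```python
import math

def menu_item_scale(controller_pos, item_pos, near=0.05, far=0.30,
                    min_scale=1.0, max_scale=1.6):
    """根据手柄与菜单选项的距离计算该选项的显示缩放比例（示意）。

    距离小于 near（米）时取最大比例，大于 far 时取最小比例，中间线性插值。
    """
    dist = math.dist(controller_pos, item_pos)
    if dist <= near:
        return max_scale
    if dist >= far:
        return min_scale
    t = (dist - near) / (far - near)          # 0（最近）到 1（最远）
    return max_scale + t * (min_scale - max_scale)
```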
可选地,在用户触碰到菜单对象的情况下,触发菜单对应的功能,该功能可以是设定的游戏逻辑以及关闭菜单,还可以是打开下一级的菜单。
图21是根据本申请实施例的一种可选的菜单控制逻辑的示意图,如图21所示,在发出【菜单触发】的请求后,进入【单层菜单逻辑】,根据【单层菜单逻辑】执行相应的菜单逻辑,并对执行的结果进行【反馈】,再根据【反馈】的结果,选择执行【菜单消失】或开启【二级菜单】,若执行【二级菜单】,则返回【单层菜单逻辑】执行第二次菜单逻辑。
可选地，开启一次菜单交互的方式可以包括【菜单触发】和【二级菜单】的方式，【菜单触发】表示用户按下菜单按键；【二级菜单】表示用户触发的菜单选项进一步开启了新的二级菜单选项，在这种情况下，上一级的菜单选项消失完成上一次的菜单交互，同时生成新的菜单选项开启当前菜单交互。
可选地,关闭一次菜单交互的方式同样可以包括【菜单触发】和【菜单消失】的方式,【菜单触发】表示的是用户移动手柄去触碰了菜单选项中的一个,在这种情况下,无论是开启下一级菜单还是执行菜单选项的预置逻辑,在这之前的第一步都是销毁这一次的所有的菜单选项。【菜单消失】表示的是用户在这一次菜单交互中没有去碰撞任意一个选项,而是松开了菜单按钮,这时候会触发结束当前【单层菜单逻辑】的行为。
需要说明的是【单层菜单逻辑】内部,是一次菜单交互的执行逻辑。
可选地,【单层菜单逻辑】内包括初始化阶段、以及执行阶段。
开启一次菜单交互后,首先进入初始化阶段,生成菜单选项。
可选地,根据当前的游戏环境确定当前要弹出的手部菜单的内容,实现的方法可以是,在手部菜单内存储一个变量来标识当前菜单的类型,并针对每一种类型预先定义手部菜单须要弹出的内容。在用户按下菜单按钮的情况下,检查当前游戏环境,确定这个菜单类型变量的值,作为创建当前菜单选项的参数,由此实现【菜单内容生成】。
初始化第二步是【确定菜单空间位置】,实现方法可以是根据当前【手柄位置】,通过算法来让所有的菜单选项排列在按下菜单瞬间的手柄位置的周围。
可选地,该算法可以是:将按下按钮的瞬间手柄在虚拟空间的位置作为原始位置,并以该原始位置作为圆心,以20cm为半径,建立圆平面,并在该圆平面的周围按照等间距排列多个选择对象,其中,圆平面的法向量朝向虚拟世界的相机,即圆平面的法向量朝向用户观看的方向。并且存储每一个菜单选项的碰撞体以及所在位置。
然后进入执行阶段,对于虚拟空间中每一帧影像都要执行的逻辑是:获得当前虚拟手柄的【手柄位置】,然后去执行下一步判断当前虚拟手柄的【手柄位置】是否满足结束条件,若不满足结束条件,则继续循环这一步。
需要说明的是结束条件可以包括以下情况,例如:用户松开菜单按钮;当前的虚拟手柄的【手柄位置】与任意一个菜单选项的发生碰撞。只要满足上述的任意一个条件,都会结束当前一次菜单交互,完成【单层逻辑菜单】。
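结合图21，【单层菜单逻辑】的初始化与逐帧执行可以概括为下面这段示意性的 Python 伪码（按钮状态读取与碰撞检测均以占位函数表示，仅为帮助理解的假设性示例，并非实际实现）：

```python
import math

def single_layer_menu(get_controller_pos, menu_button_down, menu_items, hit_radius=0.05):
    """【单层菜单逻辑】的示意实现：菜单位置在初始化阶段已确定，此处逐帧检查结束条件。

    get_controller_pos: 返回虚拟手柄当前位置的函数
    menu_button_down:   返回菜单键是否仍处于按住状态的函数
    menu_items:         初始化阶段生成的菜单选项，形如 [{"name": ..., "pos": ...}, ...]
    返回被触发的菜单选项；若用户直接松开菜单键则返回 None（对应【菜单消失】）。
    """
    while True:                                   # 对虚拟空间中的每一帧执行
        if not menu_button_down():                # 结束条件一：用户松开菜单按钮
            return None
        pos = get_controller_pos()                # 获得当前虚拟手柄的【手柄位置】
        for item in menu_items:                   # 结束条件二：手柄与任一菜单选项发生碰撞
            if math.dist(pos, item["pos"]) <= hit_radius:
                return item                       # 触发该选项：执行预置逻辑或打开二级菜单
```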
本申请上述实施例,可以在展示上更加的契合用户心里,通过虚拟的手来触碰虚拟物件对象同时给你视觉听觉触觉的反馈,让用户的感觉在VR环境下更加的真实。
本申请上述实施例,可以在VR环境下让用户快速选择菜单。
在本申请上述实施例中,只要是双手分离式的手柄,且手柄可以获得3D空间的位置的都可以得到本申请所提供方案的支持。
在本申请上述实施例中,菜单按键可以随意调整。
在本申请上述实施例中,菜单选项出现位置的算法可以是多样的,采用上述实施例中的算法可以更加符合用户的选择习惯进行选项。
在本申请上述实施例中,按键一直要按住才可以显示菜单的方案还可以采用其他方式实现,例如,按一下按键,则开始执行菜单交互,再次按一下按键,则结束菜单交互。
在本申请上述实施例中,在手柄移动过程中的菜单反应动画也是多样的,在此不再一一赘述。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例所述的虚拟场景中的对象处理方法。
请参考图22,其示出了本申请一个实施例提供的用于虚拟现实场景中的手型显示装置的框图。该用于虚拟现实场景中的手型显示装置可以通过软件、硬件或者两者的结合实现成为VR设备的全部或一部分,该用于虚拟现实场景中的手型显示装置包括:第一显示模块2210、第二显示模块2220和第三显示模块2230。
第一显示模块2210,用于实现步骤201有关的显示功能;
第二显示模块2220,用于实现步骤202有关的显示功能;
第三显示模块2230,用于实现步骤203有关的显示功能。
可选地,该装置还包括:第一接收模块、第一确定模块、第二确定模块、第一检测模块和第三确定模块。
第一接收模块,用于接收输入设备发送的输入信号,输入信号是根据第一手型对象对应的真实手在现实环境中的运动情况所采集的信号;
第一确定模块,用于根据输入信号确定第一手型对象在虚拟环境中的手型位置;
第二确定模块,用于根据手型位置确定射线在虚拟环境中的射线位置;
第一检测模块,用于检测射线位置是否与虚拟物品在虚拟环境中的物品位置相重叠;
第三确定模块,用于当射线位置与物品位置相重叠时,确定射线与虚拟物品相交。
可选地,第三显示模块2230,还用于:当第一手型对象与虚拟物品相交且接收到选择指令时,显示第三手型对象。
可选地,该装置还包括:第二接收模块、第四确定模块、第二检测模块和第五确定模块。
第二接收模块,用于接收输入设备发送的输入信号,输入信号是根据第一手型对象对应的真实手在现实环境中的运动情况所采集的信号;
第四确定模块,用于根据输入信号确定第一手型对象在虚拟环境中的手型位置;
第二检测模块,用于检测手型位置是否与虚拟物品在虚拟环境中的物品位置相重叠;
第五确定模块,用于当手型位置与物品位置相重叠时,确定第一手型对象与虚拟物品相交。
可选地,第三显示模块2230,还用于:根据虚拟物品的类型,显示与类型对应的第三手型对象。
可选地,该装置还包括:第四显示模块。
第四显示模块,用于当根据输入设备发送的输入信号确定射线与虚拟物品相交时,以预设显示方式显示虚拟物品。
可选地,第一显示模块2210,还用于:当接收到放置指令时,显示第一手型对象。
可选地,该装置还包括:第五显示模块。
第五显示模块,用于当接收到预定按键对应的按压指令时,显示第四手型对象,第四手型对象是与预定按键对应的手指呈弯曲状态的手型对象。
可选地,该装置还包括:第六显示模块。
第六显示模块,用于当接收到预定按键对应的松开指令时,显示第五手型对象,第五手型对象是与预定按键对应的手指呈舒展状态的手型对象。
具体地,详见上述的用于虚拟现实场景中的手型显示方法的实施例。
根据本申请实施例,还提供了一种用于实施上述虚拟场景中的对象处理方法的虚拟场景中的对象处理装置。图23是根据本申请实施例的一种可选的虚拟场景中的对象处理装置的示意图,如图23所示,该装置可以包括:
第一检测单元231,用于检测对真实场景中的第一目标对象执行的第一操作;第一响应单元233,用于响应于第一操作在虚拟场景中生成第二目标对象对应的至少一个第一菜单对象,其中,第二目标对象为第一目标对象在虚拟场景中所对应的虚拟对象;第二检测单元235,用于检测对第一目标对象执行的第二操作,其中,第二操作用于指示在虚拟场景中将第二目标对象移动至至少一个第一菜单对象中的目标菜单对象所在的位置;第二响应单元237,用于响应于第二操作在虚拟场景中执行目标处理操作,其中,目标处理操作为目标菜单对象对应的处理操作,至少一个第一菜单对象中的每个第一菜单对象对应一种处理操作。
需要说明的是,该实施例中的第一检测单元231可以用于执行本申请实施例中的步骤S1702,该实施例中的第一响应单元233可以用于执行本申请实施例中的步骤S1704,该实施例中的第二检测单元235可以用于执行本申请实施例中的步骤S1706,该实施例中的第二响应单元237可以用于执行本申请实施例中的步骤S1708。
此处需要说明的是,上述模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例所公开的内容。需要说明的是,上述模块作为装置的一部分可以运行在如图16所示的硬件环境中,可以通过软件实现,也可以通过硬件实现。
作为一种可选的实施例,如图24所示,该实施例还可以包括:第三检测单元238,用于在响应于第一操作在虚拟场景中生成第二目标对象对应的至少一个第一菜单对象之后检测对第一目标对象执行的第三操作;第三响应单元239,响应于第三操作在虚拟场景中删除至少一个第一菜单对象。
作为一种可选的实施例,如图25所示,该实施例还可以包括:第四响应单元236,用于在响应于第二操作在虚拟场景中执行目标处理操作之前,响应于第二操作在虚拟场景中为目标菜单对象设置标记,其中,标记用于指示第二目标对象移动至目标菜单对象所在的位置。
作为一种可选的实施例,如图26所示,第一响应单元233可以包括:获取模块2331,用于获取检测到第一操作时虚拟场景中当前的目标场景;第一生成模块2332,用于按照预定的虚拟场景与菜单对象的对应关系在虚拟场景中的第二目标对象的周围生成与目标场景 相对应的至少一个第一菜单对象。
作为一种可选的实施例,如图27所示,第一响应单元233可以包括以下模块中的至少之一:第二生成模块2333,用于在预定圆周上按照预定间隔生成至少一个第一菜单对象,其中,预定圆周为以第二目标对象所在位置为圆心,预定距离为半径所构成的圆周;第三生成模块2334,用于在第二目标对象的预定方向上按照预定排列顺序生成至少一个第一菜单对象,其中,预定方向包括以下至少之一:上方、下方、左方、右方,预定排列顺序包括以下至少之一:直线排列顺序、曲线排列顺序。
作为一种可选的实施例,如图28所示,第二响应单元237可以包括以下模块中的至少之一:第四生成模块2351,用于在虚拟场景中生成至少一个第二菜单对象,其中,至少一个第二菜单对象为目标菜单对象的下拉菜单对象;切换模块2352,用于将虚拟场景中的第一场景切换至第二场景;设置模块2353,用于将虚拟场景中操作对象的属性设置为目标属性;控制模块2354,用于控制虚拟场景中的操作对象执行目标任务。
上述模块,通过检测对第一目标对象执行的第一操作,然后根据检测到的第一操作,在虚拟场景中与第一目标对象对应的第二目标对象周围生成多个第一菜单对象,再检测对第一目标对象执行的第二操作,并根据检测到的第二操作,指示虚拟场景中的第二目标对象移动至第一菜单对象中目标菜单对象所在位置,在虚拟场景中的第二目标对象移动至目标对象所在位置的情况下,在虚拟场景中执行目标处理操作,从而无需模拟鼠标,将针对3D空间坐标转换为2D空间位置来执行操作,解决了相关技术采用发射射线的方式定位虚拟场景中的2D菜单面板中的菜单选项,导致虚拟场景中的菜单选择操作比较复杂的技术问题,进而达到使用3D空间坐标直接执行操作,进而使得对虚拟场景中的菜单选择操作更加简便的技术效果。
请参考图29,其示出了本申请一个实施例提供的VR系统的结构示意图。该VR系统包括:头戴式显示器120、虚拟现实主机140和输入设备160。
头戴式显示器120是用于佩戴在用户头部进行图像显示的显示器。
头戴式显示器120通过柔性电路板或硬件接口与虚拟现实主机140电性相连。
虚拟现实主机140通常集成在头戴式显示器120的内部。虚拟现实主机140包括处理器142和存储器144。存储器144是用于存储诸如计算机可读指令、数据结构、程序模块或其他数据等信息的任何方法或技术实现的易失性和非易失性、可移动和不可移动介质,比如RAM、ROM、EPROM、EEPROM、闪存或其他固态存储其技术,CD-ROM、DVD或其他光学存储、磁带盒、磁带、磁盘存储或其他磁性存储设备。存储器144存储有一个或一个以上的程序指令,该程序指令包括用于实现上述各个方法实施例所提供的用于虚拟现实场景中的手型显示方法的指令;或者,实现上述各个方法实施例所提供的虚拟场景中的对象处理方法的指令。处理器142用于执行存储器144中的指令,来实现上述各个方法实施例所提供的用于虚拟现实场景中的手型显示方法;或者,实现上述各个方法实施例所提供的虚拟场景中的对象处理方法。虚拟现实主机140通过线缆、蓝牙连接或Wi-Fi(Wireless-Fidelity,无线保真)连接与输入设备160相连。
输入设备160是体感手套、体感手柄、遥控器、跑步机、鼠标、键盘、人眼聚焦设备等输入外设。
本申请提供了一种计算机可读存储介质,所述存储介质中存储有至少一条指令,所述至少一条指令由所述处理器加载并执行以实现上述各个方法实施例提供的用于虚拟现实场景中的手型显示方法。
本申请还提供了一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行上述各个方法实施例提供的用于虚拟现实场景中的手型显示方法。
根据本申请实施例,还提供了一种用于实施上述虚拟场景中的对象处理方法的终端。
图30是根据本申请实施例的一种终端的结构框图,如图30所示,该终端可以包括:一个或多个(图中仅示出一个)处理器3001、存储器3003、以及传输装置3005,如图30所示,该终端还可以包括输入输出设备3007。
其中,存储器3003可用于存储软件程序以及模块,如本申请实施例中的虚拟场景中的对象处理方法和装置对应的程序指令/模块;或者,用于虚拟现实场景中的手型显示方法和装置对应的程序指令/模块。处理器3001通过运行存储在存储器3003内的软件程序以及模块,从而执行各种功能应用以及数据处理,即实现上述的虚拟场景中的对象处理方法;或者,用于虚拟现实场景中的手型显示方法。
存储器3003可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器3003可进一步包括相对于处理器3001远程设置的存储器,这些远程存储器可以通过网络连接至终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
上述的传输装置3005用于经由一个网络接收或者发送数据。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置3005包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备与路由器相连从而可与互联网或局域网进行通讯。在一个实例中,传输装置3005为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
其中,具体地,存储器3003用于存储应用程序。
处理器3001可以调用存储器3003存储的应用程序,以执行下述步骤:检测对真实场景中的第一目标对象执行的第一操作;响应于第一操作在虚拟场景中生成第二目标对象对应的至少一个第一菜单对象,其中,第二目标对象为第一目标对象在虚拟场景中所对应的虚拟对象;检测对第一目标对象执行的第二操作,其中,第二操作用于指示在虚拟场景中将第二目标对象移动至至少一个第一菜单对象中的目标菜单对象所在的位置;响应于第二操作在虚拟场景中执行目标处理操作,其中,目标处理操作为目标菜单对象对应的处理操作,至少一个第一菜单对象中的每个第一菜单对象对应一种处理操作。
处理器3001还用于执行下述步骤:检测对第一目标对象执行的第三操作;响应于第三操作在虚拟场景中删除至少一个第一菜单对象。
处理器3001还用于执行下述步骤:响应于第二操作在虚拟场景中为目标菜单对象设置标记,其中,标记用于指示第二目标对象移动至目标菜单对象所在的位置。
处理器3001还用于执行下述步骤:获取检测到第一操作时虚拟场景中当前的目标场景; 按照预定的虚拟场景与菜单对象的对应关系在虚拟场景中的第二目标对象的周围生成与目标场景相对应的至少一个第一菜单对象。
处理器3001还用于执行下述步骤:在预定圆周上按照预定间隔生成至少一个第一菜单对象,其中,预定圆周为以第二目标对象所在位置为圆心,预定距离为半径所构成的圆周;在第二目标对象的预定方向上按照预定排列顺序生成至少一个第一菜单对象,其中,预定方向包括以下至少之一:上方、下方、左方、右方,预定排列顺序包括以下至少之一:直线排列顺序、曲线排列顺序。
处理器3001还用于执行下述步骤:在虚拟场景中生成至少一个第二菜单对象,其中,至少一个第二菜单对象为目标菜单对象的下拉菜单对象;将虚拟场景中的第一场景切换至第二场景;将虚拟场景中操作对象的属性设置为目标属性;控制虚拟场景中的操作对象执行目标任务。
采用本申请实施例，提供了一种虚拟场景中的对象处理方案。通过检测对第一目标对象执行的第一操作，然后根据检测到的第一操作，在虚拟场景中与第一目标对象对应的第二目标对象周围生成多个第一菜单对象，再检测对第一目标对象执行的第二操作，并根据检测到的第二操作，指示虚拟场景中的第二目标对象移动至第一菜单对象中目标菜单对象所在位置，在虚拟场景中的第二目标对象移动至目标对象所在位置的情况下，在虚拟场景中执行目标处理操作，从而无需模拟鼠标，将针对3D空间坐标转换为2D空间位置来执行操作，解决了相关技术采用发射射线的方式定位虚拟场景中的2D菜单面板中的菜单选项，导致虚拟场景中的菜单选择操作比较复杂的技术问题，进而达到使用3D空间坐标直接执行操作，进而使得对虚拟场景中的菜单选择操作更加简便的技术效果。
本领域普通技术人员可以理解,图30所示的结构仅为示意,终端可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。图30其并不对上述电子装置的结构造成限定。例如,终端还可包括比图30中所示更多或者更少的组件(如网络接口、显示装置等),或者具有与图30所示不同的配置。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令终端设备相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取器(Random Access Memory,RAM)、磁盘或光盘等。
本申请的实施例还提供了一种存储介质。可选地,在本实施例中,上述存储介质可以用于执行虚拟场景中的对象处理方法的程序代码。所述存储介质中存储有至少一条指令,所述至少一条指令由所述处理器加载并执行以实现上述各个方法实施例提供的虚拟场景中的对象处理方法。
本申请还提供了一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行上述各个方法实施例提供的虚拟场景中的对象处理方法。
可选地,在本实施例中,上述存储介质可以位于图16所示的网络中的多个网络设备中的至少一个网络设备上。
可选地,在本实施例中,存储介质被设置为存储用于执行以下步骤的程序代码:
S1,检测对真实场景中的第一目标对象执行的第一操作;
S2,响应于第一操作在虚拟场景中生成第二目标对象对应的至少一个第一菜单对象,其中,第二目标对象为第一目标对象在虚拟场景中所对应的虚拟对象;
S3,检测对第一目标对象执行的第二操作,其中,第二操作用于指示在虚拟场景中将第二目标对象移动至至少一个第一菜单对象中的目标菜单对象所在的位置;
S4,响应于第二操作在虚拟场景中执行目标处理操作,其中,目标处理操作为目标菜单对象对应的处理操作,至少一个第一菜单对象中的每个第一菜单对象对应一种处理操作。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:检测对第一目标对象执行的第三操作;响应于第三操作在虚拟场景中删除至少一个第一菜单对象。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:响应于第二操作在虚拟场景中为目标菜单对象设置标记,其中,标记用于指示第二目标对象移动至目标菜单对象所在的位置。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:获取检测到第一操作时虚拟场景中当前的目标场景;按照预定的虚拟场景与菜单对象的对应关系在虚拟场景中的第二目标对象的周围生成与目标场景相对应的至少一个第一菜单对象。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:在预定圆周上按照预定间隔生成至少一个第一菜单对象,其中,预定圆周为以第二目标对象所在位置为圆心,预定距离为半径所构成的圆周;在第二目标对象的预定方向上按照预定排列顺序生成至少一个第一菜单对象,其中,预定方向包括以下至少之一:上方、下方、左方、右方,预定排列顺序包括以下至少之一:直线排列顺序、曲线排列顺序。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:在虚拟场景中生成至少一个第二菜单对象,其中,至少一个第二菜单对象为目标菜单对象的下拉菜单对象;将虚拟场景中的第一场景切换至第二场景;将虚拟场景中操作对象的属性设置为目标属性;控制虚拟场景中的操作对象执行目标任务。
可选地,本实施例中的具体示例可以参考上述虚拟场景中的对象处理方法的实施例所描述的示例,本实施例在此不再赘述。
可选地,在本实施例中,上述存储介质可以包括但不限于:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读取的存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。
在本申请所提供的几个实施例中,应该理解到,所揭露的客户端,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互 之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本申请的较佳实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (20)

  1. 一种用于虚拟现实场景中的手型显示方法,其特征在于,用于虚拟现实主机中,所述方法包括:
    显示第一手型对象,所述第一手型对象包括有沿手指前方延伸且隐藏显示的射线,所述第一手型对象是指虚拟手未握持虚拟物品或未指示待握持的虚拟物品时的姿势动画,所述虚拟物品是所述虚拟手能拿起、握持、放下的虚拟物品;
    当根据输入设备发送的输入信号确定所述射线与所述虚拟物品相交时,显示第二手型对象,所述第二手型对象包括有沿手指前方延伸且显式显示的射线;
    当接收到选择指令时,显示第三手型对象,所述第三手型对象是握持所述虚拟物品的手型对象。
  2. 根据权利要求1所述的方法,其特征在于,所述当根据输入设备发送的输入信号确定所述射线与所述虚拟物品相交时,显示第二手型对象之前,还包括:
    接收输入设备发送的输入信号,所述输入信号是根据所述第一手型对象对应的真实手在现实环境中的运动情况所采集的信号;
    根据所述输入信号确定所述第一手型对象在虚拟环境中的手型位置;
    根据所述手型位置确定所述射线在所述虚拟环境中的射线位置;
    检测所述射线位置是否与所述虚拟物品在所述虚拟环境中的物品位置相重叠;
    若所述射线位置与所述物品位置相重叠,则确定所述射线与所述虚拟物品相交。
  3. 根据权利要求1所述的方法,其特征在于,所述显示第一手型对象之后,还包括:
    当所述第一手型对象与所述虚拟物品相交且接收到所述选择指令时,显示所述第三手型对象。
  4. 根据权利要求3所述的方法,其特征在于,所述当所述第一手型对象与所述虚拟物品相交且接收到所述选择指令时,显示所述第三手型对象之前,还包括:
    接收输入设备发送的输入信号,所述输入信号是根据所述第一手型对象对应的真实手在现实环境中的运动情况所采集的信号;
    根据所述输入信号确定所述第一手型对象在虚拟环境中的手型位置;
    检测所述手型位置是否与所述虚拟物品在所述虚拟环境中的物品位置相重叠;
    若所述手型位置与所述物品位置相重叠,则确定所述第一手型对象与所述虚拟物品相交。
  5. 根据权利要求1至4任一所述的方法,其特征在于,所述显示所述第三手型对象,包括:
    根据所述虚拟物品的类型,显示与所述类型对应的第三手型对象。
  6. 根据权利要求1至4任一所述的方法,其特征在于,所述方法还包括:
    当根据输入设备发送的输入信号确定所述射线与所述虚拟物品相交时,以预设显示方式 显示所述虚拟物品;
    其中,所述预设显示方式区别于所述虚拟物品的原始显示方式。
  7. 根据权利要求1至4任一所述的方法,其特征在于,所述显示所述第三手型对象之后,还包括:
    当接收到放置指令时,显示所述第一手型对象。
  8. 根据权利要求1所述的方法,其特征在于,所述显示第一手型对象之后,还包括:
    当接收到预定按键对应的按压指令时,显示第四手型对象,所述第四手型对象是与所述预定按键对应的手指呈弯曲状态的手型对象。
  9. 根据权利要求8所述的方法,其特征在于,所述显示第四手型对象之后,还包括:
    当接收到所述预定按键对应的松开指令时,显示第五手型对象,所述第五手型对象是与所述预定按键对应的手指呈舒展状态的手型对象。
  10. 一种用于虚拟现实场景中的手型显示装置,其特征在于,所述装置包括:
    第一显示模块,用于显示第一手型对象,所述第一手型对象包括有沿手指前方延伸且隐藏显示的射线,所述第一手型对象是指虚拟手未握持虚拟物品或未指示待握持的虚拟物品时的姿势动画,所述虚拟物品是所述虚拟手能拿起、握持、放下的虚拟物品;
    第二显示模块,用于当根据输入设备发送的输入信号确定所述射线与所述虚拟物品相交时,显示第二手型对象,所述第二手型对象包括有沿手指前方延伸且显式显示的射线;
    第三显示模块,用于当接收到选择指令时,显示第三手型对象,所述第三手型对象是握持所述虚拟物品的手型对象。
  11. 根据权利要求10所述的装置,其特征在于,所述装置还包括:
    第一接收模块,用于接收输入设备发送的输入信号,所述输入信号是根据所述第一手型对象对应的真实手在现实环境中的运动情况所采集的信号;
    第一确定模块,用于根据所述输入信号确定所述第一手型对象在虚拟环境中的手型位置;
    第二确定模块,用于根据所述手型位置确定所述射线在所述虚拟环境中的射线位置;
    第一检测模块,用于检测所述射线位置是否与所述虚拟物品在所述虚拟环境中的物品位置相重叠;
    第三确定模块,用于当所述射线位置与所述物品位置相重叠时,确定所述射线与所述虚拟物品相交。
  12. 根据权利要求10所述的装置,其特征在于,所述第三显示模块,还用于:
    在显示第一手型对象之后,当所述第一手型对象与所述虚拟物品相交且接收到所述选择指令时,显示所述第三手型对象。
  13. 根据权利要求12所述的装置,其特征在于,所述装置还包括:
    第二接收模块,用于当所述第一手型对象与所述虚拟物品相交且接收到所述选择指令时,在显示所述第三手型对象之前,接收输入设备发送的输入信号,所述输入信号是根据所述第一手型对象对应的真实手在现实环境中的运动情况所采集的信号;
    第四确定模块,用于根据所述输入信号确定所述第一手型对象在虚拟环境中的手型位置;
    第二检测模块,用于检测所述手型位置是否与所述虚拟物品在所述虚拟环境中的物品位置相重叠;
    第五确定模块,用于当所述手型位置与所述物品位置相重叠时,确定所述第一手型对象与所述虚拟物品相交。
  14. 根据权利要求10至13任一所述的装置,其特征在于,所述第三显示模块,还用于:
    根据所述虚拟物品的类型,显示与所述类型对应的第三手型对象。
  15. 根据权利要求10至13任一所述的装置,其特征在于,所述装置还包括:
    第四显示模块,用于当根据输入设备发送的输入信号确定所述射线与所述虚拟物品相交时,以预设显示方式显示所述虚拟物品。
  16. 根据权利要求10至13任一所述的装置,其特征在于,所述第一显示模块,还用于:
    当接收到放置指令时,显示所述第一手型对象。
  17. 根据权利要求10所述的装置,其特征在于,所述装置还包括:
    第五显示模块,用于当接收到预定按键对应的按压指令时,显示第四手型对象,所述第四手型对象是与所述预定按键对应的手指呈弯曲状态的手型对象。
  18. 根据权利要求17所述的装置,其特征在于,所述装置还包括:
    第六显示模块,用于当接收到所述预定按键对应的松开指令时,显示第五手型对象,所述第五手型对象是与所述预定按键对应的手指呈舒展状态的手型对象。
  19. 一种虚拟现实主机，所述虚拟现实主机包括处理器和存储器，所述存储器中存储有至少一条指令，所述至少一条指令由所述处理器加载并执行以实现权利要求1至9任一所述的用于虚拟现实场景中的手型显示方法。
  20. 一种计算机可读存储介质,所述存储介质中存储有至少一条指令,所述至少一条指令由所述处理器加载并执行以实现权利要求1至9任一所述的用于虚拟现实场景中的手型显示方法。
PCT/CN2018/081258 2017-04-25 2018-03-30 用于虚拟现实场景中的手型显示方法及装置 WO2018196552A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/509,038 US11194400B2 (en) 2017-04-25 2019-07-11 Gesture display method and apparatus for virtual reality scene

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201710278577.2A CN107132917B (zh) 2017-04-25 2017-04-25 用于虚拟现实场景中的手型显示方法及装置
CN201710278577.2 2017-04-25
CN201710292385.7 2017-04-26
CN201710292385.7A CN107168530A (zh) 2017-04-26 2017-04-26 虚拟场景中的对象处理方法和装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/509,038 Continuation US11194400B2 (en) 2017-04-25 2019-07-11 Gesture display method and apparatus for virtual reality scene

Publications (1)

Publication Number Publication Date
WO2018196552A1 true WO2018196552A1 (zh) 2018-11-01

Family

ID=63919436

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/081258 WO2018196552A1 (zh) 2017-04-25 2018-03-30 用于虚拟现实场景中的手型显示方法及装置

Country Status (2)

Country Link
US (1) US11194400B2 (zh)
WO (1) WO2018196552A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6924285B2 (ja) * 2018-01-30 2021-08-25 株式会社ソニー・インタラクティブエンタテインメント 情報処理装置
US10922895B2 (en) * 2018-05-04 2021-02-16 Microsoft Technology Licensing, Llc Projection of content libraries in three-dimensional environment
US11335068B2 (en) * 2020-07-27 2022-05-17 Facebook Technologies, Llc. Systems and methods for user interaction with artificial reality environments
WO2022049608A1 (en) * 2020-09-01 2022-03-10 Italdesign-Giugiaro S.P.A. Immersive virtual reality system
CN112462937B (zh) * 2020-11-23 2022-11-08 青岛小鸟看看科技有限公司 虚拟现实设备的局部透视方法、装置及虚拟现实设备
US20230135974A1 (en) * 2021-11-04 2023-05-04 Microsoft Technology Licensing, Llc Multi-factor intention determination for augmented reality (ar) environment control
CN114089836B (zh) * 2022-01-20 2023-02-28 中兴通讯股份有限公司 标注方法、终端、服务器和存储介质
GB2620631A (en) * 2022-07-15 2024-01-17 Sony Interactive Entertainment Inc An information processing apparatus, method, computer program and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104656878A (zh) * 2013-11-19 2015-05-27 华为技术有限公司 手势识别方法、装置及系统
CN105353873A (zh) * 2015-11-02 2016-02-24 深圳奥比中光科技有限公司 基于三维显示的手势操控方法和系统
CN105912110A (zh) * 2016-04-06 2016-08-31 北京锤子数码科技有限公司 一种在虚拟现实空间中进行目标选择的方法、装置及系统
CN107132917A (zh) * 2017-04-25 2017-09-05 腾讯科技(深圳)有限公司 用于虚拟现实场景中的手型显示方法及装置

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9542010B2 (en) 2009-09-15 2017-01-10 Palo Alto Research Center Incorporated System for interacting with objects in a virtual environment
US8994718B2 (en) 2010-12-21 2015-03-31 Microsoft Technology Licensing, Llc Skeletal control of three-dimensional virtual world
US20130229345A1 (en) * 2012-03-01 2013-09-05 Laura E. Day Manual Manipulation of Onscreen Objects
US20140375539A1 (en) * 2013-06-19 2014-12-25 Thaddeus Gabara Method and Apparatus for a Virtual Keyboard Plane
US10509865B2 (en) * 2014-09-18 2019-12-17 Google Llc Dress form for three-dimensional drawing inside virtual reality environment
US9864461B2 (en) * 2014-09-26 2018-01-09 Sensel, Inc. Systems and methods for manipulating a virtual environment
US10088971B2 (en) * 2014-12-10 2018-10-02 Microsoft Technology Licensing, Llc Natural user interface camera calibration
CN105867599A (zh) 2015-08-17 2016-08-17 乐视致新电子科技(天津)有限公司 一种手势操控方法及装置
CN105975072A (zh) 2016-04-29 2016-09-28 乐视控股(北京)有限公司 识别手势动作的方法、装置及系统
US10019131B2 (en) * 2016-05-10 2018-07-10 Google Llc Two-handed object manipulations in virtual reality
US10509487B2 (en) * 2016-05-11 2019-12-17 Google Llc Combining gyromouse input and touch input for navigation in an augmented and/or virtual reality environment
CN106020633A (zh) 2016-05-27 2016-10-12 网易(杭州)网络有限公司 交互控制方法及装置
CN106249882B (zh) 2016-07-26 2022-07-12 华为技术有限公司 一种应用于vr设备的手势操控方法与装置
CN106445118B (zh) 2016-09-06 2019-05-17 网易(杭州)网络有限公司 虚拟现实交互方法及装置
CN106527702B (zh) 2016-11-03 2019-09-03 网易(杭州)网络有限公司 虚拟现实的交互方法及装置
US10782793B2 (en) * 2017-08-10 2020-09-22 Google Llc Context-sensitive hand interaction

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104656878A (zh) * 2013-11-19 2015-05-27 华为技术有限公司 手势识别方法、装置及系统
CN105353873A (zh) * 2015-11-02 2016-02-24 深圳奥比中光科技有限公司 基于三维显示的手势操控方法和系统
CN105912110A (zh) * 2016-04-06 2016-08-31 北京锤子数码科技有限公司 一种在虚拟现实空间中进行目标选择的方法、装置及系统
CN107132917A (zh) * 2017-04-25 2017-09-05 腾讯科技(深圳)有限公司 用于虚拟现实场景中的手型显示方法及装置

Also Published As

Publication number Publication date
US20190332182A1 (en) 2019-10-31
US11194400B2 (en) 2021-12-07

Similar Documents

Publication Publication Date Title
WO2018196552A1 (zh) 用于虚拟现实场景中的手型显示方法及装置
JP6982215B2 (ja) 検出された手入力に基づく仮想手ポーズのレンダリング
CN107132917B (zh) 用于虚拟现实场景中的手型显示方法及装置
JP6644035B2 (ja) 手袋インタフェースオブジェクト及び方法
JP7256283B2 (ja) 情報処理方法、処理装置、電子機器及び記憶媒体
US9933851B2 (en) Systems and methods for interacting with virtual objects using sensory feedback
US10317997B2 (en) Selection of optimally positioned sensors in a glove interface object
CN108646997A (zh) 一种虚拟及增强现实设备与其他无线设备进行交互的方法
CN107890664A (zh) 信息处理方法及装置、存储介质、电子设备
CN107168530A (zh) 虚拟场景中的对象处理方法和装置
CN112783328A (zh) 提供虚拟空间的方法、提供虚拟体验的方法、程序以及记录介质
WO2016189372A2 (en) Methods and apparatus for human centric "hyper ui for devices"architecture that could serve as an integration point with multiple target/endpoints (devices) and related methods/system with dynamic context aware gesture input towards a "modular" universal controller platform and input device virtualization
CN113892074A (zh) 用于人工现实系统的手臂凝视驱动的用户界面元素选通
KR102021851B1 (ko) 가상현실 환경에서의 사용자와 객체 간 상호 작용 처리 방법
CN105324736A (zh) 触摸式和非触摸式用户交互输入的技术
CN107930114A (zh) 信息处理方法及装置、存储介质、电子设备
CN113841110A (zh) 具有用于选通用户界面元素的个人助理元素的人工现实系统
CN113892075A (zh) 用于人工现实系统的拐角识别手势驱动的用户界面元素选通
US11169605B2 (en) Operating method for wearable device interacting with operated device in virtual reality and operating device thereof
US20220355188A1 (en) Game program, game method, and terminal device
CN113467625A (zh) 虚拟现实的控制设备、头盔和交互方法
Olwal et al. Consigalo: multi-user face-to-face interaction on immaterial displays
CN108803862A (zh) 用于虚拟现实场景中的账号关系建立方法及装置
KR101962464B1 (ko) 손동작 매크로 기능을 이용하여 다중 메뉴 및 기능 제어를 위한 제스처 인식 장치
Davidson An evaluation of visual gesture based controls for exploring three dimensional environments

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18791489

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18791489

Country of ref document: EP

Kind code of ref document: A1