WO2022105919A1 - Partial see-through method and apparatus for virtual reality device, and virtual reality device - Google Patents

Partial see-through method and apparatus for virtual reality device, and virtual reality device Download PDF

Info

Publication number
WO2022105919A1
WO2022105919A1 (PCT/CN2021/132144)
Authority
WO
WIPO (PCT)
Prior art keywords
hand
user
display area
partial perspective
virtual reality
Prior art date
Application number
PCT/CN2021/132144
Other languages
English (en)
French (fr)
Inventor
张明
Original Assignee
青岛小鸟看看科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 青岛小鸟看看科技有限公司
Priority to EP21894066.6A (EP4227773A4)
Publication of WO2022105919A1
Priority to US17/817,792 (US11861071B2)
Priority to US18/501,696 (US20240061516A1)

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • G06V40/113Recognition of static hand signs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Definitions

  • the present application relates to the technical field of virtual reality, and in particular to a partial see-through method and apparatus for a virtual reality device, and a virtual reality device.
  • VR glasses are a fully immersive experience device, so while wearing them the user cannot perceive information from the outside world. In actual use, however, users often need to briefly attend to the outside world, for example to drink water, operate a mobile phone, or talk with someone.
  • VR glasses in the prior art generally provide a camera-based see-through function. However, the current see-through function mostly switches the entire field of view to see-through display: the user can no longer see any content of the virtual scene, and only the external real-world scene remains in front of him. This all-or-nothing switching greatly degrades the user experience, especially in scenarios such as watching a movie.
  • the main purpose of the present application is to provide a partial see-through method and apparatus for a virtual reality device, and a virtual reality device, so as to solve the technical problem that the see-through methods of existing virtual reality devices lead to a poor user experience.
  • a partial see-through method for a virtual reality device is provided, including: recognizing a hand motion of a user; if the user's hand motion satisfies a preset trigger action, triggering the partial see-through function of the virtual reality device; and, under the partial see-through function, determining a partial see-through display area in the virtual scene according to the position of the user's hand motion, so as to display the real scene in the partial see-through display area.
  • a partial see-through apparatus for a virtual reality device is provided, comprising:
  • a hand motion recognition unit for recognizing the user's hand motion;
  • a partial see-through function triggering unit configured to trigger the partial see-through function of the virtual reality device if the user's hand motion satisfies a preset trigger action; and
  • a partial see-through display area determination unit configured to, under the partial see-through function, determine a partial see-through display area in the virtual scene according to the position of the user's hand motion, so as to display the real scene in the partial see-through display area.
  • a virtual reality device is provided, comprising a processor and a memory storing computer-executable instructions which, when executed by the processor, implement the aforementioned partial see-through method of the virtual reality device.
  • a computer-readable storage medium is also provided, which stores one or more programs that, when executed by a processor, implement the aforementioned partial see-through method of the virtual reality device.
  • the partial see-through method of the embodiments of the present application first recognizes the user's hand motion and compares it with a preset trigger action, so as to determine whether the user wants to trigger the partial see-through function of the virtual reality device. If the user's hand motion matches the preset trigger action, the partial see-through function is triggered. Under this function, the range of the area to be displayed in see-through mode within the virtual scene is then determined according to the position corresponding to the user's hand motion, so that the user can carry out the corresponding operation in the real scene through the partial see-through display area without interrupting the immersive experience of the virtual scene.
  • because the partial see-through method of the virtual reality device can determine the range of the see-through display area from the user's hand motion, it is applicable to far more usage scenarios than the traditional global see-through solution and can greatly improve the user experience.
  • FIG. 1 is a flowchart of a partial see-through method for a virtual reality device according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of a virtual reality device in a worn state according to an embodiment of the present application;
  • FIG. 3 is a schematic diagram of a partial see-through display area in a virtual scene according to an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a one-hand trigger action according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram of a circular see-through display area in a virtual scene according to an embodiment of the present application;
  • FIG. 6 is a schematic diagram of a two-hand trigger action according to an embodiment of the present application;
  • FIG. 7 is a schematic diagram of a triangular see-through display area in a virtual scene according to an embodiment of the present application;
  • FIG. 8 is a schematic diagram of a preset closing action according to an embodiment of the present application;
  • FIG. 9 is a block diagram of a partial see-through apparatus of a virtual reality device according to an embodiment of the present application;
  • FIG. 10 is a schematic structural diagram of a virtual reality device according to an embodiment of the present application.
  • Virtual reality technology is a computer simulation system that can create and let users experience virtual worlds. It uses a computer to generate a simulated environment and immerses the user in that environment. Virtual reality technology combines data from real life with electronic signals generated by computer technology and, through various output devices, turns them into phenomena that people can perceive; these phenomena may be real objects in reality, or objects that cannot be seen with the naked eye but are represented by three-dimensional models.
  • the virtual reality device in this application may refer to VR glasses. VR glasses use a head-mounted display device to shut off the user's vision and hearing from the outside world and guide the user to feel present in a virtual environment. Their display principle is that the left-eye and right-eye screens display the images for the left and right eyes respectively; after the human eyes obtain this information with binocular disparity, a three-dimensional impression is produced in the mind. For convenience, the following description takes VR glasses as a specific application example of a virtual reality device.
  • referring to FIG. 1, the partial see-through method for a virtual reality device according to an embodiment of the present application includes the following steps S110 to S130:
  • Step S110: recognizing the hand motion of the user.
  • when performing partial see-through display on a virtual reality device, the user's hand motion can be recognized first. As shown in FIG. 2, existing VR glasses are generally equipped with a binocular camera at the outer front end of the glasses, which collects external environment information and captures the user's posture and motion information, such as hand motion information.
  • in existing virtual reality applications, computer vision technology is usually used for hand motion recognition, and the recognition results are often used for hand-motion-based user interface operations or motion-sensing games. In the embodiments of the present application, the information collected by the camera built into existing VR glasses can likewise be used to recognize the user's hand motion, so that partial see-through display is performed according to the hand motion.
  • it should be noted that, besides the above binocular camera, a monocular camera or another type of camera can also be used to collect hand motion information; the specific type of camera can be set flexibly by those skilled in the art according to actual needs, and is not specifically limited here.
  • when hand motion recognition is performed with computer vision technology, the following approach can be used: first, design the hand motion features and the hand motion model, extract features from hand motion samples, and train the model until a hand motion model is established. On this basis, a new hand motion image is collected by the binocular camera and preprocessed; hand motion segmentation is then performed on the image to accurately extract the hand region, after which hand motion features are extracted; finally, the previously established hand motion model is used to classify and recognize the input hand motion.
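  • purely as an illustration (the patent does not fix any particular features or model), the pipeline just described — segment the hand, extract features, classify against a trained model — might be sketched as follows in Python; the skin-tone threshold, the toy shape descriptor, and the nearest-centroid classifier are all hypothetical stand-ins chosen for brevity:

        import numpy as np

        def segment_hand(frame):
            # crude RGB threshold standing in for a real hand-segmentation step
            mask = (frame[..., 0] > 90) & (frame[..., 1] > 40) & (frame[..., 2] > 20)
            return mask.astype(np.float32)

        def extract_features(mask):
            # toy descriptor: filled-area ratio, normalized centroid, bbox aspect
            ys, xs = np.nonzero(mask)
            if len(xs) == 0:
                return np.zeros(4)
            w = xs.max() - xs.min() + 1
            h = ys.max() - ys.min() + 1
            return np.array([mask.mean(), xs.mean() / mask.shape[1],
                             ys.mean() / mask.shape[0], w / h])

        class NearestCentroidGesture:
            # stand-in "hand motion model": one feature centroid per gesture class
            def fit(self, samples, labels):
                feats = [extract_features(segment_hand(s)) for s in samples]
                buckets = {}
                for f, l in zip(feats, labels):
                    buckets.setdefault(l, []).append(f)
                self.centroids = {l: np.mean(fs, axis=0) for l, fs in buckets.items()}
                return self

            def predict(self, frame):
                # classify a new, preprocessed camera frame by nearest centroid
                f = extract_features(segment_hand(frame))
                return min(self.centroids,
                           key=lambda l: np.linalg.norm(f - self.centroids[l]))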
  • the above recognition of the user's hand motion may be performed in real time so as to respond to the user's needs promptly; to save device power, it may instead be performed once every preset time interval. The specific recognition frequency can be set flexibly by those skilled in the art according to actual needs, and is not specifically limited here.
  • Step S120: if the user's hand motion satisfies a preset trigger action, triggering the partial see-through function of the virtual reality device.
  • after the user's hand motion is obtained, it must be determined whether the motion is intended to trigger the partial see-through function of the VR glasses; the recognized hand motion is therefore matched against the preset trigger action, and if the match succeeds, the partial see-through function of the VR glasses is triggered.
  • the type of the preset trigger action can be set flexibly by those skilled in the art according to actual needs, which is not specifically limited here.
  • it should be noted that "triggering the partial see-through function of the virtual reality device" in this step can be understood to mean that only the partial see-through function has been triggered: the VR glasses have not actually entered the see-through state, i.e., the user still cannot see the real scene, and the subsequent steps are needed to determine the partial see-through display area in the virtual scene.
  • alternatively, it can be understood that the VR glasses have already entered the see-through state and the user can currently see the real scene, but in order to avoid affecting the user's immersive experience too much, the partial see-through display area in the virtual scene can be re-determined through the subsequent steps.
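  • as a sketch of the control flow of step S120 only (the gesture labels and class names here are invented for illustration, not taken from the patent), matching the recognized motion against the preset trigger actions could look like:

        PRESET_TRIGGERS = {"one_hand_c", "two_hand_triangle"}   # hypothetical labels

        class PassthroughController:
            def __init__(self):
                self.passthrough_enabled = False

            def on_gesture(self, gesture_label):
                # step S120: a recognized gesture that matches a preset trigger
                # action switches on the partial see-through function
                if gesture_label in PRESET_TRIGGERS:
                    self.passthrough_enabled = True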
  • Step S130: under the partial see-through function, determining a partial see-through display area in the virtual scene according to the position of the user's hand motion, so as to display the real scene in the partial see-through display area.
  • when determining the partial see-through display area in the virtual scene, the position of the hand motion can be determined from the user's hand motion obtained in the above steps, and the partial see-through display area is then determined according to that specific position. As shown in FIG. 3, the user can see the real scene through the partial see-through display area, while for the remaining part outside it the user still sees the virtual scene.
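  • the patent does not spell out how the real scene is composited into the virtual frame; one plausible per-pixel reading, shown only as a sketch, is to blend the camera image into the rendered frame wherever a see-through mask is set:

        import numpy as np

        def composite(virtual, camera, mask):
            # mask is 1.0 inside the partial see-through area, 0.0 outside;
            # the camera feed replaces the virtual scene only inside the mask
            m = mask[..., None]                      # broadcast over RGB channels
            return (m * camera + (1.0 - m) * virtual).astype(virtual.dtype)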
  • the partial see-through method of the embodiments of the present application can thus determine the range of the see-through display area from the user's hand motion; compared with the traditional global see-through solution, it is applicable to far more usage scenarios and can greatly improve the user experience.
  • in an embodiment of the present application, the preset trigger action includes a one-hand trigger action, and under the partial see-through function, determining the partial see-through display area in the virtual scene according to the position of the user's hand motion includes: if the user's one-hand motion satisfies the one-hand trigger action, determining the positions of the index finger and thumb of that hand; and generating a circular see-through display area in the virtual scene according to the positions of the index finger and thumb.
  • the preset trigger action in this embodiment of the present application may be a one-hand trigger action. FIG. 4 provides a schematic diagram of a one-hand trigger action: the palm of the user's hand curves inward with the thumb opposing the other four fingers, forming a roughly "C"-shaped gesture.
  • in order to subsequently generate a more precise partial see-through display area, if the recognized hand motion satisfies the above one-hand trigger action, the positions of the index finger and thumb of that hand can be further determined; then, according to these positions, a circular see-through display area as shown in FIG. 5 is formed between the index finger and the thumb.
  • for example, when the user wants to use a mobile phone or pick up a cup of water, the above circular see-through display area shows the real scene captured by the camera on the VR glasses; the user can operate the phone or pick up the cup through this area, and the circular see-through display area can move along with the movement of the user's hand.
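  • one straightforward way to realize the circular area (an assumption, since the patent does not fix the geometry) is to center the circle between the tracked index and thumb tips and use half their separation as the radius, recomputing it every frame so the region follows the hand:

        import numpy as np

        def circle_from_pinch(index_tip, thumb_tip):
            # index_tip / thumb_tip: (x, y) fingertip positions in screen space
            a, b = np.asarray(index_tip, float), np.asarray(thumb_tip, float)
            center = (a + b) / 2.0
            radius = np.linalg.norm(a - b) / 2.0
            return center, radius

        def circle_mask(shape, center, radius):
            # boolean see-through mask for a frame of size shape = (height, width)
            ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
            return (((xs - center[0]) ** 2 + (ys - center[1]) ** 2)
                    <= radius ** 2).astype(np.float32)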
  • in an embodiment of the present application, the preset trigger action includes a two-hand trigger action, and under the partial see-through function, determining the partial see-through display area in the virtual scene according to the position of the user's hand motion includes: if the user's two-hand motion satisfies the two-hand trigger action, determining the positions of the two index fingers and the two thumbs of the user's hands; and generating a triangular see-through display area in the virtual scene according to the positions of the two index fingers and the two thumbs.
  • the preset trigger action in this embodiment of the present application may also be a two-hand trigger action. FIG. 6 provides a schematic diagram of a two-hand trigger action: the user's left thumb touches the right thumb and the left index finger touches the right index finger, all lying in the same plane, while the remaining fingers may be bent in or stretched out, so that the area enclosed by the left thumb, right thumb, left index finger and right index finger is a triangular area.
  • in order to subsequently generate a more precise partial see-through display area, if the recognized two-hand motion satisfies the above two-hand trigger action, the positions of the two index fingers and two thumbs can be further determined; then, according to these positions, a triangular see-through display area as shown in FIG. 7 is formed between them.
  • for example, when the user needs to look for an item, a see-through display over a larger range may be required. The triangular see-through display area shows the real scene captured by the camera on the VR glasses, and as the user's hands move apart toward the two sides, the range of the triangular see-through display area gradually increases, making it easier for the user to find the item in time.
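  • again as a sketch under stated assumptions (reading FIG. 6 as giving the touching index fingertips and the two thumb tips as the three corners), the triangular mask can be rasterized with the standard edge-function test; because the vertices are re-read from hand tracking each frame, the masked area grows automatically as the hands move apart:

        import numpy as np

        def triangle_mask(shape, v0, v1, v2):
            # v0, v1, v2: (x, y) triangle vertices in screen space
            ys, xs = np.mgrid[0:shape[0], 0:shape[1]]

            def edge(a, b):
                # signed-area test: which side of edge a->b each pixel lies on
                return (b[0] - a[0]) * (ys - a[1]) - (b[1] - a[1]) * (xs - a[0])

            d0, d1, d2 = edge(v0, v1), edge(v1, v2), edge(v2, v0)
            inside = ((d0 >= 0) & (d1 >= 0) & (d2 >= 0)) | \
                     ((d0 <= 0) & (d1 <= 0) & (d2 <= 0))
            return inside.astype(np.float32)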
  • in an embodiment of the present application, besides the two trigger actions listed above for determining the partial see-through display area in the virtual scene, other trigger actions can also be set flexibly according to actual needs. For example, the user can draw a trajectory of a set shape in front of his eyes, and the area enclosed by the trajectory can be regarded as the area the user wants to display in see-through mode. If the trajectory drawn by the user is a square, the area enclosed by the square trajectory can be displayed in see-through mode in the virtual scene formed by the VR glasses.
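  • for the drawn-trajectory variant, a minimal sketch (assuming the fingertip trail arrives as a list of screen-space points) is to close the trail into a polygon and rasterize it with the classic even-odd ray-casting test:

        import numpy as np

        def polygon_mask(shape, trail):
            # trail: list of (x, y) points sampled while the user draws the shape
            ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
            inside = np.zeros(shape, dtype=bool)
            n = len(trail)
            for i in range(n):
                x0, y0 = trail[i]
                x1, y1 = trail[(i + 1) % n]          # closes the trajectory
                if y0 == y1:
                    continue                          # horizontal edges cast no crossing
                crosses = ((y0 > ys) != (y1 > ys)) & \
                          (xs < (x1 - x0) * (ys - y0) / (y1 - y0) + x0)
                inside ^= crosses                     # even-odd rule
            return inside.astype(np.float32)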
  • in an embodiment of the present application, in order to prevent the user from triggering the partial see-through display function of the VR glasses by mistake, a more complex trigger condition can further be set. For example, the duration of the user's recognized trigger action can be counted, and only if it exceeds a preset time threshold is the user considered to want to trigger the partial see-through display function of the VR glasses. Alternatively, the number of times the user performs the trigger action can be counted, and only if it reaches a preset number of executions is the function triggered. How to configure the trigger conditions of the partial see-through function can be set flexibly by those skilled in the art according to the actual situation, and the options are not listed one by one here.
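  • a hold-duration or repetition-count condition of the kind described above might be sketched as a small debounce gate (the thresholds are illustrative, not values from the patent); the same gate works unchanged for the closing action discussed further below:

        import time

        class TriggerGate:
            # fire only after the gesture has been held for hold_s seconds
            # or has been performed `repeats` separate times
            def __init__(self, hold_s=1.0, repeats=2):
                self.hold_s, self.repeats = hold_s, repeats
                self.held_since = None
                self.count = 0

            def update(self, gesture_active):
                now = time.monotonic()
                if not gesture_active:
                    self.held_since = None            # gesture released
                    return False
                if self.held_since is None:           # rising edge: one more execution
                    self.held_since = now
                    self.count += 1
                return (now - self.held_since >= self.hold_s
                        or self.count >= self.repeats)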
  • in an embodiment of the present application, the method further includes: determining whether the position of the user's hand motion has changed; and if it has, updating the partial see-through display area in the virtual scene according to the changed position of the user's hand motion.
  • in an actual application scenario, the user's hand position may change in real time. When the hand position changes considerably, if the partial see-through display area is still determined according to the hand position before the change, the area may no longer match the user's hand, i.e., the user may not be able to see the desired object through the area, or may see only part of it. The embodiments of the present application can therefore detect position changes of the user's hand motion in real time, and when a change is detected, re-determine the partial see-through display area according to the changed position of the user's hand motion.
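  • the re-determination step can be as simple as comparing the tracked hand position against the one used to build the current mask and rebuilding only on a real change; the pixel threshold below is an illustrative anti-jitter choice, not something the patent specifies:

        import numpy as np

        def maybe_update_region(prev_pos, new_pos, rebuild_mask, moved_px=8.0):
            # rebuild_mask: caller-supplied function deriving the see-through
            # mask (circle, triangle, polygon, ...) from a hand position
            if prev_pos is None or np.linalg.norm(np.asarray(new_pos) -
                                                  np.asarray(prev_pos)) > moved_px:
                return np.asarray(new_pos), rebuild_mask(new_pos)
            return prev_pos, None                     # keep the current mask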
  • in an embodiment of the present application, the method further includes: if the user's hand motion satisfies a preset closing action, turning off the partial see-through function of the virtual reality device.
  • in an actual application scenario, the user's need for the partial see-through display function of the VR glasses may only be temporary, for example briefly answering the phone or drinking a glass of water. Therefore, to ensure that the user can quickly return from the partial see-through state to the immersive experience of the virtual scene, the method can also detect whether the user makes a hand motion intended to turn off the partial see-through function of the VR glasses; if the user's hand motion matches the preset closing action, the partial see-through display function of the VR glasses is turned off.
  • FIG. 8 provides a schematic diagram of a preset closing action: the user can turn off the partial see-through display function of the VR glasses by making fists with both hands in front of the eyes. Besides the closing action shown in FIG. 8, other closing actions can also be set flexibly according to actual needs, which is not specifically limited here.
  • in an embodiment of the present application, in the same way as the trigger condition, a more complex closing condition can be set in order to prevent the user from turning off the partial see-through display function by mistake. For example, the duration of the recognized closing action can be counted, and only if it exceeds a preset time threshold is the user considered to want to turn off the partial see-through display function of the VR glasses; or the number of times the closing action is performed can be counted, and only if it reaches a preset number of executions is the function turned off. How to configure the closing condition of the partial see-through function can be set flexibly by those skilled in the art according to the actual situation, and the options are not listed one by one here.
  • FIG. 9 shows a block diagram of a partial see-through apparatus of a virtual reality device according to an embodiment of the present application.
  • referring to FIG. 9, the partial see-through apparatus 900 of a virtual reality device includes: a hand motion recognition unit 910, a partial see-through function triggering unit 920, and a partial see-through display area determination unit 930. Specifically:
  • the hand motion recognition unit 910 is configured to recognize the user's hand motion;
  • the partial see-through function triggering unit 920 is configured to trigger the partial see-through function of the virtual reality device if the user's hand motion satisfies the preset trigger action;
  • the partial see-through display area determination unit 930 is configured to determine, under the partial see-through function, the partial see-through display area in the virtual scene according to the position of the user's hand motion, so as to display the real scene in the partial see-through display area.
  • in an embodiment of the present application, the preset trigger action includes a one-hand trigger action, and the partial see-through display area determination unit 930 is specifically configured to: if the user's one-hand motion satisfies the one-hand trigger action, determine the positions of the index finger and thumb of that hand; and generate a circular see-through display area in the virtual scene according to those positions.
  • in an embodiment of the present application, the preset trigger action includes a two-hand trigger action, and the partial see-through display area determination unit 930 is specifically configured to: if the user's two-hand motion satisfies the two-hand trigger action, determine the positions of the two index fingers and two thumbs of the user's hands; and generate a triangular see-through display area in the virtual scene according to those positions.
  • in an embodiment of the present application, the apparatus further includes: a position change determination unit for determining whether the position of the user's hand motion has changed; and a partial see-through display area update unit for updating, if the position of the user's hand motion has changed, the partial see-through display area in the virtual scene according to the changed position.
  • in an embodiment of the present application, the apparatus further includes: a partial see-through function closing unit, configured to turn off the partial see-through function of the virtual reality device if the user's hand motion satisfies a preset closing action.
  • FIG. 10 is a schematic structural diagram of a virtual reality device. Referring to FIG. 10, at the hardware level the virtual reality device includes a memory and a processor, and optionally an interface module, a communication module, and the like. The memory may include internal memory, such as high-speed random-access memory (RAM), and may also include non-volatile memory, such as at least one disk memory.
  • the virtual reality device may also include hardware required by other services.
  • the processor, interface module, communication module and memory can be connected to each other through an internal bus, which can be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one bidirectional arrow is shown in FIG. 10, but this does not mean that there is only one bus or one type of bus.
  • the memory is used for storing computer-executable instructions and provides them to the processor through the internal bus.
  • the processor executes the computer-executable instructions stored in the memory and is specifically configured to implement the following operations: recognizing the user's hand motion; if the user's hand motion satisfies a preset trigger action, triggering the partial see-through function of the virtual reality device; and, under the partial see-through function, determining the partial see-through display area in the virtual scene according to the position of the user's hand motion, so as to display the real scene in the partial see-through display area.
  • the functions performed by the partial see-through apparatus of the virtual reality device disclosed in the embodiment shown in FIG. 9 of the present application can be applied to, or implemented by, the processor.
  • the processor may be an integrated circuit chip with signal processing capability.
  • in implementation, each step of the above method can be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software.
  • the above processor can be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it can also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and can implement or execute the methods, steps and logical block diagrams disclosed in the embodiments of the present application.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor.
  • the software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the virtual reality device can also perform the steps performed by the partial see-through method of FIG. 1 and realize the functions of the partial see-through method in the embodiment shown in FIG. 1, which will not be repeated here.
  • the embodiments of the present application also provide a computer-readable storage medium storing one or more programs which, when executed by a processor, implement the aforementioned partial see-through method of the virtual reality device, and are specifically used to execute: recognizing the user's hand motion; if the user's hand motion satisfies a preset trigger action, triggering the partial see-through function of the virtual reality device; and, under the partial see-through function, determining the partial see-through display area in the virtual scene according to the position of the user's hand motion, so as to display the real scene in the partial see-through display area.
  • the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • in a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • Memory may include non-persistent storage in computer-readable media, in the form of random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • as defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Optics & Photonics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application discloses a partial see-through method and apparatus for a virtual reality device, and a virtual reality device. The method includes: recognizing a hand motion of a user; if the user's hand motion satisfies a preset trigger action, triggering the partial see-through function of the virtual reality device; and, under the partial see-through function, determining a partial see-through display area in a virtual scene according to the position of the user's hand motion, so as to display the real scene in the partial see-through display area. The partial see-through method of the present application can determine the range of the area to be displayed in see-through mode from the user's hand motion; compared with the traditional global see-through solution, it is applicable to far more usage scenarios and can greatly improve the user experience.

Description

Partial see-through method and apparatus for virtual reality device, and virtual reality device
Technical Field
The present application relates to the technical field of virtual reality, and in particular to a partial see-through method and apparatus for a virtual reality device, and a virtual reality device.
Background
Virtual reality glasses ("VR glasses" for short) are a fully immersive experience device, so while wearing them the user cannot perceive information from the outside world. In actual use, however, users often need to briefly attend to the outside world, for example to drink water, operate a mobile phone, or talk with someone.
VR glasses in the prior art generally provide a camera-based see-through function. However, the current see-through function mostly switches the entire field of view to see-through display: in see-through mode the user can no longer see any content of the virtual scene, and only the external real-world scene remains in front of him. This all-or-nothing switching greatly degrades the user experience, especially in scenarios such as watching a movie.
Summary of the Invention
In view of this, the main purpose of the present application is to provide a partial see-through method and apparatus for a virtual reality device, and a virtual reality device, so as to solve the technical problem that the see-through methods of existing virtual reality devices lead to a poor user experience.
According to a first aspect of the present application, a partial see-through method for a virtual reality device is provided, including:
recognizing a hand motion of a user;
if the user's hand motion satisfies a preset trigger action, triggering the partial see-through function of the virtual reality device;
under the partial see-through function, determining a partial see-through display area in a virtual scene according to the position of the user's hand motion, so as to display the real scene in the partial see-through display area.
According to a second aspect of the present application, a partial see-through apparatus for a virtual reality device is provided, including:
a hand motion recognition unit for recognizing the user's hand motion;
a partial see-through function triggering unit for triggering the partial see-through function of the virtual reality device if the user's hand motion satisfies a preset trigger action;
a partial see-through display area determination unit for determining, under the partial see-through function, a partial see-through display area in the virtual scene according to the position of the user's hand motion, so as to display the real scene in the partial see-through display area.
According to a third aspect of the present application, a virtual reality device is provided, including: a processor, and a memory storing computer-executable instructions,
wherein the executable instructions, when executed by the processor, implement the aforementioned partial see-through method of the virtual reality device.
According to a fourth aspect of the present application, a computer-readable storage medium is provided, which stores one or more programs that, when executed by a processor, implement the aforementioned partial see-through method of the virtual reality device.
At least one of the above technical solutions adopted by the present application can achieve the following beneficial effects:
The partial see-through method of the embodiments of the present application first recognizes the user's hand motion and then compares it with a preset trigger action, so as to determine whether the user wants to trigger the partial see-through function of the virtual reality device. If the user's hand motion matches the preset trigger action, the partial see-through function of the virtual reality device is triggered. Under this function, the range of the area to be displayed in see-through mode within the virtual scene can then be determined according to the position corresponding to the user's hand motion, so that the user can carry out the corresponding operation in the real scene through the partial see-through display area without interrupting the immersive experience of the virtual scene. The partial see-through method of the embodiments of the present application can determine the range of the see-through display area from the user's hand motion; compared with the traditional global see-through solution, it is applicable to far more usage scenarios and can greatly improve the user experience.
Brief Description of the Drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be regarded as limiting the present application. Throughout the drawings, the same reference symbols denote the same components. In the drawings:
FIG. 1 is a flowchart of a partial see-through method for a virtual reality device according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a virtual reality device in a worn state according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a partial see-through display area in a virtual scene according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a one-hand trigger action according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a circular see-through display area in a virtual scene according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a two-hand trigger action according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a triangular see-through display area in a virtual scene according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a preset closing action according to an embodiment of the present application;
FIG. 9 is a block diagram of a partial see-through apparatus of a virtual reality device according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a virtual reality device according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. These embodiments are provided so that the present application can be understood more thoroughly and its scope can be fully conveyed to those skilled in the art. Although exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application can be implemented in various forms and should not be limited by the embodiments set forth herein.
Virtual reality technology is a computer simulation system that can create and let users experience virtual worlds. It uses a computer to generate a simulated environment and immerses the user in that environment. Virtual reality technology combines data from real life with electronic signals generated by computer technology and, through various output devices, turns them into phenomena that people can perceive; these phenomena may be real objects in reality, or objects that cannot be seen with the naked eye but are represented by three-dimensional models.
The virtual reality device in the present application may refer to VR glasses. VR glasses use a head-mounted display device to shut off the user's vision and hearing from the outside world and guide the user to feel present in a virtual environment. Their display principle is that the left-eye and right-eye screens display the images for the left and right eyes respectively; after the human eyes obtain this information with binocular disparity, a three-dimensional impression is produced in the mind. For convenience of description, the following takes VR glasses as a specific application example of a virtual reality device.
FIG. 1 shows a schematic flowchart of a partial see-through method for a virtual reality device according to an embodiment of the present application. Referring to FIG. 1, the partial see-through method of the embodiment of the present application includes the following steps S110 to S130:
Step S110: recognizing a hand motion of the user.
When performing partial see-through display on a virtual reality device, the user's hand motion can be recognized first. As shown in FIG. 2, existing VR glasses are generally equipped with a binocular camera at the outer front end of the glasses, which collects external environment information and captures the user's posture and motion information, such as hand motion information. In existing virtual reality application scenarios, computer vision technology is usually used for hand motion recognition, and the recognition results are often used for hand-motion-based user interface operations or motion-sensing games. In the embodiments of the present application, the information collected by the camera built into existing VR glasses can likewise be used to recognize the user's hand motion, so that partial see-through display is performed according to the hand motion.
It should be noted that, besides the above binocular camera, a monocular camera or another type of camera can also be used to collect hand motion information; the specific type of camera can be set flexibly by those skilled in the art according to actual needs, and is not specifically limited here.
When hand motion recognition is performed with computer vision technology, the following approach can be used: first, design the hand motion features and the hand motion model, extract features from hand motion samples, and train the model until a hand motion model is established. On this basis, a new hand motion image is collected by the binocular camera and preprocessed; hand motion segmentation is then performed on the image to accurately extract the hand region, after which hand motion features are extracted; finally, the previously established hand motion model is used to classify and recognize the input hand motion.
Of course, besides the above recognition method, those skilled in the art may also choose other ways to perform hand motion recognition according to actual needs, which is not specifically limited here.
In addition, the above recognition of the user's hand motion may be performed in real time so as to respond to the user's needs promptly; to save device power, it may instead be performed once every preset time interval. The specific recognition frequency can be set flexibly by those skilled in the art according to actual needs, and is not specifically limited here.
Step S120: if the user's hand motion satisfies a preset trigger action, triggering the partial see-through function of the virtual reality device.
After the user's hand motion is obtained, it must be further determined whether the motion is intended to trigger the partial see-through function of the VR glasses. The recognized hand motion can therefore be matched against the preset trigger action; if the match succeeds, the partial see-through function of the VR glasses is triggered. The type of the preset trigger action can be set flexibly by those skilled in the art according to actual needs, and is not specifically limited here.
It should be noted that "triggering the partial see-through function of the virtual reality device" in this step can be understood to mean that only the partial see-through function has been triggered: the VR glasses have not actually entered the see-through state, i.e., the user still cannot see the real scene, and the subsequent steps are needed to determine the partial see-through display area in the virtual scene. Alternatively, it can be understood that the VR glasses have already entered the see-through state and the user can currently see the real scene, but in order to avoid affecting the user's immersive experience too much, the partial see-through display area in the virtual scene can be re-determined through the subsequent steps.
Step S130: under the partial see-through function, determining a partial see-through display area in the virtual scene according to the position of the user's hand motion, so as to display the real scene in the partial see-through display area.
When determining the partial see-through display area in the virtual scene, the position of the hand motion can be determined from the user's hand motion obtained in the above steps, and the partial see-through display area is then determined according to that specific position. As shown in FIG. 3, the user can see the real scene through the partial see-through display area, while for the remaining part outside the partial see-through area the user still sees the virtual scene.
The partial see-through method of the embodiments of the present application can determine the range of the see-through display area from the user's hand motion; compared with the traditional global see-through solution, it is applicable to far more usage scenarios and can greatly improve the user experience.
In an embodiment of the present application, the preset trigger action includes a one-hand trigger action, and under the partial see-through function, determining the partial see-through display area in the virtual scene according to the position of the user's hand motion includes: if the user's one-hand motion satisfies the one-hand trigger action, determining the positions of the index finger and thumb of the user's one-hand motion; and generating a circular see-through display area in the virtual scene according to the positions of the index finger and thumb of the one-hand motion.
The preset trigger action of the embodiment of the present application may be a one-hand trigger action. FIG. 4 provides a schematic diagram of a one-hand trigger action: the palm of the user's hand curves inward with the thumb opposing the other four fingers, forming a roughly "C"-shaped gesture. In order to subsequently generate a more precise partial see-through display area, if the recognized hand motion satisfies the above one-hand trigger action, the positions of the index finger and thumb of that hand can be further determined; then, according to these positions, a circular see-through display area as shown in FIG. 5 is formed between the index finger and the thumb.
For example, when the user wants to use a mobile phone or pick up a cup of water, the above circular see-through display area shows the real scene captured by the camera on the VR glasses; the user can operate the phone or pick up the cup through this area, and the circular see-through display area can move along with the movement of the user's hand.
In an embodiment of the present application, the preset trigger action includes a two-hand trigger action, and under the partial see-through function, determining the partial see-through display area in the virtual scene according to the position of the user's hand motion includes: if the user's two-hand motion satisfies the two-hand trigger action, determining the positions of the two index fingers and the two thumbs of the user's two-hand motion; and generating a triangular see-through display area in the virtual scene according to the positions of the two index fingers and the two thumbs.
The preset trigger action of the embodiment of the present application may also be a two-hand trigger action. FIG. 6 provides a schematic diagram of a two-hand trigger action: the user's left thumb touches the right thumb and the left index finger touches the right index finger, all lying in the same plane, while the remaining fingers may be bent in or stretched out, so that the area enclosed by the left thumb, right thumb, left index finger and right index finger is a triangular area. In order to subsequently generate a more precise partial see-through display area, if the recognized two-hand motion satisfies the above two-hand trigger action, the positions of the two index fingers and two thumbs can be further determined; then, according to these positions, a triangular see-through display area as shown in FIG. 7 is formed between them.
For example, when the user needs to look for an item, a see-through display over a larger range may be required. The above triangular see-through display area shows the real scene captured by the camera on the VR glasses, and as the user's hands move apart toward the two sides, the range of the triangular see-through display area gradually increases, making it easier for the user to find the item in time.
In an embodiment of the present application, besides the two trigger actions listed above for determining the partial see-through display area in the virtual scene, other trigger actions can also be set flexibly according to actual needs. For example, the user can draw a trajectory of a set shape in front of his eyes, and the area enclosed by the trajectory can be regarded as the area the user wants to display in see-through mode. If the trajectory drawn by the user is a square, the area enclosed by the square trajectory can be displayed in see-through mode in the virtual scene formed by the VR glasses.
In an embodiment of the present application, in order to prevent the user from triggering the partial see-through display function of the VR glasses by mistake, a more complex trigger condition can further be set on top of the preset trigger action. For example, the duration of the user's recognized trigger action can be counted, and only if it exceeds a preset time threshold is the user considered to want to trigger the partial see-through display function of the VR glasses. Alternatively, the number of times the user performs the trigger action can be counted, and only if it reaches a preset number of executions is the user considered to want to trigger the function. How to configure the trigger conditions of the partial see-through function can be set flexibly by those skilled in the art according to the actual situation, and the options are not listed one by one here.
In an embodiment of the present application, the method further includes: determining whether the position of the user's hand motion has changed; and if it has, updating the partial see-through display area in the virtual scene according to the changed position of the user's hand motion.
In an actual application scenario, the user's hand position may change in real time. When the hand position changes considerably, if the partial see-through display area is still determined according to the hand position before the change, the area may no longer match the user's hand, i.e., the user may not be able to see the desired object through the area, or may see only part of it. The embodiments of the present application can therefore detect position changes of the user's hand motion in real time, and when a change is detected, re-determine the partial see-through display area according to the changed position of the user's hand motion.
In an embodiment of the present application, the method further includes: if the user's hand motion satisfies a preset closing action, turning off the partial see-through function of the virtual reality device.
In an actual application scenario, the user's need for the partial see-through display function of the VR glasses may only be temporary, for example briefly answering the phone or drinking a glass of water. Therefore, to ensure that the user can quickly return from the partial see-through state to the immersive experience of the virtual scene, the method can also detect whether the user makes a hand motion intended to turn off the partial see-through function of the VR glasses; if the user's hand motion matches the preset closing action, the partial see-through display function of the VR glasses is turned off.
FIG. 8 provides a schematic diagram of a preset closing action: the user can turn off the partial see-through display function of the VR glasses by making fists with both hands in front of the eyes. Of course, besides the preset closing action shown in FIG. 8, other closing actions can also be set flexibly according to actual needs, which is not specifically limited here.
In an embodiment of the present application, in the same way as the trigger condition of the partial see-through display function, a more complex closing condition can be set in order to prevent the user from turning off the function by mistake. For example, the duration of the recognized closing action can be counted, and only if it exceeds a preset time threshold is the user considered to want to turn off the partial see-through display function of the VR glasses; or the number of times the closing action is performed can be counted, and only if it reaches a preset number of executions is the function turned off. How to configure the closing condition of the partial see-through function can be set flexibly by those skilled in the art according to the actual situation, and the options are not listed one by one here.
Belonging to the same technical concept as the foregoing partial see-through method of a virtual reality device, an embodiment of the present application further provides a partial see-through apparatus for a virtual reality device. FIG. 9 shows a block diagram of a partial see-through apparatus of a virtual reality device according to an embodiment of the present application. Referring to FIG. 9, the partial see-through apparatus 900 of a virtual reality device includes: a hand motion recognition unit 910, a partial see-through function triggering unit 920, and a partial see-through display area determination unit 930. Specifically:
the hand motion recognition unit 910 is configured to recognize the user's hand motion;
the partial see-through function triggering unit 920 is configured to trigger the partial see-through function of the virtual reality device if the user's hand motion satisfies the preset trigger action;
the partial see-through display area determination unit 930 is configured to determine, under the partial see-through function, the partial see-through display area in the virtual scene according to the position of the user's hand motion, so as to display the real scene in the partial see-through display area.
In an embodiment of the present application, the preset trigger action includes a one-hand trigger action, and the partial see-through display area determination unit 930 is specifically configured to: if the user's one-hand motion satisfies the one-hand trigger action, determine the positions of the index finger and thumb of the user's one-hand motion; and generate a circular see-through display area in the virtual scene according to the positions of the index finger and thumb of the one-hand motion.
In an embodiment of the present application, the preset trigger action includes a two-hand trigger action, and the partial see-through display area determination unit 930 is specifically configured to: if the user's two-hand motion satisfies the two-hand trigger action, determine the positions of the two index fingers and two thumbs of the user's two-hand motion; and generate a triangular see-through display area in the virtual scene according to the positions of the two index fingers and two thumbs of the two-hand motion.
In an embodiment of the present application, the apparatus further includes: a position change determination unit for determining whether the position of the user's hand motion has changed; and a partial see-through display area update unit for updating, if the position of the user's hand motion has changed, the partial see-through display area in the virtual scene according to the changed position of the user's hand motion.
In an embodiment of the present application, the apparatus further includes: a partial see-through function closing unit for turning off the partial see-through function of the virtual reality device if the user's hand motion satisfies a preset closing action.
FIG. 10 is a schematic structural diagram of a virtual reality device. Referring to FIG. 10, at the hardware level the virtual reality device includes a memory and a processor, and optionally an interface module, a communication module, and the like. The memory may include internal memory, such as high-speed random-access memory (RAM), and may also include non-volatile memory, such as at least one disk memory. Of course, the virtual reality device may also include hardware required by other services.
The processor, the interface module, the communication module, and the memory can be connected to each other through an internal bus, which can be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one bidirectional arrow is used in FIG. 10, but this does not mean that there is only one bus or one type of bus.
The memory is used for storing computer-executable instructions and provides them to the processor through the internal bus.
The processor executes the computer-executable instructions stored in the memory and is specifically configured to implement the following operations:
recognizing the user's hand motion;
if the user's hand motion satisfies a preset trigger action, triggering the partial see-through function of the virtual reality device;
under the partial see-through function, determining the partial see-through display area in the virtual scene according to the position of the user's hand motion, so as to display the real scene in the partial see-through display area.
The functions performed by the partial see-through apparatus of the virtual reality device disclosed in the embodiment shown in FIG. 9 of the present application can be applied to, or implemented by, the processor. The processor may be an integrated circuit chip with signal processing capability. In implementation, each step of the above method can be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The above processor can be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it can also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The virtual reality device can also perform the steps performed by the partial see-through method of FIG. 1 and realize the functions of the partial see-through method in the embodiment shown in FIG. 1, which will not be repeated here in the embodiments of the present application.
An embodiment of the present application also provides a computer-readable storage medium storing one or more programs which, when executed by a processor, implement the aforementioned partial see-through method of the virtual reality device, and are specifically used to execute:
recognizing the user's hand motion;
if the user's hand motion satisfies a preset trigger action, triggering the partial see-through function of the virtual reality device;
under the partial see-through function, determining the partial see-through display area in the virtual scene according to the position of the user's hand motion, so as to display the real scene in the partial see-through display area.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent storage in computer-readable media, in the form of random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. Information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The above are only embodiments of the present application and are not intended to limit the present application. For those skilled in the art, the present application may have various changes and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (11)

  1. A partial see-through method for a virtual reality device, characterized by comprising:
    recognizing a hand motion of a user;
    if the user's hand motion satisfies a preset trigger action, triggering the partial see-through function of the virtual reality device;
    under the partial see-through function, determining a partial see-through display area in a virtual scene according to the position of the user's hand motion, so as to display the real scene in the partial see-through display area.
  2. The method according to claim 1, characterized in that the preset trigger action comprises a one-hand trigger action, and determining, under the partial see-through function, the partial see-through display area in the virtual scene according to the position of the user's hand motion comprises:
    if the user's one-hand motion satisfies the one-hand trigger action, determining the positions of the index finger and thumb of the user's one-hand motion;
    generating a circular see-through display area in the virtual scene according to the positions of the index finger and thumb of the one-hand motion.
  3. The method according to claim 1, characterized in that the preset trigger action comprises a two-hand trigger action, and determining, under the partial see-through function, the partial see-through display area in the virtual scene according to the position of the user's hand motion comprises:
    if the user's two-hand motion satisfies the two-hand trigger action, determining the positions of the two index fingers and two thumbs of the user's two-hand motion;
    generating a triangular see-through display area in the virtual scene according to the positions of the two index fingers and two thumbs of the two-hand motion.
  4. The method according to claim 1, characterized in that the method further comprises:
    determining whether the position of the user's hand motion has changed;
    if it has changed, updating the partial see-through display area in the virtual scene according to the changed position of the user's hand motion.
  5. The method according to claim 1, characterized in that the method further comprises:
    if the user's hand motion satisfies a preset closing action, turning off the partial see-through function of the virtual reality device.
  6. A partial see-through apparatus for a virtual reality device, characterized by comprising:
    a hand motion recognition unit for recognizing a hand motion of a user;
    a partial see-through function triggering unit for triggering the partial see-through function of the virtual reality device if the user's hand motion satisfies a preset trigger action;
    a partial see-through display area determination unit for determining, under the partial see-through function, a partial see-through display area in a virtual scene according to the position of the user's hand motion, so as to display the real scene in the partial see-through display area.
  7. The apparatus according to claim 6, characterized in that the preset trigger action comprises a one-hand trigger action, and the partial see-through display area determination unit is specifically configured to:
    if the user's one-hand motion satisfies the one-hand trigger action, determine the positions of the index finger and thumb of the user's one-hand motion;
    generate a circular see-through display area in the virtual scene according to the positions of the index finger and thumb of the one-hand motion.
  8. The apparatus according to claim 6, characterized in that the preset trigger action comprises a two-hand trigger action, and the partial see-through display area determination unit is specifically configured to:
    if the user's two-hand motion satisfies the two-hand trigger action, determine the positions of the two index fingers and two thumbs of the user's two-hand motion;
    generate a triangular see-through display area in the virtual scene according to the positions of the two index fingers and two thumbs of the two-hand motion.
  9. The apparatus according to claim 6, characterized in that the apparatus further comprises:
    a position change determination unit for determining whether the position of the user's hand motion has changed;
    a partial see-through display area update unit for updating, if the position of the user's hand motion has changed, the partial see-through display area in the virtual scene according to the changed position of the user's hand motion.
  10. The apparatus according to claim 6, characterized in that the apparatus further comprises:
    a partial see-through function closing unit for turning off the partial see-through function of the virtual reality device if the user's hand motion satisfies a preset closing action.
  11. A virtual reality device, characterized by comprising: a processor, and a memory storing computer-executable instructions,
    wherein the executable instructions, when executed by the processor, implement the partial see-through method for a virtual reality device according to any one of claims 1 to 5.
PCT/CN2021/132144 2020-11-23 2021-11-22 Partial see-through method and apparatus for virtual reality device, and virtual reality device WO2022105919A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP21894066.6A EP4227773A4 (en) 2020-11-23 2021-11-22 LOCAL VIEWING METHOD AND APPARATUS FOR VIRTUAL REALITY DEVICE AND VIRTUAL REALITY DEVICE
US17/817,792 US11861071B2 (en) 2020-11-23 2022-08-05 Local perspective method and device of virtual reality equipment and virtual reality equipment
US18/501,696 US20240061516A1 (en) 2020-11-23 2023-11-03 Local perspective method and device of virtual reality equipment and virtual reality equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011319048.0A 2020-11-23 2020-11-23 Partial see-through method and apparatus for virtual reality device, and virtual reality device
CN202011319048.0 2020-11-23

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/817,792 Continuation US11861071B2 (en) 2020-11-23 2022-08-05 Local perspective method and device of virtual reality equipment and virtual reality equipment

Publications (1)

Publication Number Publication Date
WO2022105919A1 (zh)

Family

ID=74798187

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/132144 WO2022105919A1 (zh) 2020-11-23 2021-11-22 Partial see-through method and apparatus for virtual reality device, and virtual reality device

Country Status (4)

Country Link
US (2) US11861071B2 (zh)
EP (1) EP4227773A4 (zh)
CN (1) CN112462937B (zh)
WO (1) WO2022105919A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112462937B (zh) * 2020-11-23 2022-11-08 青岛小鸟看看科技有限公司 Partial see-through method and apparatus for virtual reality device, and virtual reality device
CN113282166A (zh) * 2021-05-08 2021-08-20 青岛小鸟看看科技有限公司 Interaction method and apparatus for head-mounted display device, and head-mounted display device
CN115661942B (zh) * 2022-12-15 2023-06-27 广州卓远虚拟现实科技有限公司 Virtual-reality-based motion data processing method and system, and cloud platform

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105324738A (zh) * 2013-06-07 2016-02-10 索尼电脑娱乐公司 Switching operation modes in a head-mounted display
US20160054565A1 (en) * 2013-03-29 2016-02-25 Sony Corporation Information processing device, presentation state control method, and program
CN108700936A (zh) * 2016-03-29 2018-10-23 谷歌有限责任公司 Pass-through camera user interface elements for virtual reality
US20190320138A1 (en) * 2018-04-12 2019-10-17 Microsoft Technology Licensing, Llc Real-world awareness for virtual reality
CN112445341A (zh) * 2020-11-23 2021-03-05 青岛小鸟看看科技有限公司 Keyboard see-through method and apparatus for virtual reality device, and virtual reality device
CN112462937A (zh) * 2020-11-23 2021-03-09 青岛小鸟看看科技有限公司 Partial see-through method and apparatus for virtual reality device, and virtual reality device

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060017654A1 (en) * 2004-07-23 2006-01-26 Romo Justin R Virtual reality interactivity system and method
EP2996017B1 (en) * 2014-09-11 2022-05-11 Nokia Technologies Oy Method, apparatus and computer program for displaying an image of a physical keyboard on a head mountable display
US9953216B2 (en) * 2015-01-13 2018-04-24 Google Llc Systems and methods for performing actions in response to user gestures in captured images
US10656720B1 (en) * 2015-01-16 2020-05-19 Ultrahaptics IP Two Limited Mode switching for integrated gestural interaction and multi-user collaboration in immersive virtual reality environments
CN106484085B (zh) * 2015-08-31 2019-07-23 北京三星通信技术研究有限公司 Method for displaying a real object in a head-mounted display, and head-mounted display thereof
US10338673B2 (en) * 2015-09-16 2019-07-02 Google Llc Touchscreen hover detection in an augmented and/or virtual reality environment
US20170256096A1 (en) * 2016-03-07 2017-09-07 Google Inc. Intelligent object sizing and placement in a augmented / virtual reality environment
EP3327544B1 (en) * 2016-11-25 2021-06-23 Nokia Technologies Oy Apparatus, associated method and associated computer readable medium
CN106845335B (zh) * 2016-11-29 2020-03-17 歌尔科技有限公司 Gesture recognition method and apparatus for virtual reality device, and virtual reality device
WO2018196552A1 (zh) * 2017-04-25 2018-11-01 腾讯科技(深圳)有限公司 Hand shape display method and apparatus for use in a virtual reality scene
US10445935B2 (en) * 2017-05-26 2019-10-15 Microsoft Technology Licensing, Llc Using tracking to simulate direct tablet interaction in mixed reality
CN107272207A (zh) * 2017-07-20 2017-10-20 苏州普露信息咨询有限公司 VR glasses with a see-through function
CN108646997A (zh) * 2018-05-14 2018-10-12 刘智勇 Method for interaction between a virtual/augmented reality device and other wireless devices
US10600246B2 (en) * 2018-06-15 2020-03-24 Microsoft Technology Licensing, Llc Pinning virtual reality passthrough regions to real-world locations
US10747371B1 (en) * 2019-06-28 2020-08-18 Konica Minolta Business Solutions U.S.A., Inc. Detection of finger press from live video stream
US20210004146A1 (en) * 2019-07-01 2021-01-07 Microsoft Technology Licensing, Llc Virtual dial control
US10802600B1 (en) * 2019-09-20 2020-10-13 Facebook Technologies, Llc Virtual interactions at a distance
WO2021087450A1 (en) * 2019-11-01 2021-05-06 Raxium, Inc. Light field displays incorporating eye trackers and methods for generating views for a light field display using eye tracking information
US11113891B2 (en) * 2020-01-27 2021-09-07 Facebook Technologies, Llc Systems, methods, and media for displaying real-time visualization of physical environment in artificial reality
CN111708432B (zh) * 2020-05-21 2023-08-25 青岛小鸟看看科技有限公司 Safety area determination method and apparatus, head-mounted display device, and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160054565A1 (en) * 2013-03-29 2016-02-25 Sony Corporation Information processing device, presentation state control method, and program
CN105324738A (zh) * 2013-06-07 2016-02-10 索尼电脑娱乐公司 Switching operation modes in a head-mounted display
CN108700936A (zh) * 2016-03-29 2018-10-23 谷歌有限责任公司 Pass-through camera user interface elements for virtual reality
US20190320138A1 (en) * 2018-04-12 2019-10-17 Microsoft Technology Licensing, Llc Real-world awareness for virtual reality
CN112445341A (zh) * 2020-11-23 2021-03-05 青岛小鸟看看科技有限公司 Keyboard see-through method and apparatus for virtual reality device, and virtual reality device
CN112462937A (zh) * 2020-11-23 2021-03-09 青岛小鸟看看科技有限公司 Partial see-through method and apparatus for virtual reality device, and virtual reality device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4227773A4 *

Also Published As

Publication number Publication date
US20240061516A1 (en) 2024-02-22
CN112462937A (zh) 2021-03-09
EP4227773A1 (en) 2023-08-16
EP4227773A4 (en) 2024-04-24
CN112462937B (zh) 2022-11-08
US20220382380A1 (en) 2022-12-01
US11861071B2 (en) 2024-01-02

Similar Documents

Publication Publication Date Title
WO2022105919A1 (zh) Partial see-through method and apparatus for virtual reality device, and virtual reality device
CN110581947B (zh) Taking photographs in virtual reality
RU2714096C1 (ru) Method, equipment and electronic device for detecting face liveness
US11947729B2 (en) Gesture recognition method and device, gesture control method and device and virtual reality apparatus
WO2022105677A1 (zh) Keyboard see-through method and apparatus for virtual reality device, and virtual reality device
US9805516B2 (en) 3D holographic virtual object display controlling method based on human-eye tracking
WO2022237268A1 (zh) Information input method and apparatus for head-mounted display device, and head-mounted display device
TWI687901B (zh) Safety monitoring method and apparatus for virtual reality device, and virtual reality device
TW202006595A (zh) Face recognition method and terminal device
WO2020024692A1 (zh) Human-computer interaction method and apparatus
CN108345848A (zh) User gaze direction recognition method and related products
CN105430269B (zh) Photographing method and apparatus applied to a mobile terminal
TWI752473B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
WO2021196718A1 (zh) Keypoint detection method and apparatus, electronic device, storage medium, and computer program
CN105306819A (zh) Gesture-controlled photographing method and apparatus
CN112541450A (zh) Context-aware function control method and related apparatus
CN112286364A (zh) Human-computer interaction method and apparatus
WO2021056450A1 (zh) Image template updating method, device, and storage medium
CN109961452A (zh) Photo processing method and apparatus, storage medium, and electronic device
TWI544367B (zh) Gesture recognition and control method and apparatus
WO2020001016A1 (zh) Moving image generation method and apparatus, electronic device, and computer-readable storage medium
CN111986229A (zh) Video object detection method, apparatus, and computer system
CN110363136A (zh) Method, apparatus, electronic device, and medium for recognizing eye setting features
CN110866508B (zh) Method, apparatus, terminal, and storage medium for recognizing the form of a target object
CN114281236A (zh) Text processing method, apparatus, device, medium, and program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21894066

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021894066

Country of ref document: EP

Effective date: 20230512

NENP Non-entry into the national phase

Ref country code: DE