WO2018076912A1 - Virtual scene adjustment method and head-mounted smart device - Google Patents

Virtual scene adjustment method and head-mounted smart device

Info

Publication number
WO2018076912A1
Authority
WO
WIPO (PCT)
Prior art keywords
gesture
virtual scene
data
adjustment
adjustment instruction
Prior art date
Application number
PCT/CN2017/098793
Other languages
English (en)
French (fr)
Inventor
刘哲
Original Assignee
捷开通讯(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 捷开通讯(深圳)有限公司
Publication of WO2018076912A1 publication Critical patent/WO2018076912A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Definitions

  • The present invention relates to the field of electronics, and in particular to a virtual scene adjustment method and a head-mounted smart device.
  • In current Virtual Reality (VR) product experiences, the biggest problem troubling users and developers is motion sickness. Motion sickness occurs because of sensory conflict: the user moves in virtual reality but does not move in the real world. Its main sources are movement within virtual reality and screen switching; limited by technical constraints, users cannot reproduce virtual-reality movement in the real world, so the problem of motion sickness is widespread.
  • Existing solutions either use hardware to track the user's whole-body motion precisely and quickly, reducing the mismatch between actions in virtual reality and actions in the real world, or use software to provide a comfortable virtual reality interaction experience. These methods all suffer from high cost, strong limitations, and poor results.
  • The technical problem to be solved by the present invention is to provide a virtual scene adjustment method and a head-mounted smart device that can reduce the probability of motion sickness occurring and its impact.
  • The first technical solution adopted by the present invention is to provide a head-mounted smart device, comprising: a head collection module for collecting head motion data; a scene adjustment module for generating a first virtual scene adjustment instruction according to the head motion data to adjust a virtual scene; a gesture collection module for collecting gesture image data; and a gesture recognition module for performing gesture recognition on the gesture image data to obtain gesture posture data. The scene adjustment module is further configured, when the gesture posture data meets a preset criterion, to generate a second virtual scene adjustment instruction from the gesture posture data and adjust the virtual scene with it in place of the first virtual scene adjustment instruction.
  • The head-mounted smart device further includes a speed setting module for setting, according to user input, the speed at which the second virtual scene adjustment instruction adjusts the virtual scene; the scene adjustment module further includes a scene adjustment unit for adjusting the virtual scene at the set speed according to the second virtual scene adjustment instruction; and the head-mounted smart device further includes a gesture self-learning module for performing gesture self-learning on the gesture image data, so as to make a pre-judgment when the gesture image data is incomplete and acquire the gesture posture data.
  • The second technical solution adopted by the present invention is to provide a virtual scene adjustment method, including: collecting head motion data; generating a first virtual scene adjustment instruction according to the head motion data to adjust a virtual scene; collecting gesture image data; performing gesture recognition on the gesture image data to obtain gesture posture data; and, when the gesture posture data meets a preset criterion, generating a second virtual scene adjustment instruction from the gesture posture data and adjusting the virtual scene with it in place of the first virtual scene adjustment instruction.
  • The third technical solution adopted by the present invention is to provide a head-mounted smart device comprising a processor, an inertial sensor, and a binocular camera, the inertial sensor and the binocular camera being connected to the processor through a bus. The inertial sensor collects head motion data; the processor generates a first virtual scene adjustment instruction according to the head motion data to adjust a virtual scene; the binocular camera collects gesture image data; and the processor further performs gesture recognition on the gesture image data to obtain gesture posture data and, when the gesture posture data meets a preset criterion, generates a second virtual scene adjustment instruction from it and adjusts the virtual scene with it in place of the first virtual scene adjustment instruction.
  • The benefit of the present invention is that, when the gesture posture data meets the preset criterion, a second virtual scene adjustment instruction is generated from the gesture posture data and replaces the first virtual scene adjustment instruction in adjusting the virtual scene, so that head motion and hand motion meeting the preset criterion adjust the virtual scene separately. This prevents the pronounced sensory conflict caused by head motion and hand motion adjusting the virtual scene at the same time, reducing the probability of motion sickness occurring and its impact; moreover, the adjustment speed of the virtual scene can be controlled by the moving speed of the gesture key nodes, so the scene can be adjusted at a speed the user is comfortable with, further reducing the probability of motion sickness.
  • FIG. 1 is a flowchart of a first embodiment of the virtual scene adjustment method of the present invention;
  • FIG. 2 is a flowchart of a second embodiment of the virtual scene adjustment method of the present invention;
  • FIG. 3 is a flowchart of a third embodiment of the virtual scene adjustment method of the present invention;
  • FIG. 4 is a schematic structural diagram of a first embodiment of the head-mounted smart device of the present invention;
  • FIG. 5 is a schematic structural diagram of a second embodiment of the head-mounted smart device of the present invention;
  • FIG. 6 is a schematic structural diagram of a third embodiment of the head-mounted smart device of the present invention;
  • FIG. 7 is a schematic structural diagram of a fourth embodiment of the head-mounted smart device of the present invention.
  • FIG. 1 is a flowchart of a first embodiment of the virtual scene adjustment method of the present invention. As shown in FIG. 1, the virtual scene adjustment method of the present invention includes:
  • Step S101: Collect head motion data.
  • The head motion data includes at least the angle of head rotation and the change in relative position.
  • In one application example, the head motion data comes from an inertial sensor deployed inside the head-mounted smart device. An inertial sensor is a sensitive device that applies inertial principles and measurement techniques to sense the acceleration, position, and attitude of a moving carrier; it includes a gyroscope and an accelerometer. The gyroscope measures the angle and direction of head rotation, and the accelerometer measures the acceleration of head movement, from which the distance the head has moved, i.e., the change in the head's relative position, is calculated.
  • Step S102: Generate a first virtual scene adjustment instruction according to the head motion data to adjust the virtual scene.
  • The first virtual scene adjustment instruction includes at least a view-angle adjustment instruction and a field-of-view adjustment instruction. In the above application example, a first virtual scene view-angle adjustment instruction is generated from the angle of head rotation, and a first virtual scene field-of-view adjustment instruction is generated from the change in the head's relative position; the two instructions then adjust the view angle and the field of view of the virtual scene, respectively.
  • For example, if the inertial sensor detects that the head has rotated 20 degrees to the left and moved 20 cm to the left, a first virtual scene view-angle adjustment instruction to rotate the view 20 degrees to the left and a first virtual scene field-of-view adjustment instruction to move the field of view 20 cm to the left can be generated; the view angle of the virtual scene is then rotated 20 degrees to the left and the field of view moved 20 cm to the left according to these instructions.
  • Step S103: Collect gesture image data.
  • The gesture image data are gesture images captured by a binocular camera deployed on the head-mounted smart device.
  • Step S104: Perform gesture recognition on the gesture image data to obtain gesture posture data.
  • The gesture posture data includes at least the hand shape, the positions of the gesture key nodes, the movement trajectory, and the movement speed.
  • After the binocular camera captures gesture image data at the same instant, the difference between the two images is compared and the depth of the hand from the camera is calculated according to geometric principles; the hand shape, key-node positions, movement trajectory, and movement speed are then recognized using computer vision techniques and a gesture recognition algorithm. The gesture key nodes are the finger roots and fingertips; the gesture recognition algorithm can be chosen according to the required recognition accuracy and is not specifically limited here.
  • Step S105: When the gesture posture data meets the preset criterion, generate a second virtual scene adjustment instruction from the gesture posture data and adjust the virtual scene with it in place of the first virtual scene adjustment instruction.
  • The second virtual scene adjustment instruction controls at least one of the movement, rotation, and zoom of the virtual scene.
  • Preset gesture data for adjusting the virtual scene is set in the head-mounted smart device in advance. When the gesture posture data obtained after gesture recognition meets the preset criterion, i.e., when the matching rate between the gesture posture data and the preset gesture data reaches a preset threshold, a corresponding second virtual scene adjustment instruction can be generated from the gesture posture data and, in place of the first virtual scene adjustment instruction, used to move, rotate, switch, or zoom the virtual scene; the adjustment speed of the virtual scene matches the moving speed of the gesture key nodes in the gesture posture data.
  • The preset gesture data include the hand shapes, key-node positions, and movement trajectories of: rotating the palm; clenching a fist and moving it; pinching or releasing the thumb and index finger of one hand; pinching or releasing the thumbs and index fingers of both hands; and flattening the palm. The type of second virtual scene adjustment instruction associated with each preset gesture is also set in advance: palm rotation is associated with a scene rotation instruction; clenching a fist and moving with a scene movement instruction; a one-hand thumb-and-index-finger pinch or release with a scene shrink or enlarge instruction; a two-hand thumb-and-index-finger pinch or release with a scene "approach" or "recede" instruction; and flattening the palm with an adjustment-end instruction.
  • The user can also customize preset gesture data and the associated second virtual scene adjustment instruction types as required.
  • For example, if the gesture posture data obtained after gesture recognition is a clenched fist moved 10 cm to the left, and the matching rate of its hand shape, key-point positions, and movement trajectory against the preset clench-fist-and-move gesture data reaches 70%, the gesture posture data is judged to meet the preset criterion; a corresponding second virtual scene adjustment instruction is generated from it, namely an instruction to move the virtual scene 10 meters to the left, and this instruction replaces the first virtual scene adjustment instruction and moves the virtual scene 10 meters to the left.
  • After an adjustment-end instruction is generated from the gesture posture data, the virtual scene is again adjusted according to the first virtual scene adjustment instruction generated from the head motion data. Further, while the first virtual scene adjustment instruction adjusts one of the movement, rotation, and zoom of the virtual scene, virtual objects presented in the scene can be controlled according to other control instructions generated from the gesture posture data. For example, a shooting instruction is generated from the gesture posture data, and the virtual object is then changed by animation in the scene according to the shooting instruction to indicate whether it was hit.
  • In this embodiment, when the gesture posture data meets the preset criterion, the second virtual scene adjustment instruction is generated from the gesture posture data and adjusts the virtual scene in place of the first virtual scene adjustment instruction, so that head motion and hand motion meeting the preset criterion adjust the virtual scene separately. This effectively prevents the sensory conflict caused by head motion and hand motion adjusting the scene simultaneously, and the adjustment speed of the scene can be controlled by the moving speed of the gesture key nodes, so the scene is adjusted at a speed the user is comfortable with, further reducing the probability of motion sickness occurring and its impact.
  • In other embodiments, the adjustment speed of the virtual scene may also be set according to user requirements.
  • FIG. 2 is a flowchart of a second embodiment of the virtual scene adjustment method of the present invention. The second embodiment builds on the first embodiment and further includes:
  • Step S201: Set, according to user input, the speed at which the second virtual scene adjustment instruction adjusts the virtual scene.
  • The user may input at least one candidate adjustment speed, by voice or by clicking, to set the speed at which the second virtual scene adjustment instruction adjusts the virtual scene; the adjustment speed may be the ratio of the gesture change speed to the scene change speed in the second virtual scene adjustment instruction. For example, two candidate speeds of 1:1 and 1:2 input by voice mean, respectively, that when the hand moves 1 cm per second the virtual scene moves 1 cm per second, and that when the hand moves 1 cm per second the virtual scene moves 2 cm per second.
  • Step S202: Adjust the virtual scene at the set adjustment speed according to the second virtual scene adjustment instruction.
  • Before step S202, the method includes:
  • Step S2021: Generate virtual scene preview images at the different candidate adjustment speeds according to the second virtual scene adjustment instruction, and present them to the user;
  • Step S2022: Designate the adjustment speed from among the candidate adjustment speeds according to the user's selection of a preview image.
  • For example, if the second virtual scene adjustment instruction moves the virtual scene 10 meters to the right, a preview image is generated at each candidate speed, such as one at the same adjustment speed as the gesture and one at twice that speed, and presented to the user, who selects the preview whose speed suits them. The user can also vary the moving speed of the gesture key nodes by controlling how fast the hand moves, so that together with the set adjustment speed the scene is adjusted at a speed the user is comfortable with, reducing the probability of motion sickness.
  • The steps of this embodiment are performed before step S105, and this embodiment can be combined with the first embodiment of the virtual scene adjustment method of the present invention.
  • FIG. 3 is a flowchart of a third embodiment of the virtual scene adjustment method of the present invention. The third embodiment builds on the first embodiment and further includes:
  • Step S301: Perform gesture self-learning on the gesture image data, so as to make a pre-judgment when the gesture image data is incomplete and acquire the gesture posture data.
  • Gesture self-learning applies machine learning principles to simulate human learning behavior, improving the accuracy of gesture recognition by reorganizing existing gesture image data; when the gesture image data is incomplete, the corresponding gesture posture data can be predicted from the existing gesture image data, improving the intelligence of gesture recognition.
  • After step S301, the method includes:
  • Step S3021: Generate a three-dimensional virtual gesture from the gesture posture data and send gesture confirmation prompt information;
  • In one application example, a three-dimensional virtual gesture is generated from the gesture posture data obtained after recognition, presented to the user, and gesture confirmation prompt information is sent so that the user can confirm whether the recognition is correct.
  • Step S3022: Determine whether gesture confirmation information has been received;
  • Step S3023: If no gesture confirmation information is received, return to the gesture recognition step; otherwise, proceed with the subsequent steps.
  • In the above application example, the user can confirm by voice or by clicking whether the three-dimensional virtual gesture is correct. If it is incorrect, i.e., the gesture recognition is wrong, the method returns for re-recognition; if it is correct, i.e., the recognition is accurate, the subsequent steps continue.
  • The steps of this embodiment are performed after step S104, and this embodiment may be combined with the first embodiment of the virtual scene adjustment method of the present invention.
  • FIG. 4 is a schematic structural diagram of a first embodiment of the head-mounted smart device of the present invention. As shown in FIG. 4, the head-mounted smart device 40 includes a head collection module 401, a scene adjustment module 402, a gesture collection module 403, and a gesture recognition module 404. The gesture collection module 403 is connected to the gesture recognition module 404, and the head collection module 401 and the gesture recognition module 404 are each connected to the scene adjustment module 402.
  • The head collection module 401 collects head motion data, which includes at least the angle of head rotation and the change in relative position, and transmits it to the scene adjustment module 402 to adjust the virtual scene.
  • The scene adjustment module 402 generates a first virtual scene adjustment instruction from the received head motion data to adjust the virtual scene. For example, if the received head motion data indicates that the head turned 10 degrees to the right, the generated first virtual scene adjustment instruction moves the virtual scene 10 degrees to the right, and the scene is adjusted by this instruction.
  • The gesture collection module 403 collects gesture image data.
  • The gesture recognition module 404 performs gesture recognition on the gesture image data to obtain gesture posture data, which includes at least the hand shape, the positions of the gesture key nodes, the movement trajectory, and the movement speed. After the gesture collection module 403 captures gesture image data at the same instant, the gesture recognition module 404 compares the difference between the two images and calculates the depth of the hand from the camera according to geometric principles, then recognizes the hand shape, key-node positions, movement trajectory, and movement speed using computer vision techniques and a gesture recognition algorithm. The gesture key nodes are the finger roots and fingertips; the recognition algorithm can be chosen according to the required accuracy and is not specifically limited here.
  • The scene adjustment module 402 is further configured, when the gesture posture data meets the preset criterion, to generate a second virtual scene adjustment instruction from the gesture posture data and adjust the virtual scene with it in place of the first virtual scene adjustment instruction. The second virtual scene adjustment instruction controls at least one of the movement, rotation, and zoom of the virtual scene.
  • Preset gesture data for adjusting the virtual scene is set in the head-mounted smart device in advance. When the gesture posture data obtained after gesture recognition meets the preset criterion, i.e., when its matching rate with the preset gesture data reaches the preset threshold, the scene adjustment module 402 generates a corresponding second virtual scene adjustment instruction from the gesture posture data and, in place of the first virtual scene adjustment instruction, moves, rotates, switches, or zooms the virtual scene; at this point the first virtual scene adjustment instruction cannot adjust the scene, and the adjustment speed of the scene matches the moving speed of the gesture key nodes in the gesture posture data.
  • After an adjustment-end instruction is generated from the gesture posture data, the scene adjustment module 402 again adjusts the virtual scene according to the first virtual scene adjustment instruction generated from the head motion data. Further, while using the first virtual scene adjustment instruction to adjust one of the movement, rotation, and zoom of the scene, the module can control virtual objects presented in the scene according to other control instructions generated from the gesture posture data; for example, a shooting instruction is generated from the gesture posture data, and the virtual object is animated in the scene according to that instruction to indicate whether it was hit.
  • In this embodiment, when the gesture posture data meets the preset criterion, the head-mounted smart device generates a second virtual scene adjustment instruction from the gesture posture data and adjusts the virtual scene with it in place of the first virtual scene adjustment instruction, so that head motion and hand motion meeting the preset criterion adjust the scene separately. This effectively prevents the sensory conflict caused by head motion and hand motion adjusting the scene simultaneously, and the adjustment speed of the scene can be controlled by the moving speed of the gesture key nodes, so the scene is adjusted at a speed the user is comfortable with, further reducing the probability of motion sickness occurring and its impact.
  • In other embodiments, the head-mounted smart device can also set the adjustment speed of the virtual scene according to user requirements.
  • FIG. 5 is a schematic structural diagram of a second embodiment of the head-mounted smart device of the present invention. The structure of FIG. 5 is similar to that of FIG. 4 and is not repeated here; the difference is that the device further includes a speed setting module 505, and the scene adjustment module 504 further includes a scene adjustment unit 5041.
  • The speed setting module 505 is connected to the gesture recognition module 503 and the scene adjustment module 504 and sets, according to user input, the speed at which the second virtual scene adjustment instruction adjusts the virtual scene. The user may input at least one candidate adjustment speed by voice or by clicking; the adjustment speed is the ratio of the gesture change speed to the scene change speed in the second virtual scene adjustment instruction. For example, a candidate speed of 1:1.5 input by voice means that when the hand moves 1 cm per second, the virtual scene moves 1.5 cm per second.
  • The speed setting module 505 further includes a scene preview unit 5051, which generates virtual scene preview images at the different candidate adjustment speeds according to the second virtual scene adjustment instruction and presents them to the user, and a speed selection unit 5052, which designates the adjustment speed from among the candidate speeds according to the user's selection of a preview image. Specifically, the scene preview unit 5051 generates a preview image of the virtual scene at each candidate speed the user input and presents them to the user; the speed selection unit 5052 receives the preview image the user selects and takes its corresponding speed as the adjustment speed for subsequently adjusting the virtual scene.
  • The scene adjustment unit 5041 adjusts the virtual scene at the set adjustment speed according to the second virtual scene adjustment instruction.
  • In this embodiment, the head-mounted smart device sets, through user input, the speed at which the second virtual scene adjustment instruction adjusts the virtual scene, so the user can choose a speed that suits them. This reduces the sensory conflict between virtual reality and the real world and thus the probability of motion sickness.
  • FIG. 6 is a schematic structural diagram of a third embodiment of the head-mounted smart device of the present invention. FIG. 6 is similar in structure to FIG. 4 and is not repeated here; the difference is that the device further includes a gesture self-learning module 606 and a gesture confirmation prompt module 607, connected in sequence. The gesture self-learning module 606 is also connected to the gesture recognition module 603, and the gesture confirmation prompt module 607 is also connected to the gesture recognition module 603 and the scene adjustment module 604.
  • The gesture self-learning module 606 performs gesture self-learning on the gesture image data, so as to make a pre-judgment when the gesture image data is incomplete and acquire the gesture posture data. The module applies machine learning principles to simulate human learning behavior, improving recognition accuracy by reorganizing existing gesture image data; when the gesture image data is incomplete, it predicts the corresponding gesture posture data from the existing gesture image data, improving the intelligence of gesture recognition.
  • The gesture confirmation prompt module 607 specifically includes a gesture generation unit 6071, which generates a three-dimensional virtual gesture from the gesture posture data and sends gesture confirmation prompt information. In one application example, the gesture generation unit 6071 generates a three-dimensional virtual gesture from the gesture posture data obtained after recognition, presents it to the user, and sends gesture confirmation prompt information so the user can confirm whether the recognition is correct.
  • The gesture confirmation unit 6071 determines whether gesture confirmation information has been received; if not, it returns to the gesture recognition module 603, and if so, it transmits the gesture posture data to the scene adjustment module 604 to adjust the virtual scene. In the above application example, the user confirms by voice or by clicking whether the three-dimensional virtual gesture is correct: if it is incorrect, i.e., the recognition is wrong, the gesture confirmation unit 6071 receives no confirmation information and returns to the gesture recognition module 603 for re-recognition; if it is correct, i.e., the recognition is accurate, the unit receives the confirmation information and transmits the gesture posture data to the scene adjustment module 604 to continue the subsequent steps. The gesture confirmation prompt module 607 adds human-computer interaction and further improves recognition accuracy.
  • FIG. 7 is a schematic structural diagram of a fourth embodiment of the head-mounted smart device of the present invention. As shown in FIG. 7, the head-mounted smart device 70 includes a processor 701, a memory 702, an inertial sensor 703, a binocular camera 704, and a display 705, all interconnected through a bus.
  • The inertial sensor 703 collects head motion data, which includes at least the angle of head rotation and the change in relative position; it senses these values and transmits them to the processor 701 to adjust the virtual scene.
  • The processor 701 generates a first virtual scene adjustment instruction from the received head motion data to adjust the virtual scene. For example, if the received head motion data indicates that the head turned 10 degrees to the right, the generated first virtual scene adjustment instruction moves the virtual scene 10 degrees to the right, and the scene is adjusted by this instruction.
  • The binocular camera 704 captures gesture image data and sends it to the processor 701 for gesture recognition.
  • The processor 701 further performs gesture recognition on the gesture image data to obtain gesture posture data and, when the gesture posture data meets the preset criterion, generates a second virtual scene adjustment instruction from it and adjusts the virtual scene with it in place of the first virtual scene adjustment instruction. The gesture posture data includes at least the hand shape, the positions of the gesture key nodes, the movement trajectory, and the movement speed. After receiving the gesture image data captured by the binocular camera at the same instant, the processor 701 compares the difference between the two images, calculates the depth of the hand from the camera according to geometric principles, and then recognizes the hand shape, key-node positions, movement trajectory, and movement speed using computer vision techniques and a gesture recognition algorithm. The gesture key nodes are the finger roots and fingertips; the recognition algorithm can be chosen according to the required accuracy and is not specifically limited here.
  • The second virtual scene adjustment instruction controls at least one of the movement, rotation, and zoom of the virtual scene. The memory 702 stores preset gesture data for adjusting the virtual scene in advance; when the gesture posture data obtained after recognition meets the preset criterion, i.e., when its matching rate with the preset gesture data reaches the preset threshold, the processor 701 generates a corresponding second virtual scene adjustment instruction from the gesture posture data and, in place of the first virtual scene adjustment instruction, moves, rotates, switches, or zooms the virtual scene. At this point the first virtual scene adjustment instruction cannot adjust the scene, and the adjustment speed of the scene matches the moving speed of the gesture key nodes in the gesture posture data.
  • The display 705 displays the virtual scene, including the scene during adjustment and the adjusted scene.
  • The processor 701 can also execute instructions to implement the method provided by the second or third embodiment of the virtual scene adjustment method of the present invention, or by any non-conflicting combination of the first through third embodiments.
  • In this embodiment, the head-mounted smart device includes a processor, a memory, an inertial sensor, a binocular camera, and a display; in other embodiments, the device may also add other components such as a speaker, a tactile sensor, or a wireless transmission interface according to specific needs, which are not specifically limited here.
  • In the above embodiment, when the gesture posture data meets the preset criterion, the head-mounted smart device generates a second virtual scene adjustment instruction from the gesture posture data and adjusts the virtual scene with it in place of the first virtual scene adjustment instruction, so that head motion and hand motion meeting the preset criterion adjust the scene separately. This effectively prevents the sensory conflict caused by head motion and hand motion adjusting the scene simultaneously, and the adjustment speed of the scene can be controlled by the moving speed of the gesture key nodes, so the scene is adjusted at a speed the user is comfortable with, further reducing the probability of motion sickness occurring and its impact.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A virtual scene adjustment method and a head-mounted smart device. The method includes: collecting head motion data (S101); generating a first virtual scene adjustment instruction according to the head motion data to adjust a virtual scene (S102); collecting gesture image data (S103); performing gesture recognition on the gesture image data to obtain gesture posture data (S104); and, when the gesture posture data meets a preset criterion, generating a second virtual scene adjustment instruction from the gesture posture data and adjusting the virtual scene with it in place of the first virtual scene adjustment instruction (S105). In this way, the probability of motion sickness occurring and its impact can be effectively reduced.

Description

Virtual scene adjustment method and head-mounted smart device
[Technical Field]
The present invention relates to the field of electronics, and in particular to a virtual scene adjustment method and a head-mounted smart device.
[Background Art]
In current Virtual Reality (VR) product experiences, the biggest problem troubling users and developers is motion sickness. Motion sickness occurs because of sensory conflict: simply put, the user moves in virtual reality but does not move in reality, which creates a sensory conflict and thus motion sickness. The main sources of motion sickness are movement within virtual reality and screen switching; limited by technical constraints, users cannot reproduce virtual-reality movement in the real world, so the problem of motion sickness is widespread.
Among current solutions, one approach uses hardware to track the user's whole-body motion precisely and quickly so as to reduce the mismatch between actions in virtual reality and actions in the real world; the other uses software to achieve a comfortable virtual reality interaction experience. These methods all suffer from high cost, strong limitations, and poor results.
[Summary of the Invention]
The main technical problem solved by the present invention is to provide a virtual scene adjustment method and a head-mounted smart device that can reduce the probability of motion sickness occurring and its impact.
To solve the above technical problem, the first technical solution adopted by the present invention is to provide a head-mounted smart device, comprising: a head collection module for collecting head motion data; a scene adjustment module for generating a first virtual scene adjustment instruction according to the head motion data to adjust a virtual scene; a gesture collection module for collecting gesture image data; and a gesture recognition module for performing gesture recognition on the gesture image data to obtain gesture posture data; the scene adjustment module is further configured, when the gesture posture data meets a preset criterion, to generate a second virtual scene adjustment instruction from the gesture posture data and adjust the virtual scene with it in place of the first virtual scene adjustment instruction;
wherein the head-mounted smart device further includes a speed setting module for setting, according to user input, the speed at which the second virtual scene adjustment instruction adjusts the virtual scene;
the scene adjustment module further includes: a scene adjustment unit for adjusting the virtual scene at the set adjustment speed according to the second virtual scene adjustment instruction;
the head-mounted smart device further includes: a gesture self-learning module for performing gesture self-learning on the gesture image data, so as to make a pre-judgment when the gesture image data is incomplete and acquire the gesture posture data.
To solve the above technical problem, the second technical solution adopted by the present invention is to provide a virtual scene adjustment method, including: collecting head motion data; generating a first virtual scene adjustment instruction according to the head motion data to adjust a virtual scene; collecting gesture image data; performing gesture recognition on the gesture image data to obtain gesture posture data; and, when the gesture posture data meets a preset criterion, generating a second virtual scene adjustment instruction from the gesture posture data and adjusting the virtual scene with it in place of the first virtual scene adjustment instruction.
To solve the above technical problem, the third technical solution adopted by the present invention is to provide a head-mounted smart device, comprising: a processor, an inertial sensor, and a binocular camera, the inertial sensor and the binocular camera being connected to the processor through a bus; the inertial sensor collects head motion data; the processor generates a first virtual scene adjustment instruction according to the head motion data to adjust a virtual scene; the binocular camera collects gesture image data; and the processor further performs gesture recognition on the gesture image data to obtain gesture posture data and, when the gesture posture data meets a preset criterion, generates a second virtual scene adjustment instruction from the gesture posture data and adjusts the virtual scene with it in place of the first virtual scene adjustment instruction.
The beneficial effects of the present invention are as follows. Unlike the prior art, when the gesture posture data meets the preset criterion, the present invention generates a second virtual scene adjustment instruction from the gesture posture data and adjusts the virtual scene with it in place of the first virtual scene adjustment instruction, so that head motion and hand motion meeting the preset criterion adjust the virtual scene separately. This prevents the pronounced sensory conflict caused by head motion and hand motion adjusting the virtual scene at the same time, reducing the probability of motion sickness occurring and its impact; moreover, the adjustment speed of the virtual scene can be controlled by the moving speed of the gesture key nodes, so the scene can be adjusted at a speed the user is comfortable with, further reducing the probability of motion sickness.
[Description of the Drawings]
FIG. 1 is a flowchart of a first embodiment of the virtual scene adjustment method of the present invention;
FIG. 2 is a flowchart of a second embodiment of the virtual scene adjustment method of the present invention;
FIG. 3 is a flowchart of a third embodiment of the virtual scene adjustment method of the present invention;
FIG. 4 is a schematic structural diagram of a first embodiment of the head-mounted smart device of the present invention;
FIG. 5 is a schematic structural diagram of a second embodiment of the head-mounted smart device of the present invention;
FIG. 6 is a schematic structural diagram of a third embodiment of the head-mounted smart device of the present invention;
FIG. 7 is a schematic structural diagram of a fourth embodiment of the head-mounted smart device of the present invention.
[Detailed Description]
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to FIG. 1, FIG. 1 is a flowchart of a first embodiment of the virtual scene adjustment method of the present invention. As shown in FIG. 1, the virtual scene adjustment method of the present invention includes:
Step S101: Collect head motion data.
The head motion data includes at least the angle of head rotation and the change in relative position.
Specifically, in one application example, the head motion data comes from an inertial sensor deployed inside the head-mounted smart device. An inertial sensor is a sensitive device that applies inertial principles and measurement techniques to sense the acceleration, position, and attitude of a moving carrier, and includes a gyroscope and an accelerometer. The gyroscope measures the angle and direction of head rotation; the accelerometer measures the acceleration of head movement, from which the distance the head has moved, i.e., the change in the head's relative position, is calculated.
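As a concrete illustration of the accelerometer half of this step, the following minimal Python sketch double-integrates acceleration samples to estimate the relative-position change; the function name and fixed sampling interval are illustrative assumptions, and real IMU pipelines would also correct for gravity and drift, which is omitted here:

    def head_displacement_cm(accel_samples_cm_s2, dt_s=0.01):
        """Estimate head displacement along one axis by integrating
        accelerometer samples twice (acceleration -> velocity -> distance).
        accel_samples_cm_s2: acceleration samples in cm/s^2
        dt_s: sampling interval in seconds (assumed constant)"""
        velocity_cm_s = 0.0
        displacement_cm = 0.0
        for a in accel_samples_cm_s2:
            velocity_cm_s += a * dt_s                 # integrate acceleration
            displacement_cm += velocity_cm_s * dt_s  # integrate velocity
        return displacement_cm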
Step S102: Generate a first virtual scene adjustment instruction according to the head motion data to adjust the virtual scene.
Specifically, the first virtual scene adjustment instruction includes at least a view-angle adjustment instruction and a field-of-view adjustment instruction. In the above application example, a first virtual scene view-angle adjustment instruction is generated from the angle of head rotation, and a first virtual scene field-of-view adjustment instruction is generated from the change in the head's relative position; the two instructions then adjust the view angle and the field of view of the virtual scene, respectively. For example, if the inertial sensor detects that the head has rotated 20 degrees to the left and moved 20 cm to the left, a first virtual scene view-angle adjustment instruction to rotate the view 20 degrees to the left and a first virtual scene field-of-view adjustment instruction to move the field of view 20 cm to the left can be generated from these data; the view angle of the virtual scene is then rotated 20 degrees to the left and the field of view moved 20 cm to the left according to the two instructions.
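A minimal sketch of this one-to-one mapping, assuming hypothetical HeadMotion and FirstSceneAdjustment structures (the patent defines no data formats):

    from dataclasses import dataclass

    @dataclass
    class HeadMotion:
        yaw_deg: float    # head rotation angle from the gyroscope (left positive)
        shift_cm: float   # lateral head displacement from the accelerometer

    @dataclass
    class FirstSceneAdjustment:
        view_angle_deg: float  # rotate the virtual view angle by this amount
        view_shift_cm: float   # move the field of view by this distance

    def first_adjustment(motion: HeadMotion) -> FirstSceneAdjustment:
        # Head motion maps directly onto the view: a 20-degree left turn
        # rotates the view 20 degrees left, and a 20 cm left shift moves
        # the field of view 20 cm left, as in the example above.
        return FirstSceneAdjustment(motion.yaw_deg, motion.shift_cm)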
Step S103: Collect gesture image data.
The gesture image data are gesture images captured by a binocular camera deployed on the head-mounted smart device.
Step S104: Perform gesture recognition on the gesture image data to obtain gesture posture data.
The gesture posture data includes at least the hand shape, the positions of the gesture key nodes, the movement trajectory, and the movement speed.
Specifically, after the binocular camera captures gesture image data at the same instant, the difference between the two images is compared and the depth of the hand from the camera is calculated according to geometric principles; the hand shape, the positions of the gesture key nodes, the movement trajectory, and the movement speed can then be recognized using computer vision techniques and a gesture recognition algorithm. The gesture key nodes are the finger roots and fingertips; the gesture recognition algorithm can be chosen according to the required recognition accuracy and is not specifically limited here.
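The patent only states that depth is computed from the difference between the two images "according to geometric principles"; one common reading is the pinhole-stereo relation depth = focal length x baseline / disparity, sketched below under that assumption:

    def hand_depth_cm(focal_length_px: float, baseline_cm: float,
                      disparity_px: float) -> float:
        """Depth of a hand point from a calibrated binocular camera.
        disparity_px is the horizontal offset of the same point between
        the left and right images; larger disparity means a closer hand."""
        if disparity_px <= 0:
            raise ValueError("point must be visible in both images")
        return focal_length_px * baseline_cm / disparity_px

    # Example: 700 px focal length, 6 cm baseline, 14 px disparity -> 300 cm.
    print(hand_depth_cm(700.0, 6.0, 14.0))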
Step S105: When the gesture posture data meets the preset criterion, generate a second virtual scene adjustment instruction from the gesture posture data and adjust the virtual scene with it in place of the first virtual scene adjustment instruction.
The second virtual scene adjustment instruction controls at least one of the movement, rotation, and zoom of the virtual scene.
Specifically, preset gesture data for adjusting the virtual scene is set in the head-mounted smart device in advance. When the gesture posture data obtained after gesture recognition meets the preset criterion, i.e., when the matching rate between the gesture posture data and the preset gesture data reaches a preset threshold, a corresponding second virtual scene adjustment instruction can be generated from the gesture posture data and, in place of the first virtual scene adjustment instruction, used to move, rotate, switch, or zoom the virtual scene. At this point the first virtual scene adjustment instruction cannot adjust the virtual scene, and the adjustment speed of the virtual scene matches the moving speed of the gesture key nodes in the gesture posture data.
The preset gesture data set in the head-mounted smart device for adjusting the virtual scene include the hand shapes, key-node positions, and movement trajectories of: rotating the palm; clenching a fist and moving it; pinching or releasing the thumb and index finger of one hand; pinching or releasing the thumbs and index fingers of both hands; and flattening the palm. The device also presets the type of second virtual scene adjustment instruction associated with each preset gesture: palm rotation is associated with a scene rotation instruction; clenching a fist and moving with a scene movement instruction; a one-hand thumb-and-index-finger pinch or release with a scene shrink or enlarge instruction; a two-hand thumb-and-index-finger pinch or release with a scene "approach" or "recede" instruction; and flattening the palm with an adjustment-end instruction. Of course, the user can also customize preset gesture data and the associated second virtual scene adjustment instruction types as required.
For example, if the gesture posture data obtained after gesture recognition is a clenched fist moved 10 cm to the left, and the matching rate of its hand shape, key-point positions, and movement trajectory against the preset clench-fist-and-move gesture data reaches 70%, the gesture posture data is judged to meet the preset criterion; a corresponding second virtual scene adjustment instruction is generated from it, namely an instruction to move the virtual scene 10 meters to the left, and this instruction replaces the first virtual scene adjustment instruction and moves the virtual scene 10 meters to the left.
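A sketch of this matching step under stated assumptions: the template table, the feature-by-feature match ratio, and the instruction names are illustrative stand-ins for whatever preset data and similarity measure an implementation would actually use; only the threshold test follows the patent:

    from dataclasses import dataclass

    @dataclass
    class GestureData:
        hand_shape: str    # e.g. "fist", "palm", "flat_palm"
        trajectory: str    # e.g. "move_left", "rotate"
        speed_cm_s: float  # moving speed of the gesture key nodes

    # Hypothetical preset templates paired with second-instruction types.
    PRESETS = [
        (GestureData("palm", "rotate", 0.0), "rotate_scene"),
        (GestureData("fist", "move_left", 0.0), "move_scene"),
        (GestureData("flat_palm", "hold", 0.0), "end_adjustment"),
    ]

    def match_ratio(g: GestureData, t: GestureData) -> float:
        # Fraction of compared features that agree; a crude stand-in for
        # matching hand shape, key-node positions, and trajectory.
        checks = [g.hand_shape == t.hand_shape, g.trajectory == t.trajectory]
        return sum(checks) / len(checks)

    def second_instruction(g: GestureData, threshold: float = 0.7):
        """Return the instruction type of the best-matching preset once the
        matching rate reaches the threshold; None means the head-driven
        first instruction stays in effect."""
        template, instruction = max(PRESETS, key=lambda p: match_ratio(g, p[0]))
        return instruction if match_ratio(g, template) >= threshold else None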
After an adjustment-end instruction is generated from the gesture posture data, the virtual scene continues to be adjusted according to the first virtual scene adjustment instruction generated from the head motion data. Further, while the first virtual scene adjustment instruction is used to adjust one of the movement, rotation, and zoom of the virtual scene, virtual objects presented in the scene can be controlled according to other control instructions generated from the gesture posture data. For example, a shooting instruction is generated from the gesture posture data, and the virtual object is then changed by animation in the virtual scene according to the shooting instruction to indicate whether the object was hit.
In this embodiment, when the gesture posture data meets the preset criterion, a second virtual scene adjustment instruction is generated from the gesture posture data and adjusts the virtual scene in place of the first virtual scene adjustment instruction, so that head motion and hand motion meeting the preset criterion adjust the virtual scene separately. This effectively prevents the pronounced sensory conflict caused by head motion and hand motion adjusting the virtual scene at the same time; moreover, the adjustment speed of the virtual scene can be controlled by the moving speed of the gesture key nodes, so the scene can be adjusted at a speed the user is comfortable with, further reducing the probability of motion sickness occurring and its impact.
In addition, in other embodiments, while the virtual scene is being adjusted with the gesture posture data, the adjustment speed of the virtual scene can also be set according to user requirements.
Referring to FIG. 2, FIG. 2 is a flowchart of a second embodiment of the virtual scene adjustment method of the present invention. The second embodiment of the virtual scene adjustment method of the present invention builds on the first embodiment and further includes:
Step S201: Set, according to user input, the speed at which the second virtual scene adjustment instruction adjusts the virtual scene.
Specifically, the user can input at least one candidate adjustment speed to set the speed at which the second virtual scene adjustment instruction adjusts the virtual scene. The candidate speeds can be input by voice or by clicking, and an adjustment speed can be the ratio of the gesture change speed to the virtual scene change speed in the second virtual scene adjustment instruction. For example, the user inputs two candidate adjustment speeds of 1:1 and 1:2 by voice, meaning respectively that when the hand in the second virtual scene adjustment instruction moves 1 cm per second the virtual scene moves 1 cm per second, and that when the hand moves 1 cm per second the virtual scene moves 2 cm per second.
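Since each candidate speed is just a hand-to-scene ratio, applying it is a one-liner; a minimal sketch with illustrative names:

    def scene_speed_cm_s(hand_speed_cm_s: float, ratio: float) -> float:
        """Apply a user-selected gesture-to-scene speed ratio: 1.0 reproduces
        the 1:1 example (hand 1 cm/s -> scene 1 cm/s), and 2.0 reproduces
        the 1:2 example (hand 1 cm/s -> scene 2 cm/s)."""
        return hand_speed_cm_s * ratio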
Step S202: Adjust the virtual scene at the set adjustment speed according to the second virtual scene adjustment instruction.
Before step S202, the method includes:
Step S2021: Generate virtual scene preview images at the different candidate adjustment speeds according to the second virtual scene adjustment instruction, and present them to the user;
Step S2022: Designate the adjustment speed from among the candidate adjustment speeds according to the user's selection of a virtual scene preview image.
For example, if the second virtual scene adjustment instruction moves the virtual scene 10 meters to the right, then after the user inputs the two candidate adjustment speeds 1:1 and 1:2, a preview image of the virtual scene moving 10 meters to the right at the same speed as the gesture and a preview image of it moving 10 meters to the right at twice the gesture speed are generated and presented to the user separately. The user can select, according to their own perception, the preview image whose adjustment speed suits them, and the virtual scene is subsequently adjusted at that speed. At the same time, the user can also change the moving speed of the gesture key nodes by controlling how fast the hand moves, so that, together with the set adjustment speed, the virtual scene is adjusted at a speed the user is comfortable with, reducing the probability of motion sickness.
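Steps S2021 and S2022 might look like the sketch below, where render_preview and ask_user are placeholder callables for the device's actual rendering and voice/click input facilities (assumptions, not defined by the patent):

    def choose_adjustment_speed(candidate_ratios, render_preview, ask_user):
        """Generate one virtual scene preview per candidate speed, present
        them, and designate the speed of the preview the user selects."""
        previews = {ratio: render_preview(ratio) for ratio in candidate_ratios}
        return ask_user(previews)  # returns the chosen ratio

    # Usage with the example candidates of 1:1 and 1:2:
    # speed = choose_adjustment_speed([1.0, 2.0], render_preview, ask_user)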
The steps of this embodiment are performed before step S105, and this embodiment can be combined with the first embodiment of the virtual scene adjustment method of the present invention.
Referring to FIG. 3, FIG. 3 is a flowchart of a third embodiment of the virtual scene adjustment method of the present invention. The third embodiment of the virtual scene adjustment method of the present invention builds on the first embodiment and further includes:
Step S301: Perform gesture self-learning on the gesture image data, so as to make a pre-judgment when the gesture image data is incomplete and acquire the gesture posture data.
Specifically, gesture self-learning applies machine learning principles to simulate human learning behavior, improving the accuracy of gesture recognition by reorganizing existing gesture image data; when the gesture image data is incomplete, the corresponding gesture posture data can be predicted from the existing gesture image data, improving the intelligence of gesture recognition.
After step S301, the method includes:
Step S3021: Generate a three-dimensional virtual gesture from the gesture posture data and send gesture confirmation prompt information;
Specifically, in one application example, a three-dimensional virtual gesture is generated from the gesture posture data obtained after gesture recognition and presented to the user, and gesture confirmation prompt information is sent so that the user can confirm whether the gesture recognition is correct.
Step S3022: Determine whether gesture confirmation information has been received;
Step S3023: If no gesture confirmation information is received, return to the gesture recognition step; otherwise, proceed with the subsequent steps.
Specifically, in the above application example, the user can confirm by voice or by clicking whether the three-dimensional virtual gesture is correct. If the three-dimensional virtual gesture is incorrect, i.e., the gesture recognition is wrong, the method returns for re-recognition; if it is correct, i.e., the gesture recognition is accurate, the subsequent steps continue.
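A sketch of the S3021-S3023 loop; recognize, render_virtual_hand, and confirmed are hypothetical callables standing in for the device's recognition, display, and voice/click confirmation facilities:

    def confirmed_gesture(recognize, render_virtual_hand, confirmed):
        """Repeat recognition until the user confirms the three-dimensional
        virtual gesture rendered from the recognized gesture posture data."""
        while True:
            gesture = recognize()         # step S104, or re-recognition
            render_virtual_hand(gesture)  # S3021: present the 3-D gesture
            if confirmed():               # S3022: confirmation received?
                return gesture            # proceed to the subsequent steps
            # S3023: no confirmation -> loop back and recognize again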
The steps of this embodiment are performed after step S104, and this embodiment can be combined with the first embodiment of the virtual scene adjustment method of the present invention.
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of a first embodiment of the head-mounted smart device of the present invention. As shown in FIG. 4, the head-mounted smart device 40 of the present invention includes a head collection module 401, a scene adjustment module 402, a gesture collection module 403, and a gesture recognition module 404; the gesture collection module 403 is connected to the gesture recognition module 404, and the head collection module 401 and the gesture recognition module 404 are each connected to the scene adjustment module 402.
The head collection module 401 is used to collect head motion data;
The head motion data includes at least the angle of head rotation and the change in relative position.
Specifically, the head collection module 401 collects the head motion data and transmits it to the scene adjustment module 402 to adjust the virtual scene.
The scene adjustment module 402 is used to generate a first virtual scene adjustment instruction according to the head motion data to adjust the virtual scene;
Specifically, the scene adjustment module 402 generates a first virtual scene adjustment instruction from the received head motion data to adjust the virtual scene. For example, if the received head motion data indicates that the head turned 10 degrees to the right, the generated first virtual scene adjustment instruction moves the virtual scene 10 degrees to the right, and the virtual scene is adjusted by this instruction.
The gesture collection module 403 is used to collect gesture image data;
The gesture recognition module 404 is used to perform gesture recognition on the gesture image data to obtain gesture posture data;
The gesture posture data includes at least the hand shape, the positions of the gesture key nodes, the movement trajectory, and the movement speed.
Specifically, after the gesture collection module 403 captures gesture image data at the same instant, the gesture recognition module 404 compares the difference between the two images and calculates the depth of the hand from the camera according to geometric principles, then recognizes the hand shape, the positions of the gesture key nodes, the movement trajectory, and the movement speed using computer vision techniques and a gesture recognition algorithm. The gesture key nodes are the finger roots and fingertips; the gesture recognition algorithm can be chosen according to the required recognition accuracy and is not specifically limited here.
The scene adjustment module 402 is further configured, when the gesture posture data meets the preset criterion, to generate a second virtual scene adjustment instruction from the gesture posture data and adjust the virtual scene with it in place of the first virtual scene adjustment instruction.
The second virtual scene adjustment instruction controls at least one of the movement, rotation, and zoom of the virtual scene.
Specifically, preset gesture data for adjusting the virtual scene is set in the head-mounted smart device in advance. When the gesture posture data obtained after gesture recognition meets the preset criterion, i.e., when the matching rate between the gesture posture data and the preset gesture data reaches the preset threshold, the scene adjustment module 402 generates a corresponding second virtual scene adjustment instruction from the gesture posture data and, in place of the first virtual scene adjustment instruction, moves, rotates, switches, or zooms the virtual scene. At this point the first virtual scene adjustment instruction cannot adjust the virtual scene, and the adjustment speed of the virtual scene matches the moving speed of the gesture key nodes in the gesture posture data.
After an adjustment-end instruction is generated from the gesture posture data, the scene adjustment module 402 continues to adjust the virtual scene according to the first virtual scene adjustment instruction generated from the head motion data. Further, while using the first virtual scene adjustment instruction to adjust one of the movement, rotation, and zoom of the virtual scene, the scene adjustment module 402 can control virtual objects presented in the scene according to other control instructions generated from the gesture posture data. For example, a shooting instruction is generated from the gesture posture data, and the virtual object is then changed by animation in the virtual scene according to the shooting instruction to indicate whether the object was hit.
In the above embodiment, when the gesture posture data meets the preset criterion, the head-mounted smart device generates a second virtual scene adjustment instruction from the gesture posture data and adjusts the virtual scene with it in place of the first virtual scene adjustment instruction, so that head motion and hand motion meeting the preset criterion adjust the virtual scene separately. This effectively prevents the pronounced sensory conflict caused by head motion and hand motion adjusting the virtual scene at the same time; moreover, the adjustment speed of the virtual scene can be controlled by the moving speed of the gesture key nodes, so the scene can be adjusted at a speed the user is comfortable with, further reducing the probability of motion sickness occurring and its impact.
In addition, in other embodiments, the head-mounted smart device can also set the adjustment speed of the virtual scene according to user requirements.
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of a second embodiment of the head-mounted smart device of the present invention. FIG. 5 is similar in structure to FIG. 4 and is not repeated here; the difference is that the device further includes a speed setting module 505, and the scene adjustment module 504 further includes a scene adjustment unit 5041.
The speed setting module 505 is connected to the gesture recognition module 503 and the scene adjustment module 504, and is used to set, according to user input, the speed at which the second virtual scene adjustment instruction adjusts the virtual scene;
Specifically, the user can input at least one candidate adjustment speed by voice or by clicking; the adjustment speed is the ratio of the gesture change speed to the virtual scene change speed in the second virtual scene adjustment instruction. For example, the user inputs one candidate adjustment speed of 1:1.5 by voice, meaning that when the hand in the second virtual scene adjustment instruction moves 1 cm per second, the virtual scene moves 1.5 cm per second.
The speed setting module 505 further includes:
a scene preview unit 5051 for generating virtual scene preview images at the different candidate adjustment speeds according to the second virtual scene adjustment instruction and presenting them to the user;
a speed selection unit 5052 for designating the adjustment speed from among the candidate adjustment speeds according to the user's selection of a virtual scene preview image.
Specifically, the scene preview unit 5051 generates preview images of the virtual scene at each of the candidate adjustment speeds input by the user according to the second virtual scene adjustment instruction and presents them to the user separately; the speed selection unit 5052 receives the preview image selected by the user and takes its corresponding adjustment speed as the speed for subsequently adjusting the virtual scene.
The scene adjustment unit 5041 is used to adjust the virtual scene at the set adjustment speed according to the second virtual scene adjustment instruction.
In this embodiment, the head-mounted smart device sets, through user input, the speed at which the second virtual scene adjustment instruction adjusts the virtual scene, so that the user can choose a speed that suits them for adjusting the virtual scene. This reduces the sensory conflict between virtual reality and the real world and thus the probability of motion sickness.
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of a third embodiment of the head-mounted smart device of the present invention. FIG. 6 is similar in structure to FIG. 4 and is not repeated here; the difference is that the device further includes a gesture self-learning module 606 and a gesture confirmation prompt module 607, connected in sequence; the gesture self-learning module 606 is also connected to the gesture recognition module 603, and the gesture confirmation prompt module 607 is also connected to the gesture recognition module 603 and the scene adjustment module 604.
The gesture self-learning module 606 is used to perform gesture self-learning on the gesture image data, so as to make a pre-judgment when the gesture image data is incomplete and acquire the gesture posture data.
Specifically, the gesture self-learning module 606 applies machine learning principles to simulate human learning behavior, improving the accuracy of gesture recognition by reorganizing existing gesture image data; when the gesture image data is incomplete, it can predict the corresponding gesture posture data from the existing gesture image data, improving the intelligence of gesture recognition.
The gesture confirmation prompt module 607 specifically includes:
a gesture generation unit 6071 for generating a three-dimensional virtual gesture from the gesture posture data and sending gesture confirmation prompt information;
Specifically, in one application example, the gesture generation unit 6071 generates a three-dimensional virtual gesture from the gesture posture data obtained after gesture recognition, presents it to the user, and sends gesture confirmation prompt information so that the user can confirm whether the gesture recognition is correct.
a gesture confirmation unit 6071 for determining whether gesture confirmation information has been received; when no gesture confirmation information is received, it returns to the gesture recognition module 603, and when gesture confirmation information is received, it transmits the gesture posture data to the scene adjustment module 604 to adjust the virtual scene.
Specifically, in the above application example, the user can confirm by voice or by clicking whether the three-dimensional virtual gesture is correct. If the three-dimensional virtual gesture is incorrect, i.e., the gesture recognition is wrong, the gesture confirmation unit 6071 receives no gesture confirmation information and returns to the gesture recognition module 603 for re-recognition; if the three-dimensional virtual gesture is correct, i.e., the gesture recognition is accurate, the gesture confirmation unit 6071 receives the gesture confirmation information and transmits the gesture posture data to the scene adjustment module 604 to continue the subsequent steps. The gesture confirmation prompt module 607 adds human-computer interaction and further improves the accuracy of gesture recognition.
Referring to FIG. 7, FIG. 7 is a schematic structural diagram of a fourth embodiment of the head-mounted smart device of the present invention. As shown in FIG. 7, the head-mounted smart device 70 of the present invention includes a processor 701, a memory 702, an inertial sensor 703, a binocular camera 704, and a display 705, all interconnected through a bus.
The inertial sensor 703 is used to collect head motion data;
The head motion data includes at least the angle of head rotation and the change in relative position.
Specifically, the inertial sensor 703 senses the angle of head rotation and the change in relative position and transmits them to the processor 701 to adjust the virtual scene.
The processor 701 is used to generate a first virtual scene adjustment instruction according to the head motion data to adjust the virtual scene;
Specifically, the processor 701 generates a first virtual scene adjustment instruction from the received head motion data to adjust the virtual scene. For example, if the received head motion data indicates that the head turned 10 degrees to the right, the generated first virtual scene adjustment instruction moves the virtual scene 10 degrees to the right, and the virtual scene is adjusted by this instruction.
The binocular camera 704 is used to collect gesture image data;
Specifically, the binocular camera 704 captures gesture image data and sends it to the processor 701 for gesture recognition.
The processor 701 is further used to perform gesture recognition on the gesture image data to obtain gesture posture data and, when the gesture posture data meets the preset criterion, to generate a second virtual scene adjustment instruction from the gesture posture data and adjust the virtual scene with it in place of the first virtual scene adjustment instruction.
The gesture posture data includes at least the hand shape, the positions of the gesture key nodes, the movement trajectory, and the movement speed.
Specifically, after receiving the gesture image data captured by the binocular camera 704 at the same instant, the processor 701 compares the difference between the two images and calculates the depth of the hand from the camera according to geometric principles, then recognizes the hand shape, the positions of the gesture key nodes, the movement trajectory, and the movement speed using computer vision techniques and a gesture recognition algorithm. The gesture key nodes are the finger roots and fingertips; the gesture recognition algorithm can be chosen according to the required recognition accuracy and is not specifically limited here.
The second virtual scene adjustment instruction controls at least one of the movement, rotation, and zoom of the virtual scene.
Specifically, the memory 702 stores in advance preset gesture data for adjusting the virtual scene. When the gesture posture data obtained after gesture recognition meets the preset criterion, i.e., when the matching rate between the gesture posture data and the preset gesture data reaches the preset threshold, the processor 701 generates a corresponding second virtual scene adjustment instruction from the gesture posture data and, in place of the first virtual scene adjustment instruction, moves, rotates, switches, or zooms the virtual scene. At this point the first virtual scene adjustment instruction cannot adjust the virtual scene, and the adjustment speed of the virtual scene matches the moving speed of the gesture key nodes in the gesture posture data.
The display 705 is used to display the virtual scene, including the virtual scene during adjustment and the adjusted virtual scene.
The processor 701 can also be used to execute instructions to implement the method provided by the second or third embodiment of the virtual scene adjustment method of the present invention, or the method provided by any non-conflicting combination of the first through third embodiments.
In this embodiment, the head-mounted smart device includes a processor, a memory, an inertial sensor, a binocular camera, and a display; in other embodiments, the head-mounted smart device can also add other components such as a speaker, a tactile sensor, or a wireless transmission interface according to specific needs, which are not specifically limited here.
In the above embodiment, when the gesture posture data meets the preset criterion, the head-mounted smart device generates a second virtual scene adjustment instruction from the gesture posture data and adjusts the virtual scene with it in place of the first virtual scene adjustment instruction, so that head motion and hand motion meeting the preset criterion adjust the virtual scene separately. This effectively prevents the pronounced sensory conflict caused by head motion and hand motion adjusting the virtual scene at the same time; moreover, the adjustment speed of the virtual scene can be controlled by the moving speed of the gesture key nodes, so the scene can be adjusted at a speed the user is comfortable with, further reducing the probability of motion sickness occurring and its impact.
The above are only embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (13)

  1. A head-mounted smart device, comprising:
    a head collection module for collecting head motion data;
    a scene adjustment module for generating a first virtual scene adjustment instruction according to the head motion data to adjust a virtual scene;
    a gesture collection module for collecting gesture image data;
    a gesture recognition module for performing gesture recognition on the gesture image data to obtain gesture posture data;
    the scene adjustment module being further configured, when the gesture posture data meets a preset criterion, to generate a second virtual scene adjustment instruction from the gesture posture data and adjust the virtual scene with it in place of the first virtual scene adjustment instruction;
    wherein the head-mounted smart device further comprises a speed setting module for setting, according to user input, an adjustment speed at which the second virtual scene adjustment instruction adjusts the virtual scene;
    the scene adjustment module further comprises:
    a scene adjustment unit for adjusting the virtual scene at the adjustment speed according to the second virtual scene adjustment instruction;
    the head-mounted smart device further comprises:
    a gesture self-learning module for performing gesture self-learning on the gesture image data, so as to make a pre-judgment when the gesture image data is incomplete and acquire the gesture posture data.
  2. The head-mounted smart device according to claim 1, wherein
    the speed setting module comprises:
    a scene preview unit for generating virtual scene preview images at different candidate adjustment speeds according to the second virtual scene adjustment instruction and presenting them to a user;
    a speed selection unit for designating the adjustment speed from among the candidate adjustment speeds according to the user's selection of a virtual scene preview image.
  3. The head-mounted smart device according to claim 1, wherein the second virtual scene adjustment instruction is used to control at least one of movement, rotation, and zoom of the virtual scene.
  4. A virtual scene adjustment method, comprising:
    collecting head motion data;
    generating a first virtual scene adjustment instruction according to the head motion data to adjust a virtual scene;
    collecting gesture image data;
    performing gesture recognition on the gesture image data to obtain gesture posture data;
    when the gesture posture data meets a preset criterion, generating a second virtual scene adjustment instruction from the gesture posture data and adjusting the virtual scene with it in place of the first virtual scene adjustment instruction.
  5. The method according to claim 4, wherein before the step of generating the second virtual scene adjustment instruction from the gesture posture data and adjusting the virtual scene with it in place of the first virtual scene adjustment instruction, the method further comprises:
    setting, according to user input, an adjustment speed at which the second virtual scene adjustment instruction adjusts the virtual scene;
    and the step of generating the second virtual scene adjustment instruction from the gesture posture data and adjusting the virtual scene with it in place of the first virtual scene adjustment instruction comprises:
    adjusting the virtual scene at the adjustment speed according to the second virtual scene adjustment instruction.
  6. The method according to claim 5, wherein the step of setting, according to user input, the adjustment speed at which the second virtual scene adjustment instruction adjusts the virtual scene comprises:
    generating virtual scene preview images at different candidate adjustment speeds according to the second virtual scene adjustment instruction and presenting them to a user;
    designating the adjustment speed from among the candidate adjustment speeds according to the user's selection of a virtual scene preview image.
  7. The method according to claim 4, wherein the second virtual scene adjustment instruction is used to control at least one of movement, rotation, and zoom of the virtual scene.
  8. The method according to claim 4, wherein after performing gesture recognition on the gesture image data to obtain the gesture posture data, the method further comprises:
    performing gesture self-learning on the gesture image data, so as to make a pre-judgment when the gesture image data is incomplete and acquire the gesture posture data.
  9. A head-mounted smart device, comprising: a processor, an inertial sensor, and a binocular camera, the inertial sensor and the binocular camera being connected to the processor through a bus;
    the inertial sensor is configured to collect head motion data;
    the processor is configured to generate a first virtual scene adjustment instruction according to the head motion data to adjust a virtual scene;
    the binocular camera is configured to collect gesture image data;
    the processor is further configured to perform gesture recognition on the gesture image data to obtain gesture posture data, and, when the gesture posture data meets a preset criterion, to generate a second virtual scene adjustment instruction from the gesture posture data and adjust the virtual scene with it in place of the first virtual scene adjustment instruction.
  10. The head-mounted smart device according to claim 9, wherein
    before generating the second virtual scene adjustment instruction from the gesture posture data and adjusting the virtual scene with it in place of the first virtual scene adjustment instruction, the processor is further configured to:
    set, according to user input, an adjustment speed at which the second virtual scene adjustment instruction adjusts the virtual scene;
    and the processor generating the second virtual scene adjustment instruction from the gesture posture data and adjusting the virtual scene with it in place of the first virtual scene adjustment instruction specifically comprises:
    adjusting the virtual scene at the adjustment speed according to the second virtual scene adjustment instruction.
  11. The head-mounted smart device according to claim 10, wherein the processor setting, according to user input, the adjustment speed at which the second virtual scene adjustment instruction adjusts the virtual scene specifically comprises:
    generating virtual scene preview images at different candidate adjustment speeds according to the second virtual scene adjustment instruction and presenting them to a user;
    designating the adjustment speed from among the candidate adjustment speeds according to the user's selection of a virtual scene preview image.
  12. The head-mounted smart device according to claim 9, wherein the second virtual scene adjustment instruction is used to control at least one of movement, rotation, and zoom of the virtual scene.
  13. The head-mounted smart device according to claim 9, wherein after performing gesture recognition on the gesture image data to obtain the gesture posture data, the processor is further configured to:
    perform gesture self-learning on the gesture image data, so as to make a pre-judgment when the gesture image data is incomplete and acquire the gesture posture data.
PCT/CN2017/098793 2016-10-28 2017-08-24 Virtual scene adjustment method and head-mounted smart device WO2018076912A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610972547.7A CN106527709B (zh) 2016-10-28 2016-10-28 Virtual scene adjustment method and head-mounted smart device
CN201610972547.7 2016-10-28

Publications (1)

Publication Number Publication Date
WO2018076912A1 (zh) 2018-05-03

Family

ID=58349694

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/098793 WO2018076912A1 (zh) 2016-10-28 2017-08-24 Virtual scene adjustment method and head-mounted smart device

Country Status (2)

Country Link
CN (1) CN106527709B (zh)
WO (1) WO2018076912A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688018A (zh) * 2019-11-05 2020-01-14 广东虚拟现实科技有限公司 虚拟画面的控制方法、装置、终端设备及存储介质
CN111694427A (zh) * 2020-05-13 2020-09-22 北京农业信息技术研究中心 Ar虚拟摇蜜互动体验系统、方法、电子设备及存储介质
CN111741287A (zh) * 2020-07-10 2020-10-02 南京新研协同定位导航研究院有限公司 一种mr眼镜利用位置信息触发内容的方法
CN116309850A (zh) * 2023-05-17 2023-06-23 中数元宇数字科技(上海)有限公司 一种虚拟触控识别方法、设备及存储介质

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106527709B (zh) * 2016-10-28 2020-10-02 Tcl移动通信科技(宁波)有限公司 一种虚拟场景调整方法及头戴式智能设备
US10268263B2 (en) * 2017-04-20 2019-04-23 Microsoft Technology Licensing, Llc Vestibular anchoring
CN107479712B (zh) * 2017-08-18 2020-08-04 北京小米移动软件有限公司 基于头戴式显示设备的信息处理方法及装置
CN107678539A (zh) * 2017-09-07 2018-02-09 歌尔科技有限公司 用于头戴显示设备的显示方法及头戴显示设备
CN109511004B (zh) * 2017-09-14 2023-09-01 中兴通讯股份有限公司 一种视频处理方法及装置
US11010436B1 (en) * 2018-04-20 2021-05-18 Facebook, Inc. Engaging users by personalized composing-content recommendation
CN110874132A (zh) * 2018-08-29 2020-03-10 塔普翊海(上海)智能科技有限公司 头戴式虚实交互装置和虚实交互方法
CN111367414B (zh) * 2020-03-10 2020-10-13 厦门络航信息技术有限公司 虚拟现实对象控制方法、装置、虚拟现实系统及设备
CN111415421B (zh) * 2020-04-02 2024-03-19 Oppo广东移动通信有限公司 虚拟物体控制方法、装置、存储介质及增强现实设备
CN111651052A (zh) * 2020-06-10 2020-09-11 浙江商汤科技开发有限公司 虚拟沙盘的展示方法、装置、电子设备及存储介质
CN114153307A (zh) * 2020-09-04 2022-03-08 中移(成都)信息通信科技有限公司 场景区块化处理方法、装置、电子设备及计算机存储介质
WO2022252150A1 (zh) * 2021-06-02 2022-12-08 陈盈吉 避免动晕症发生的虚拟实境控制方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102460349A (zh) * 2009-05-08 2012-05-16 寇平公司 使用运动和语音命令对主机应用进行远程控制
US20150346813A1 (en) * 2014-06-03 2015-12-03 Aaron Michael Vargas Hands free image viewing on head mounted display
CN105898346A (zh) * 2016-04-21 2016-08-24 联想(北京)有限公司 控制方法、电子设备及控制系统
CN105988583A (zh) * 2015-11-18 2016-10-05 乐视致新电子科技(天津)有限公司 手势控制方法及虚拟现实显示输出设备
CN106527709A (zh) * 2016-10-28 2017-03-22 惠州Tcl移动通信有限公司 一种虚拟场景调整方法及头戴式智能设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101489150B (zh) * 2009-01-20 2010-09-01 北京航空航天大学 一种虚实混合的远程协同工作方法
CN102789313B (zh) * 2012-03-19 2015-05-13 苏州触达信息技术有限公司 一种用户交互系统和方法
CN105975083B (zh) * 2016-05-27 2019-01-18 北京小鸟看看科技有限公司 一种虚拟现实环境下的视觉校正方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102460349A (zh) * 2009-05-08 2012-05-16 寇平公司 使用运动和语音命令对主机应用进行远程控制
US20150346813A1 (en) * 2014-06-03 2015-12-03 Aaron Michael Vargas Hands free image viewing on head mounted display
CN105988583A (zh) * 2015-11-18 2016-10-05 乐视致新电子科技(天津)有限公司 手势控制方法及虚拟现实显示输出设备
CN105898346A (zh) * 2016-04-21 2016-08-24 联想(北京)有限公司 控制方法、电子设备及控制系统
CN106527709A (zh) * 2016-10-28 2017-03-22 惠州Tcl移动通信有限公司 一种虚拟场景调整方法及头戴式智能设备

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688018A (zh) * 2019-11-05 2020-01-14 广东虚拟现实科技有限公司 虚拟画面的控制方法、装置、终端设备及存储介质
CN110688018B (zh) * 2019-11-05 2023-12-19 广东虚拟现实科技有限公司 虚拟画面的控制方法、装置、终端设备及存储介质
CN111694427A (zh) * 2020-05-13 2020-09-22 北京农业信息技术研究中心 Ar虚拟摇蜜互动体验系统、方法、电子设备及存储介质
CN111741287A (zh) * 2020-07-10 2020-10-02 南京新研协同定位导航研究院有限公司 一种mr眼镜利用位置信息触发内容的方法
CN111741287B (zh) * 2020-07-10 2022-05-17 南京新研协同定位导航研究院有限公司 一种mr眼镜利用位置信息触发内容的方法
CN116309850A (zh) * 2023-05-17 2023-06-23 中数元宇数字科技(上海)有限公司 一种虚拟触控识别方法、设备及存储介质
CN116309850B (zh) * 2023-05-17 2023-08-08 中数元宇数字科技(上海)有限公司 一种虚拟触控识别方法、设备及存储介质

Also Published As

Publication number Publication date
CN106527709A (zh) 2017-03-22
CN106527709B (zh) 2020-10-02

Similar Documents

Publication Publication Date Title
WO2018076912A1 (zh) 一种虚拟场景调整方法及头戴式智能设备
WO2017075973A1 (zh) 无人机操控界面交互方法、便携式电子设备和存储介质
WO2012173373A2 (ko) 가상터치를 이용한 3차원 장치 및 3차원 게임 장치
WO2020040363A1 (ko) 4d 아바타를 이용한 동작가이드장치 및 방법
WO2015126197A1 (ko) 카메라 중심의 가상터치를 이용한 원격 조작 장치 및 방법
WO2018054056A1 (zh) 一种互动式运动方法及头戴式智能设备
WO2010056023A2 (en) Method and device for inputting a user's instructions based on movement sensing
WO2013133583A1 (ko) 실감 인터랙션을 이용한 인지재활 시스템 및 방법
WO2020036786A1 (en) Detection of unintentional movement of a user interface device
EP3622375A1 (en) Method and wearable device for performing actions using body sensor array
EP3685248B1 (en) Tracking of location and orientation of a virtual controller in a virtual reality system
WO2014135023A1 (zh) 一种智能终端的人机交互方法及系统
WO2013055024A1 (ko) 로봇을 이용한 인지 능력 훈련 장치 및 그 방법
WO2015165162A1 (zh) 一种主机运动感测方法、组件及运动感测系统
WO2022182096A1 (en) Real-time limb motion tracking
WO2021066392A2 (ko) 골프 스윙에 관한 정보를 추정하기 위한 방법, 디바이스 및 비일시성의 컴퓨터 판독 가능한 기록 매체
WO2023074980A1 (ko) 동작 인식 기반 상호작용 방법 및 기록 매체
WO2018076454A1 (zh) 一种数据处理方法及其相关设备
CN115686193A (zh) 一种增强现实环境下虚拟模型三维手势操纵方法及系统
WO2022092589A1 (ko) 인공지능 기반의 운동 코칭 장치
JP2005046931A (ja) ロボットアーム・ハンド操作制御方法、ロボットアーム・ハンド操作制御システム
WO2017219622A1 (zh) 图像处理系统及方法
WO2020224566A1 (zh) 一种虚拟现实、增强现实、融合现实手部操作方法及装置
WO2021177674A1 (ko) 2차원 이미지로부터 사용자의 제스처를 추정하는 방법, 시스템 및 비일시성의 컴퓨터 판독 가능 기록 매체
WO2016085122A1 (ko) 사용자 패턴 기반의 동작 인식 보정 장치 및 그 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17865851

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17865851

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18.09.2019)
