WO2018076912A1 - Virtual scene adjustment method and head-mounted smart device - Google Patents
- Publication number
- WO2018076912A1 (PCT/CN2017/098793)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- gesture
- virtual scene
- data
- adjustment
- adjustment instruction
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Definitions
- the present invention relates to the field of electronics, and in particular, to a virtual scene adjustment method and a head-mounted smart device.
- VR (Virtual Reality)
- motion sickness occurs because of sensory conflict; its main sources are movement within the virtual scene and screen switching. For technical reasons, a user's real-world movement cannot reproduce the movement simulated in virtual reality, so motion sickness is a widespread problem.
- the main existing solutions either use hardware to track the user's whole-body motion accurately and quickly, reducing the inconsistency between actions in virtual reality and actions in the real world, or use software to provide a more comfortable virtual reality interaction experience.
- these methods suffer from high cost, limited applicability, and poor results.
- the technical problem to be solved by the present invention is to provide a virtual scene adjustment method and a head-mounted smart device that can reduce the probability and impact of motion sickness.
- the first technical solution adopted by the present invention is a head-mounted smart device, comprising: a head collection module for collecting head motion data; and a scene adjustment module for generating a first virtual scene adjustment instruction according to the head motion data to adjust the virtual scene;
- the gesture collection module is configured to collect gesture image data;
- the gesture recognition module is configured to perform gesture recognition on the gesture image data to obtain gesture posture data;
- the scene adjustment module is further configured to, when the gesture posture data meets a preset criterion, generate a second virtual scene adjustment instruction from the gesture posture data and adjust the virtual scene with it in place of the first virtual scene adjustment instruction;
- the head-mounted smart device further includes a speed setting module, configured to set, according to user input, the speed at which the second virtual scene adjustment instruction adjusts the virtual scene;
- the scene adjustment module further includes a scene adjustment unit, configured to adjust the virtual scene at the set adjustment speed according to the second virtual scene adjustment instruction;
- the head-mounted smart device further includes a gesture self-learning module, configured to perform gesture self-learning on the gesture image data, so that when the gesture image data is incomplete the corresponding gesture posture data can be predicted and acquired.
- the second technical solution adopted by the present invention is a virtual scene adjustment method, including: collecting head motion data; generating a first virtual scene adjustment instruction according to the head motion data to adjust the virtual scene; collecting gesture image data; performing gesture recognition on the gesture image data to obtain gesture posture data; and, when the gesture posture data meets a preset criterion, generating a second virtual scene adjustment instruction from the gesture posture data and adjusting the virtual scene with it in place of the first virtual scene adjustment instruction.
- the third technical solution adopted by the present invention is to provide a head-mounted smart device, comprising: a processor, an inertial sensor and a binocular camera, and the inertial sensor and the binocular camera are connected to the processor through a bus.
- the inertial sensor is used to collect the head motion data;
- the processor is configured to generate the first virtual scene adjustment instruction according to the head motion data to adjust the virtual scene;
- the binocular camera is used to collect the gesture image data;
- the processor is further configured to perform gesture recognition on the gesture image data to obtain gesture posture data, and, when the gesture posture data meets the preset criterion, to generate the second virtual scene adjustment instruction from the gesture posture data and adjust the virtual scene with it in place of the first virtual scene adjustment instruction.
- in the present invention, when the gesture posture data meets the preset criterion, a second virtual scene adjustment instruction is generated from the gesture posture data and replaces the first virtual scene adjustment instruction.
- the virtual scene is thus adjusted either by head motion or by hand motion satisfying the preset criterion, but never by both at once, which prevents the obvious sensory conflict caused by head motion and hand motion adjusting the virtual scene simultaneously and so reduces the probability and impact of motion sickness.
- in addition, the adjustment speed of the virtual scene can be controlled according to the moving speed of the gesture key nodes, so that the virtual scene is adjusted at a speed the user is comfortable with, further reducing the probability of motion sickness.
- FIG. 1 is a flowchart of a first embodiment of a virtual scene adjustment method according to the present invention.
- FIG. 2 is a flowchart of a second embodiment of a virtual scene adjustment method according to the present invention.
- FIG. 3 is a flowchart of a third embodiment of a virtual scene adjustment method according to the present invention.
- FIG. 4 is a schematic structural diagram of a first embodiment of a head-mounted smart device according to the present invention.
- FIG. 5 is a schematic structural diagram of a second embodiment of a head-mounted smart device according to the present invention.
- FIG. 6 is a schematic structural view of a third embodiment of a head-mounted smart device according to the present invention.
- FIG. 7 is a schematic structural view of a fourth embodiment of the head-mounted smart device of the present invention.
- FIG. 1 is a flowchart of a first embodiment of a virtual scene adjustment method according to the present invention.
- the virtual scene adjustment method of the present invention includes:
- Step S101 Collect head motion data.
- the head motion data includes at least an angle value of the head rotation and a change value of the relative position.
- the head motion data is from an inertial sensor deployed inside the head-mounted smart device.
- Inertial sensors are sensitive devices that use the principle of inertia and measurement techniques to sense the acceleration, position and attitude of the carrier motion, including gyroscopes and accelerometers.
- the gyroscope measures the angle value and direction of the head rotation
- the accelerometer measures the acceleration of the head rotation to calculate the distance of the head rotation, that is, the change value of the relative position of the head.
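As a rough illustration of the accelerometer path just described, acceleration can be double-integrated to estimate the head's relative position change. This is a minimal sketch under idealized assumptions (head starting at rest, simple Euler integration, no drift correction); the function name and sampling interval are illustrative, not from the patent.

```python
def integrate_displacement(accel_samples, dt):
    """Double-integrate acceleration samples (m/s^2), taken at interval dt (s),
    to estimate head displacement (m), assuming the head starts at rest."""
    velocity = 0.0
    displacement = 0.0
    for a in accel_samples:
        velocity += a * dt             # first integration: acceleration -> velocity
        displacement += velocity * dt  # second integration: velocity -> position
    return displacement

# Constant 1 m/s^2 for 1 s (100 samples at 10 ms) gives roughly 0.5 m
d = integrate_displacement([1.0] * 100, 0.01)
```

In practice inertial sensors drift, so real head-mounted devices fuse the gyroscope and accelerometer rather than integrating acceleration naively.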
- Step S102 Generate a first virtual scene adjustment instruction according to the head motion data to adjust the virtual scene.
- the first virtual scene adjustment instruction includes at least a view-angle adjustment instruction and a view-range adjustment instruction. In the above application example, the first virtual scene view-angle adjustment instruction is generated according to the angle of head rotation, and the first virtual scene view-range adjustment instruction is generated according to the change in the head's relative position; the two instructions then adjust the view angle and the view range of the virtual scene respectively.
- for example, if the inertial sensor detects that the head has rotated 20 degrees to the left and moved 20 cm to the left, a first virtual scene view-angle adjustment instruction is generated to rotate the view angle 20 degrees to the left, and a first virtual scene view-range adjustment instruction is generated to shift the field of view 20 cm to the left; the view angle and field of view of the virtual scene are then adjusted according to these two instructions.
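The 20-degree/20-cm example above can be sketched as a tiny mapping from head motion data to a first virtual scene adjustment instruction. The dictionary layout and function name are assumptions for illustration only; the patent does not specify a data format.

```python
def first_scene_instruction(rotation_deg, position_delta_cm):
    """Map head motion data to a first virtual scene adjustment instruction:
    the scene mirrors the head, rotating the view angle by the same angle
    and shifting the field of view by the same distance.
    Negative values denote leftward motion in this sketch."""
    return {
        "view_angle_deg": rotation_deg,      # e.g. -20 -> rotate view 20 deg left
        "view_shift_cm": position_delta_cm,  # e.g. -20 -> shift view 20 cm left
    }

# Head rotated 20 deg left and moved 20 cm left
instr = first_scene_instruction(-20, -20)
```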
- Step S103 collecting gesture image data
- the gesture image data is a gesture image captured by a binocular camera deployed on the head-mounted smart device.
- Step S104 Perform gesture recognition on the gesture image data to acquire gesture posture data.
- the gesture posture data includes at least the hand shape, the positions of the gesture key nodes, the movement trajectory, and the movement speed.
- the two images captured simultaneously by the binocular camera are compared, the depth of the gesture from the camera is calculated from their difference according to geometric principles, and the hand shape is then recognized using computer vision techniques and a gesture recognition algorithm.
- the gesture key nodes refer to the finger roots and fingertips; the gesture recognition algorithm can be chosen according to the required recognition accuracy and is not specifically limited herein.
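The binocular depth calculation mentioned above is commonly done with the pinhole stereo relation Z = f·B/d (focal length times baseline, divided by disparity). The patent does not give its actual formula, so the following is a hedged sketch with illustrative parameter names.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a gesture point from a binocular camera pair:
    focal_px     - focal length in pixels
    baseline_m   - distance between the two cameras in metres
    disparity_px - horizontal pixel offset of the same point between the images"""
    if disparity_px <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity_px

# 700 px focal length, 6 cm baseline, 60 px disparity -> 0.7 m
z = stereo_depth(700, 0.06, 60)
```

Larger disparities mean the hand is closer to the cameras, which is why nearby gestures are easier to measure precisely than distant ones.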
- Step S105 When the gesture posture data meets the preset criterion, generate the second virtual scene adjustment instruction from the gesture posture data and adjust the virtual scene with it in place of the first virtual scene adjustment instruction.
- the second virtual scene adjustment instruction is used to control at least one of moving, rotating, and zooming of the virtual scene.
- preset gesture data for adjusting the virtual scene is stored in advance in the head-mounted smart device; the gesture posture data obtained after recognition meets the preset criterion when its matching ratio against the preset gesture data reaches a preset threshold.
- in that case, the second virtual scene adjustment instruction is generated from the gesture posture data, and the virtual scene is moved, rotated, switched, or scaled by it in place of the first virtual scene adjustment instruction.
- the adjustment speed of the virtual scene is consistent with the moving speed of the gesture key nodes in the gesture posture data.
- the preset gesture data stored in the head-mounted smart device for adjusting the virtual scene includes: turning the palm; clenching the fist and moving it; pinching or releasing the thumb and index finger of one hand; and pinching or releasing the thumbs and index fingers of both hands.
- the type of second virtual scene adjustment instruction associated with each preset gesture is also preset in the head-mounted smart device: the instruction associated with turning the palm is a virtual scene rotation instruction; the instruction associated with clenching the fist and moving is a virtual scene movement instruction; the instruction associated with pinching or releasing the thumb and index finger of one hand is a virtual scene reduction or enlargement instruction; and the instruction associated with pinching or releasing the thumbs and index fingers of both hands is an adjustment-end instruction.
- the user can also customize the preset gesture data and its associated second virtual scene adjustment instruction type according to requirements.
- for example, if the gesture posture data obtained after recognition is a clenched fist moved 10 cm to the left, and its matching ratio against the preset clench-and-move gesture data reaches 70% on hand shape, key-node position, and movement trajectory, the gesture posture data is determined to meet the preset criterion; the corresponding second virtual scene adjustment instruction is then generated from it, namely an instruction to move the virtual scene 10 meters to the left, and this instruction replaces the first virtual scene adjustment instruction to move the virtual scene 10 meters to the left.
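The preset-criterion check in this example might look like the following sketch: recognized gesture posture data is compared feature-by-feature against stored presets, and a second adjustment instruction is generated only when the matching ratio reaches the threshold (70% in the example). The preset table, feature names, and matching scheme are all illustrative assumptions, not the patent's algorithm.

```python
# Hypothetical preset gesture table: hand shape plus trajectory pattern
PRESETS = [
    {"name": "move_scene", "shape": "fist", "trajectory": "linear"},
    {"name": "rotate_scene", "shape": "palm", "trajectory": "turn"},
]

def match_ratio(gesture, preset):
    """Fraction of compared features that agree between the recognized
    gesture posture data and a preset gesture."""
    keys = ("shape", "trajectory")
    return sum(gesture.get(k) == preset[k] for k in keys) / len(keys)

def second_instruction(gesture, threshold=0.7):
    """Generate a second virtual scene adjustment instruction when a preset
    matches at or above the threshold; otherwise return None, leaving the
    first instruction (from head motion) in control of the scene."""
    for preset in PRESETS:
        if match_ratio(gesture, preset) >= threshold:
            return {"type": preset["name"], "speed": gesture.get("speed")}
    return None

# Clenched fist moving linearly: matches the move preset fully
cmd = second_instruction({"shape": "fist", "trajectory": "linear", "speed": 0.1})
```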
- after an adjustment-end instruction is generated from the gesture posture data, the virtual scene continues to be adjusted according to the first virtual scene adjustment instruction generated from the head motion data.
- further, while the first virtual scene adjustment instruction adjusts one of the movement, rotation, and zoom of the virtual scene, virtual objects presented in the scene may be controlled according to other control instructions generated from the gesture posture data.
- for example, a shooting instruction is generated from the gesture posture data, and the virtual object is then animated in the virtual scene according to the shooting instruction to indicate whether it was hit.
- in this embodiment, the second virtual scene adjustment instruction is generated from the gesture posture data and replaces the first virtual scene adjustment instruction to adjust the virtual scene, so that head motion and hand motion satisfying the preset criterion adjust the virtual scene separately. This effectively prevents the sensory conflict caused by head motion and hand motion adjusting the virtual scene at the same time, and the adjustment speed of the virtual scene can be controlled according to the moving speed of the gesture key nodes, so that the scene is adjusted at a speed the user is comfortable with, further reducing the probability and impact of motion sickness.
- in other embodiments, the adjustment speed of the virtual scene may also be set according to user requirements.
- FIG. 2 is a flowchart of a second embodiment of a virtual scene adjustment method according to the present invention.
- the second embodiment of the virtual scene adjustment method of the present invention is based on the first embodiment of the virtual scene adjustment method of the present invention, and further includes:
- Step S201 Set the adjustment speed of the virtual scene by the second virtual scene adjustment instruction according to user input.
- the user may input at least one candidate adjustment speed, which is used to set the speed at which the second virtual scene adjustment instruction adjusts the virtual scene.
- the user may input candidate adjustment speeds by voice or by clicking; each adjustment speed is expressed as a ratio of the gesture change speed to the virtual scene change speed under the second virtual scene adjustment instruction.
- for example, two candidate adjustment speeds entered by voice, 1:1 and 1:2, mean respectively that when the hand moves 1 cm per second the virtual scene moves 1 cm per second, and that when the hand moves 1 cm per second the virtual scene moves 2 cm per second.
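The speed-ratio idea above can be sketched in a few lines: the user-selected ratio converts the measured hand speed into the scene's adjustment speed. The function and parameter names are assumptions, not from the patent.

```python
def scene_speed(hand_speed_cm_s, ratio):
    """Derive the virtual scene's adjustment speed from the gesture key node's
    measured speed. ratio is (hand units, scene units): with (1, 2), a hand
    moving 1 cm/s drives the scene at 2 cm/s."""
    hand_unit, scene_unit = ratio
    return hand_speed_cm_s * scene_unit / hand_unit

# Hand moving 1 cm/s under a 1:2 ratio -> scene moves 2 cm/s
v = scene_speed(1.0, (1, 2))
```

Because the ratio is under the user's control, a motion-sickness-prone user can pick a slower mapping without changing how they gesture.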
- Step S202 Adjust the virtual scene at the set adjustment speed according to the second virtual scene adjustment instruction.
- specifically, step S202 includes:
- Step S2021 respectively generate a virtual scene preview image at different candidate adjustment speeds according to the second virtual scene adjustment instruction, and present the image to the user;
- Step S2022 The adjustment speed is specified from the candidate adjustment speeds according to the user's selection of the virtual scene preview image.
- for example, when the second virtual scene adjustment instruction moves the virtual scene 10 meters to the right, preview images of the move are generated at the different candidate adjustment speeds and presented to the user for selection; if no candidate speed is selected, the virtual scene is adjusted at the same speed as the gesture change speed.
- the user can also vary the moving speed of the gesture key nodes by controlling how fast the hand moves, so that together with the selected adjustment ratio the virtual scene is adjusted at a speed adapted to the user, reducing the probability of motion sickness.
- the steps of this embodiment are performed before step S105, and this embodiment can be combined with the first embodiment of the virtual scene adjustment method of the present invention.
- FIG. 3 is a flowchart of a third embodiment of a virtual scene adjustment method according to the present invention.
- the third embodiment of the virtual scene adjustment method of the present invention is based on the first embodiment of the virtual scene adjustment method of the present invention, and further includes:
- Step S301 Perform gesture self-learning on the gesture image data, so that when the gesture image data is incomplete the corresponding gesture posture data can be predicted and acquired.
- gesture self-learning applies machine learning principles to simulate human learning behavior: it improves the accuracy of gesture recognition by reorganizing existing gesture image data, and when the gesture image data is incomplete it can predict the corresponding gesture posture data from the existing data, improving the intelligence of gesture recognition.
- after step S301, the method further includes:
- Step S3021 Generate a three-dimensional virtual gesture according to the gesture posture data and send gesture confirmation prompt information;
- the three-dimensional virtual gesture is generated from the gesture posture data acquired after recognition and presented to the user together with gesture confirmation prompt information, so that the user can confirm whether the gesture recognition is correct.
- Step S3022 Determine whether the gesture confirmation information is received;
- Step S3023 If the gesture confirmation information is not received, return to the gesture recognition step; otherwise, perform the subsequent steps.
- the user can confirm by voice or by clicking whether the three-dimensional virtual gesture is correct. If it is incorrect, i.e., a gesture recognition error occurred, the method returns to re-recognition; if it is correct, i.e., the recognition is accurate, the subsequent steps continue.
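The confirmation loop of steps S3021-S3023 can be sketched as follows; `recognize`, `render`, and `ask` are stand-in callables for gesture recognition, 3D gesture display, and the user's confirmation response, none of which the patent defines as APIs, and the retry cap is an added assumption.

```python
def confirm_gesture(recognize, render, ask, max_tries=3):
    """Repeat recognition until the user confirms the rendered 3D virtual
    gesture, then return the confirmed gesture posture data (None on give-up)."""
    for _ in range(max_tries):
        posture = recognize()   # step S104: gesture recognition
        render(posture)         # step S3021: show the 3D virtual gesture
        if ask():               # steps S3022/S3023: confirmation received?
            return posture      # proceed to scene adjustment
    return None

# Simulated user who rejects the first recognition and confirms the second
answers = iter([False, True])
result = confirm_gesture(lambda: "fist", lambda p: None, lambda: next(answers))
```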
- the steps of this embodiment are performed after step S104, and this embodiment can be combined with the first embodiment of the virtual scene adjustment method of the present invention.
- FIG. 4 is a schematic structural diagram of a first embodiment of a head-mounted smart device according to the present invention.
- the head-mounted smart device 40 includes: a head collection module 401, a scene adjustment module 402, a gesture collection module 403, and a gesture recognition module 404.
- the gesture collection module 403 is connected to the gesture recognition module 404.
- the head collection module 401 and the gesture recognition module 404 are respectively connected to the scene adjustment module 402.
- the head collection module 401 is configured to collect head motion data.
- the head motion data includes at least an angle value of the head rotation and a change value of the relative position.
- the head collection module 401 collects the head motion data and transmits it to the scene adjustment module 402 to adjust the virtual scene.
- the scene adjustment module 402 is configured to generate a first virtual scene adjustment instruction according to the head motion data to adjust the virtual scene;
- the scene adjustment module 402 generates a first virtual scene adjustment instruction according to the received head motion data to adjust the virtual scene.
- for example, if the received head motion data indicates that the head has turned 10 degrees to the right, the generated first virtual scene adjustment instruction rotates the virtual scene 10 degrees to the right, and the virtual scene is adjusted by this instruction.
- the gesture collection module 403 is configured to collect gesture image data.
- the gesture recognition module 404 is configured to perform gesture recognition on the gesture image data to acquire gesture posture data;
- the gesture posture data includes at least the hand shape, the positions of the gesture key nodes, the movement trajectory, and the movement speed.
- the gesture recognition module 404 compares the difference between the two camera images, calculates the depth of the gesture from the camera according to geometric principles, and then recognizes the hand shape, the positions of the gesture key nodes, the movement trajectory, and the movement speed using computer vision techniques and a gesture recognition algorithm.
- the gesture key nodes refer to the finger roots and fingertips; the gesture recognition algorithm can be chosen according to the required recognition accuracy and is not specifically limited herein.
- the scene adjustment module 402 is further configured to, when the gesture posture data meets the preset criterion, generate the second virtual scene adjustment instruction from the gesture posture data and adjust the virtual scene with it in place of the first virtual scene adjustment instruction.
- the second virtual scene adjustment instruction is used to control at least one of moving, rotating, and zooming of the virtual scene.
- preset gesture data for adjusting the virtual scene is stored in advance in the head-mounted smart device; when the gesture posture data obtained after recognition meets the preset criterion, that is, when its matching ratio against the preset gesture data reaches the preset threshold, the scene adjustment module 402 generates the corresponding second virtual scene adjustment instruction from the gesture posture data and moves, rotates, switches, or zooms the virtual scene with it in place of the first virtual scene adjustment instruction.
- the adjustment speed of the virtual scene is consistent with the moving speed of the gesture key nodes in the gesture posture data.
- after generating an adjustment-end instruction from the gesture posture data, the scene adjustment module 402 continues to adjust the virtual scene according to the first virtual scene adjustment instruction generated from the head motion data. Further, while the first virtual scene adjustment instruction adjusts one of the movement, rotation, and zoom of the virtual scene, the scene adjustment module 402 may control virtual objects presented in the scene according to other control instructions generated from the gesture posture data.
- for example, a shooting instruction is generated from the gesture posture data, and the virtual object is then animated in the virtual scene according to the shooting instruction to indicate whether it was hit.
- in this embodiment, the head-mounted smart device generates the second virtual scene adjustment instruction from the gesture posture data and replaces the first virtual scene adjustment instruction to adjust the virtual scene, so that head motion and hand motion meeting the preset criterion adjust the virtual scene separately. This effectively prevents the sensory conflict caused by head motion and hand motion adjusting the virtual scene at the same time, and the adjustment speed of the virtual scene can be controlled according to the moving speed of the gesture key nodes, so that the scene is adjusted at a user-adapted speed, further reducing the probability and impact of motion sickness.
- the head-mounted smart device can also set the adjustment speed of the virtual scene according to user requirements.
- FIG. 5 is a schematic structural diagram of a second embodiment of the head-mounted smart device according to the present invention.
- the structure of FIG. 5 is similar to that of FIG. 4 , and is not described here again.
- the difference is that the speed setting module 505 is further included, and the scene adjusting module 504 further includes a scene adjusting unit 5041 .
- the speed setting module 505 is connected to the gesture recognition module 503 and the scene adjustment module 504, respectively, for setting the adjustment speed of the virtual scene by the second virtual scene adjustment instruction according to the user input;
- the user may input at least one candidate adjustment speed by voice or click, etc.
- the adjustment speed is a ratio of the gesture change speed to the virtual scene change speed in the second virtual scene adjustment instruction.
- the user inputs a candidate adjustment speed by voice: 1:1.5, which means that when the hand moves 1 cm per second in the second virtual scene adjustment instruction, the virtual scene corresponds to moving 1.5 cm per second.
- the speed setting module 505 further includes:
- the scene preview unit 5051 is configured to separately generate a virtual scene preview image at different candidate adjustment speeds according to the second virtual scene adjustment instruction, and present the image to the user;
- the speed selecting unit 5052 is configured to specify an adjustment speed from the candidate adjustment speeds according to the user's selection of the virtual scene preview image.
- specifically, the scene preview unit 5051 generates, according to the second virtual scene adjustment instruction, a preview image of the virtual scene at each candidate adjustment speed input by the user and presents them to the user;
- the speed selection unit 5052 receives the preview image selected by the user and adjusts the virtual scene at the adjustment speed corresponding to that preview image.
- the scene adjustment unit 5041 is configured to adjust the virtual scene at the set adjustment speed according to the second virtual scene adjustment instruction.
- in this embodiment, the head-mounted smart device sets the speed at which the second virtual scene adjustment instruction adjusts the virtual scene according to user input, so the user can choose a comfortable adjustment speed, reducing the sensory conflict between virtual reality and the real world and thereby the probability of motion sickness.
- FIG. 6 is a schematic structural diagram of a third embodiment of the head-mounted smart device of the present invention. FIG. 6 is similar in structure to FIG. 4 and is not described here again. The difference is that a gesture self-learning module 606 and a gesture confirmation prompt module 607 are further included: the gesture self-learning module 606 is connected to the gesture recognition module 603, and the gesture confirmation prompt module 607 is connected to the gesture recognition module 603 and the scene adjustment module 604.
- the gesture self-learning module 606 is configured to perform gesture self-learning on the gesture image data, so that when the gesture image data is incomplete the corresponding gesture posture data can be predicted and acquired.
- the gesture self-learning module 606 applies machine learning principles to simulate human learning behavior: it improves the accuracy of gesture recognition by reorganizing existing gesture image data, and when the gesture image data is incomplete it can predict the corresponding gesture posture data from the existing data, improving the intelligence of gesture recognition.
- the gesture confirmation prompt module 607 specifically includes:
- the gesture generating unit 6071 is configured to generate a three-dimensional virtual gesture according to the gesture posture data and send gesture confirmation prompt information;
- specifically, the gesture generating unit 6071 generates the three-dimensional virtual gesture from the gesture posture data acquired after recognition, presents it to the user, and sends the gesture confirmation prompt information so that the user can confirm whether the gesture recognition is correct.
- the gesture confirmation unit 6072 is configured to determine whether the gesture confirmation information is received; when it is not received, processing returns to the gesture recognition module 603, and when it is received, the gesture posture data is transmitted to the scene adjustment module 604 to adjust the virtual scene.
- the user can confirm by voice or by clicking whether the three-dimensional virtual gesture is correct. If it is incorrect, i.e., a gesture recognition error occurred, the gesture confirmation unit 6072 receives no confirmation information and processing returns to the gesture recognition module 603 for re-recognition; if it is correct, i.e., the recognition is accurate, the gesture confirmation unit 6072 receives the confirmation information and transmits the gesture posture data to the scene adjustment module 604 to continue the subsequent steps.
- the gesture confirmation prompt module 607 can increase human-computer interaction and further improve the accuracy of gesture recognition.
- FIG. 7 is a schematic structural diagram of a fourth embodiment of the head-mounted smart device of the present invention.
- the head-mounted smart device 70 of the present invention includes a processor 701, a memory 702, an inertial sensor 703, a binocular camera 704, and a display 705, wherein the above components are connected to each other through a bus.
- the inertial sensor 703 is configured to collect head motion data
- the head motion data includes at least an angle value of the head rotation and a change value of the relative position.
- the inertial sensor 703 senses the angle value of the head rotation and the change value of the relative position and transmits it to the processor 701 to adjust the virtual scene.
- the processor 701 is configured to generate a first virtual scene adjustment instruction according to the head motion data to adjust the virtual scene. For example, if the received head motion data indicates that the head has turned 10 degrees to the right, the generated first virtual scene adjustment instruction moves the virtual scene 10 degrees to the right, and the virtual scene is adjusted according to this instruction.
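The mapping from head motion data to the first adjustment instruction can be sketched as follows. This is a minimal illustration under assumed names (`HeadMotion`, `SceneAdjustment`, `first_adjustment`); the patent does not specify this data layout.

```python
from dataclasses import dataclass

@dataclass
class HeadMotion:
    yaw_deg: float   # angle of head rotation (rightward positive)
    dx: float        # change of relative position along x
    dy: float        # change of relative position along y
    dz: float        # change of relative position along z

@dataclass
class SceneAdjustment:
    rotate_deg: float
    translate: tuple

def first_adjustment(motion: HeadMotion) -> SceneAdjustment:
    # The scene follows the head one-to-one: a 10-degree turn to the
    # right yields an instruction to move the scene 10 degrees right.
    return SceneAdjustment(rotate_deg=motion.yaw_deg,
                           translate=(motion.dx, motion.dy, motion.dz))
```

In practice the instruction would be consumed by the renderer each frame; the one-to-one mapping is what keeps the visual motion consistent with the vestibular signal.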
- the binocular camera 704 is configured to collect gesture image data and transmits the captured gesture image data to the processor 701 for gesture recognition.
- the processor 701 is further configured to perform gesture recognition on the gesture image data to obtain gesture posture data and, if the gesture posture data meets a preset criterion, to generate a second virtual scene adjustment instruction from the gesture posture data that replaces the first virtual scene adjustment instruction to adjust the virtual scene.
- the gesture posture data includes at least the hand shape, the positions of the key nodes of the gesture, the movement trajectory, and the movement speed.
- the processor 701 compares the differences between the two images captured by the binocular camera and calculates the depth of the gesture from the camera according to geometric principles; computer vision techniques and a gesture recognition algorithm are then used to identify the hand shape, the positions of the gesture key nodes, the movement trajectory, and the movement speed. The gesture key nodes refer to the finger roots and fingertips, and the gesture recognition algorithm can be chosen according to the required recognition accuracy, which is not specifically limited here.
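The "geometric principle" behind binocular depth is stereo triangulation: with a calibrated camera pair, the depth of a hand point follows from the disparity between its horizontal positions in the left and right images. The sketch below illustrates this; the focal length and baseline values in the example are made-up, not from the patent.

```python
def stereo_depth(x_left_px: float, x_right_px: float,
                 focal_px: float, baseline_m: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair, where d is the
    horizontal disparity in pixels, f the focal length in pixels, and
    B the camera baseline in meters."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("point must appear further left in the left image")
    return focal_px * baseline_m / disparity

# Example: a fingertip at x=420 px (left image) and x=400 px (right image),
# with a 700 px focal length and a 6 cm baseline, lies about 2.1 m away.
depth = stereo_depth(420, 400, focal_px=700, baseline_m=0.06)
```

Applying this per key node (finger roots and fingertips) yields the 3D positions from which the trajectory and movement speed can be derived.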
- the second virtual scene adjustment instruction is used to control at least one of moving, rotating, and zooming the virtual scene.
- the memory 702 pre-stores preset gesture data for adjusting the virtual scene. When the gesture posture data obtained after gesture recognition meets the preset criterion, that is, when the matching rate between the gesture posture data and the preset gesture data reaches a preset threshold, the processor 701 uses the gesture posture data to generate a corresponding second virtual scene adjustment instruction and moves, rotates, switches, or zooms the virtual scene in place of the first virtual scene adjustment instruction. At this point the first virtual scene adjustment instruction no longer adjusts the virtual scene, and the adjustment speed of the virtual scene is kept consistent with the movement speed of the gesture key nodes in the gesture posture data.
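The matching step above can be sketched as follows: the second adjustment instruction is generated only when the recognized posture matches a stored preset closely enough, and the scene's adjustment speed tracks the key-node movement speed. This is a hypothetical sketch; the preset table, the 0.8 threshold, and the feature-equality matching rate are all assumptions for illustration, not the patent's actual criterion.

```python
PRESET_GESTURES = {            # illustrative preset gesture data in memory 702
    "swipe_right": {"shape": "flat", "trajectory": "right"},
    "zoom_in":     {"shape": "pinch", "trajectory": "apart"},
}
MATCH_THRESHOLD = 0.8          # assumed preset matching-rate threshold

def match_rate(posture: dict, preset: dict) -> float:
    """Fraction of preset features that the recognized posture matches."""
    hits = sum(posture.get(k) == v for k, v in preset.items())
    return hits / len(preset)

def second_adjustment(posture: dict, node_speed: float):
    """Return (action, speed) when a preset matches well enough, else None.
    The adjustment speed is kept consistent with the key-node speed."""
    for action, preset in PRESET_GESTURES.items():
        if match_rate(posture, preset) >= MATCH_THRESHOLD:
            return action, node_speed
    return None
```

When `second_adjustment` returns a result, it supersedes the first (head-motion) adjustment instruction; when it returns `None`, head motion continues to drive the scene.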
- the display 705 is configured to display the virtual scene, including the virtual scene during adjustment and the adjusted virtual scene.
- the processor 701 can also execute instructions to implement the method provided by the second or third embodiment of the virtual scene adjustment method of the present invention, or the method provided by any non-conflicting combination of the first to third embodiments.
- in this embodiment, the head-mounted smart device includes a processor, a memory, an inertial sensor, a binocular camera, and a display. Other components, such as a speaker, a touch sensor, and a wireless transmission interface, can be added according to specific needs and are not specifically limited here.
- through the above scheme, the head-mounted smart device uses the gesture posture data to generate a second virtual scene adjustment instruction that replaces the first virtual scene adjustment instruction to adjust the virtual scene, so that head motion and hand motion meeting the preset criterion adjust the virtual scene separately. This effectively prevents the sensory conflict that arises when head motion and hand motion adjust the virtual scene simultaneously. Controlling the adjustment speed of the virtual scene according to the movement speed of the gesture key nodes also allows the virtual scene to be adjusted at a speed adapted to the user, further reducing the probability and severity of motion sickness.
Claims (13)
- 1. A head-mounted smart device, comprising: a head collection module configured to collect head motion data; a scene adjustment module configured to generate a first virtual scene adjustment instruction according to the head motion data to adjust a virtual scene; a gesture collection module configured to collect gesture image data; and a gesture recognition module configured to perform gesture recognition on the gesture image data to obtain gesture posture data; the scene adjustment module being further configured to, when the gesture posture data meets a preset criterion, generate a second virtual scene adjustment instruction from the gesture posture data and replace the first virtual scene adjustment instruction with it to adjust the virtual scene; wherein the head-mounted smart device further comprises a speed setting module configured to set, according to user input, the adjustment speed at which the second virtual scene adjustment instruction adjusts the virtual scene; the scene adjustment module further comprises a scene adjustment unit configured to adjust the virtual scene at the adjustment speed according to the second virtual scene adjustment instruction; and the head-mounted smart device further comprises a gesture self-learning module configured to perform gesture self-learning on the gesture image data so as to make a prediction when the gesture image data is incomplete and obtain the gesture posture data.
- 2. The head-mounted smart device according to claim 1, wherein the speed setting module comprises: a scene preview unit configured to generate virtual scene preview images at different candidate adjustment speeds according to the second virtual scene adjustment instruction and present them to the user; and a speed selection unit configured to designate the adjustment speed from among the candidate adjustment speeds according to the user's selection of a virtual scene preview image.
- 3. The head-mounted smart device according to claim 1, wherein the second virtual scene adjustment instruction is used to control at least one of moving, rotating, and zooming the virtual scene.
- 4. A virtual scene adjustment method, comprising: collecting head motion data; generating a first virtual scene adjustment instruction according to the head motion data to adjust a virtual scene; collecting gesture image data; performing gesture recognition on the gesture image data to obtain gesture posture data; and, when the gesture posture data meets a preset criterion, generating a second virtual scene adjustment instruction from the gesture posture data and replacing the first virtual scene adjustment instruction with it to adjust the virtual scene.
- 5. The method according to claim 4, wherein before the step of generating the second virtual scene adjustment instruction from the gesture posture data and replacing the first virtual scene adjustment instruction with it to adjust the virtual scene, the method further comprises: setting, according to user input, the adjustment speed at which the second virtual scene adjustment instruction adjusts the virtual scene; and the step of generating the second virtual scene adjustment instruction and replacing the first virtual scene adjustment instruction with it comprises: adjusting the virtual scene at the adjustment speed according to the second virtual scene adjustment instruction.
- 6. The method according to claim 5, wherein the step of setting, according to user input, the adjustment speed comprises: generating virtual scene preview images at different candidate adjustment speeds according to the second virtual scene adjustment instruction and presenting them to the user; and designating the adjustment speed from among the candidate adjustment speeds according to the user's selection of a virtual scene preview image.
- 7. The method according to claim 4, wherein the second virtual scene adjustment instruction is used to control at least one of moving, rotating, and zooming the virtual scene.
- 8. The method according to claim 4, wherein after performing gesture recognition on the gesture image data to obtain the gesture posture data, the method further comprises: performing gesture self-learning on the gesture image data so as to make a prediction when the gesture image data is incomplete and obtain the gesture posture data.
- 9. A head-mounted smart device, comprising: a processor, an inertial sensor, and a binocular camera, the inertial sensor and the binocular camera being connected to the processor through a bus; the inertial sensor being configured to collect head motion data; the processor being configured to generate a first virtual scene adjustment instruction according to the head motion data to adjust a virtual scene; the binocular camera being configured to collect gesture image data; and the processor being further configured to perform gesture recognition on the gesture image data to obtain gesture posture data and, when the gesture posture data meets a preset criterion, to generate a second virtual scene adjustment instruction from the gesture posture data and replace the first virtual scene adjustment instruction with it to adjust the virtual scene.
- 10. The head-mounted smart device according to claim 9, wherein before generating the second virtual scene adjustment instruction from the gesture posture data and replacing the first virtual scene adjustment instruction with it to adjust the virtual scene, the processor is further configured to: set, according to user input, the adjustment speed at which the second virtual scene adjustment instruction adjusts the virtual scene; and the processor generating the second virtual scene adjustment instruction and replacing the first virtual scene adjustment instruction with it specifically comprises: adjusting the virtual scene at the adjustment speed according to the second virtual scene adjustment instruction.
- 11. The head-mounted smart device according to claim 10, wherein the processor setting, according to user input, the adjustment speed specifically comprises: generating virtual scene preview images at different candidate adjustment speeds according to the second virtual scene adjustment instruction and presenting them to the user; and designating the adjustment speed from among the candidate adjustment speeds according to the user's selection of a virtual scene preview image.
- 12. The head-mounted smart device according to claim 9, wherein the second virtual scene adjustment instruction is used to control at least one of moving, rotating, and zooming the virtual scene.
- 13. The head-mounted smart device according to claim 9, wherein after performing gesture recognition on the gesture image data to obtain the gesture posture data, the processor is further configured to: perform gesture self-learning on the gesture image data so as to make a prediction when the gesture image data is incomplete and obtain the gesture posture data.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610972547.7A CN106527709B (zh) | 2016-10-28 | 2016-10-28 | 一种虚拟场景调整方法及头戴式智能设备 |
CN201610972547.7 | 2016-10-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018076912A1 true WO2018076912A1 (zh) | 2018-05-03 |
Family
ID=58349694
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/098793 WO2018076912A1 (zh) | 2016-10-28 | 2017-08-24 | 一种虚拟场景调整方法及头戴式智能设备 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106527709B (zh) |
WO (1) | WO2018076912A1 (zh) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106527709B (zh) * | 2016-10-28 | 2020-10-02 | Tcl移动通信科技(宁波)有限公司 | 一种虚拟场景调整方法及头戴式智能设备 |
US10268263B2 (en) * | 2017-04-20 | 2019-04-23 | Microsoft Technology Licensing, Llc | Vestibular anchoring |
CN107479712B (zh) * | 2017-08-18 | 2020-08-04 | 北京小米移动软件有限公司 | 基于头戴式显示设备的信息处理方法及装置 |
CN107678539A (zh) * | 2017-09-07 | 2018-02-09 | 歌尔科技有限公司 | 用于头戴显示设备的显示方法及头戴显示设备 |
CN109511004B (zh) * | 2017-09-14 | 2023-09-01 | 中兴通讯股份有限公司 | 一种视频处理方法及装置 |
US11010436B1 (en) * | 2018-04-20 | 2021-05-18 | Facebook, Inc. | Engaging users by personalized composing-content recommendation |
CN110874132A (zh) * | 2018-08-29 | 2020-03-10 | 塔普翊海(上海)智能科技有限公司 | 头戴式虚实交互装置和虚实交互方法 |
CN111367414B (zh) * | 2020-03-10 | 2020-10-13 | 厦门络航信息技术有限公司 | 虚拟现实对象控制方法、装置、虚拟现实系统及设备 |
CN111415421B (zh) * | 2020-04-02 | 2024-03-19 | Oppo广东移动通信有限公司 | 虚拟物体控制方法、装置、存储介质及增强现实设备 |
CN111651052A (zh) * | 2020-06-10 | 2020-09-11 | 浙江商汤科技开发有限公司 | 虚拟沙盘的展示方法、装置、电子设备及存储介质 |
CN114153307A (zh) * | 2020-09-04 | 2022-03-08 | 中移(成都)信息通信科技有限公司 | 场景区块化处理方法、装置、电子设备及计算机存储介质 |
WO2022252150A1 (zh) * | 2021-06-02 | 2022-12-08 | 陈盈吉 | 避免动晕症发生的虚拟实境控制方法 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102460349A (zh) * | 2009-05-08 | 2012-05-16 | 寇平公司 | 使用运动和语音命令对主机应用进行远程控制 |
US20150346813A1 (en) * | 2014-06-03 | 2015-12-03 | Aaron Michael Vargas | Hands free image viewing on head mounted display |
CN105898346A (zh) * | 2016-04-21 | 2016-08-24 | 联想(北京)有限公司 | 控制方法、电子设备及控制系统 |
CN105988583A (zh) * | 2015-11-18 | 2016-10-05 | 乐视致新电子科技(天津)有限公司 | 手势控制方法及虚拟现实显示输出设备 |
CN106527709A (zh) * | 2016-10-28 | 2017-03-22 | 惠州Tcl移动通信有限公司 | 一种虚拟场景调整方法及头戴式智能设备 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101489150B (zh) * | 2009-01-20 | 2010-09-01 | 北京航空航天大学 | 一种虚实混合的远程协同工作方法 |
CN102789313B (zh) * | 2012-03-19 | 2015-05-13 | 苏州触达信息技术有限公司 | 一种用户交互系统和方法 |
CN105975083B (zh) * | 2016-05-27 | 2019-01-18 | 北京小鸟看看科技有限公司 | 一种虚拟现实环境下的视觉校正方法 |
-
2016
- 2016-10-28 CN CN201610972547.7A patent/CN106527709B/zh active Active
-
2017
- 2017-08-24 WO PCT/CN2017/098793 patent/WO2018076912A1/zh active Application Filing
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110688018A (zh) * | 2019-11-05 | 2020-01-14 | 广东虚拟现实科技有限公司 | 虚拟画面的控制方法、装置、终端设备及存储介质 |
CN110688018B (zh) * | 2019-11-05 | 2023-12-19 | 广东虚拟现实科技有限公司 | 虚拟画面的控制方法、装置、终端设备及存储介质 |
CN111694427A (zh) * | 2020-05-13 | 2020-09-22 | 北京农业信息技术研究中心 | Ar虚拟摇蜜互动体验系统、方法、电子设备及存储介质 |
CN111741287A (zh) * | 2020-07-10 | 2020-10-02 | 南京新研协同定位导航研究院有限公司 | 一种mr眼镜利用位置信息触发内容的方法 |
CN111741287B (zh) * | 2020-07-10 | 2022-05-17 | 南京新研协同定位导航研究院有限公司 | 一种mr眼镜利用位置信息触发内容的方法 |
CN116309850A (zh) * | 2023-05-17 | 2023-06-23 | 中数元宇数字科技(上海)有限公司 | 一种虚拟触控识别方法、设备及存储介质 |
CN116309850B (zh) * | 2023-05-17 | 2023-08-08 | 中数元宇数字科技(上海)有限公司 | 一种虚拟触控识别方法、设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN106527709A (zh) | 2017-03-22 |
CN106527709B (zh) | 2020-10-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018076912A1 (zh) | 一种虚拟场景调整方法及头戴式智能设备 | |
WO2017075973A1 (zh) | 无人机操控界面交互方法、便携式电子设备和存储介质 | |
WO2012173373A2 (ko) | 가상터치를 이용한 3차원 장치 및 3차원 게임 장치 | |
WO2020040363A1 (ko) | 4d 아바타를 이용한 동작가이드장치 및 방법 | |
WO2015126197A1 (ko) | 카메라 중심의 가상터치를 이용한 원격 조작 장치 및 방법 | |
WO2018054056A1 (zh) | 一种互动式运动方法及头戴式智能设备 | |
WO2010056023A2 (en) | Method and device for inputting a user's instructions based on movement sensing | |
WO2013133583A1 (ko) | 실감 인터랙션을 이용한 인지재활 시스템 및 방법 | |
WO2020036786A1 (en) | Detection of unintentional movement of a user interface device | |
EP3622375A1 (en) | Method and wearable device for performing actions using body sensor array | |
EP3685248B1 (en) | Tracking of location and orientation of a virtual controller in a virtual reality system | |
WO2014135023A1 (zh) | 一种智能终端的人机交互方法及系统 | |
WO2013055024A1 (ko) | 로봇을 이용한 인지 능력 훈련 장치 및 그 방법 | |
WO2015165162A1 (zh) | 一种主机运动感测方法、组件及运动感测系统 | |
WO2022182096A1 (en) | Real-time limb motion tracking | |
WO2021066392A2 (ko) | 골프 스윙에 관한 정보를 추정하기 위한 방법, 디바이스 및 비일시성의 컴퓨터 판독 가능한 기록 매체 | |
WO2023074980A1 (ko) | 동작 인식 기반 상호작용 방법 및 기록 매체 | |
WO2018076454A1 (zh) | 一种数据处理方法及其相关设备 | |
CN115686193A (zh) | 一种增强现实环境下虚拟模型三维手势操纵方法及系统 | |
WO2022092589A1 (ko) | 인공지능 기반의 운동 코칭 장치 | |
JP2005046931A (ja) | ロボットアーム・ハンド操作制御方法、ロボットアーム・ハンド操作制御システム | |
WO2017219622A1 (zh) | 图像处理系统及方法 | |
WO2020224566A1 (zh) | 一种虚拟现实、增强现实、融合现实手部操作方法及装置 | |
WO2021177674A1 (ko) | 2차원 이미지로부터 사용자의 제스처를 추정하는 방법, 시스템 및 비일시성의 컴퓨터 판독 가능 기록 매체 | |
WO2016085122A1 (ko) | 사용자 패턴 기반의 동작 인식 보정 장치 및 그 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17865851 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17865851 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18.09.2019) |
|