WO2023178586A1 - Human-computer interaction method for wearable device, wearable device and storage medium - Google Patents

Human-computer interaction method for wearable device, wearable device and storage medium

Info

Publication number
WO2023178586A1
WO2023178586A1 · PCT/CN2022/082674 · CN2022082674W
Authority
WO
WIPO (PCT)
Prior art keywords
wearable device
environment
body part
movement
mapping
Prior art date
Application number
PCT/CN2022/082674
Other languages
English (en)
French (fr)
Inventor
滕龙
李鑫超
朱梦龙
Original Assignee
深圳市闪至科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市闪至科技有限公司
Priority to PCT/CN2022/082674
Priority to CN202280048813.0A (CN117677919A)
Publication of WO2023178586A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the present application relates to the field of human-computer interaction, and in particular, to a human-computer interaction method for a wearable device, a wearable device, and a storage medium.
  • embodiments of the present application provide a human-computer interaction method for a wearable device, a wearable device, and a storage medium, aiming to improve the convenience and interactive immersion of human-computer interaction on a wearable device.
  • embodiments of the present application provide a human-computer interaction method for wearable devices, including:
  • displaying a 3D environment through the wearable device; acquiring an image captured by an image sensor provided on the wearable device and identifying, according to the image, the movement and/or operation of at least one body part of the user wearing the wearable device; and, according to the movement and/or operation, mapping to generate a visual indicator that moves in the depth direction in the 3D environment, the visual indicator being used to select a target object in the 3D environment.
  • in this method, the wearable device displays a 3D environment and uses an image sensor provided on the wearable device to identify the movement and/or operation of at least one body part of the user wearing the wearable device; according to the recognized movement and/or operation of the at least one body part, a visual indicator is mapped to move in the depth direction of the displayed 3D environment, so that the user can interact with the wearable device through contact between body parts and through the movement and/or operation of body parts. This brings interactive feedback to the user and greatly improves the convenience and interactive immersion of human-computer interaction with wearable devices.
  • embodiments of the present application also provide a human-computer interaction method for wearable devices, including:
  • displaying a 3D environment through the wearable device; acquiring an image captured by an image sensor provided on the wearable device and identifying, according to the image, a positioning point of at least one body part of the user wearing the wearable device; according to the movement of the positioning point, mapping to generate movement of a visual indicator within the 3D environment; and generating a corresponding control instruction according to the user's operation on the positioning point.
  • in this method, the wearable device displays a 3D environment and uses an image sensor provided on the wearable device to identify a positioning point of at least one body part of the user wearing the wearable device; based on the movement of the identified positioning point, movement of a visual indicator in the 3D environment is generated by mapping, and corresponding control instructions are generated based on the user's operation on the positioning point, allowing the user to interact with the wearable device through the movement of or operation on the positioning point of a body part. This brings interactive feedback to the user and greatly improves the convenience and interactive immersion of human-computer interaction with wearable devices.
  • embodiments of the present application also provide a wearable device, which includes: a display device, an image sensor, a memory, and a processor;
  • the display device is used to display a 3D environment
  • the image sensor is used to capture images
  • the memory is used to store computer programs
  • the processor is used to execute the computer program and when executing the computer program, implement the following steps:
  • acquiring the image captured by the image sensor, identifying, according to the image, the movement and/or operation of at least one body part of the user wearing the wearable device, and, based on the movement and/or operation, mapping to generate a visual indicator that moves in the depth direction within the 3D environment.
  • embodiments of the present application further provide a wearable device, which includes: a display device, an image sensor, a memory, and a processor;
  • the display device is used to display a 3D environment
  • the image sensor is used to capture images
  • the memory is used to store computer programs
  • the processor is used to execute the computer program and when executing the computer program, implement the following steps:
  • acquiring the image captured by the image sensor, identifying, according to the image, a positioning point of at least one body part of the user wearing the wearable device, mapping to generate movement of a visual indicator within the 3D environment according to the movement of the positioning point, and generating a corresponding control instruction according to the user's operation on the positioning point.
  • embodiments of the present application further provide a storage medium that stores a computer program.
  • when the computer program is executed by a processor, it causes the processor to implement the human-computer interaction method for a wearable device as described above.
  • Figure 1 is a schematic diagram of a scenario for implementing a human-computer interaction method for a wearable device provided by an embodiment of the present application
  • Figure 2 is a schematic diagram of another scenario for implementing the human-computer interaction method of the wearable device provided by the embodiment of the present application;
  • Figure 3 is a schematic flow chart of the steps of a human-computer interaction method for a wearable device provided by an embodiment of the present application;
  • Figure 4 is a schematic diagram of a gesture provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of identifying a control area on a mapping object corresponding to a body part in an embodiment of the present application
  • Figure 6 is another schematic diagram of identifying a control area on a mapping object corresponding to a body part in an embodiment of the present application
  • Figure 7 is a schematic diagram of the movement direction of the visual indicator in the 3D environment in the embodiment of the present application.
  • Figure 8 is a schematic diagram of the rotation trajectory in the embodiment of the present application.
  • Figure 9 is another gesture diagram provided by an embodiment of the present application.
  • Figure 10 is a schematic diagram of a virtual input keyboard marked on the mapping object corresponding to the hand in the embodiment of the present application;
  • Figure 11 is another schematic diagram of a virtual input keyboard marked on the mapping object corresponding to the hand in the embodiment of the present application;
  • Figure 12 is another schematic diagram of a virtual input keyboard marked on the mapping object corresponding to the hand in the embodiment of the present application;
  • Figure 13 is another schematic diagram of a virtual input keyboard marked on the mapping object corresponding to the hand in the embodiment of the present application;
  • Figure 14 is another gesture diagram provided by an embodiment of the present application.
  • Figure 15 is a schematic diagram of marking the virtual input keyboard and control area on the hand according to the embodiment of the present application.
  • Figure 16 is a schematic flow chart of steps of another human-computer interaction method for wearable devices provided by an embodiment of the present application.
  • Figure 17 is another gesture diagram provided by an embodiment of the present application.
  • Figure 18 is a schematic diagram of identifying anchor points on the mapping object corresponding to the hand in the embodiment of the present application.
  • Figure 19 is a schematic diagram of visual indication marks displayed on the mapping object corresponding to the hand in the embodiment of the present application.
  • Figure 20 is another schematic diagram of visual indication marks displayed on the mapping object corresponding to the hand in the embodiment of the present application.
  • Figure 21 is a schematic diagram of a user operation anchor point in an embodiment of the present application.
  • Figure 22 is a schematic structural block diagram of a wearable device provided by an embodiment of the present application.
  • FIG. 1 is a schematic diagram of a scenario for implementing the human-computer interaction method of a wearable device provided by an embodiment of the present application.
  • the wearable device 100 includes a display device 110 and an image sensor 120, wherein the display device 110 is used to display a 3D environment, and the image sensor 120 is used to identify the movement and/or operation of at least one body part of the user wearing the wearable device 100, or to identify a positioning point of at least one body part of the user wearing the wearable device 100, where the positioning point is a key point of the body part, for example a joint point of the user's finger.
  • the wearable device 100 can display a 3D environment in the form of virtual reality (VR), augmented reality (AR) or mixed reality (MR).
  • the 3D environment displayed in the form of virtual reality does not include the real environment
  • the 3D environment displayed in the form of augmented reality includes the virtual environment and the real environment
  • the 3D environment displayed in the form of mixed reality includes the virtual environment and the real environment.
  • the wearable device 100 recognizes the movement and/or operation of at least one body part of the user wearing the wearable device through the image sensor 120; based on the recognized movement and/or operation of the at least one body part, a visual indicator is mapped to move in the depth direction in the 3D environment, and the visual indicator is used to select a target object in the 3D environment.
  • the target object may be an object in a virtual environment or an object in a real environment, and the visual indicator may include a cursor.
  • the wearable device 100 identifies the positioning point of at least one body part of the user wearing the wearable device through the image sensor 120; based on the movement of the recognized positioning point, movement of a visual indicator within the 3D environment is generated by mapping, and the visual indicator is used to select a target object in the 3D environment; according to the user's operation on the anchor point, the object at the location of the visual indicator is determined as the selected target object.
  • FIG. 2 is a schematic diagram of another scenario for implementing the human-computer interaction method of a wearable device provided by an embodiment of the present application.
  • this scene includes a wearable device 100 and a movable platform 200.
  • the wearable device 100 is communicatively connected with the movable platform 200, and the wearable device 100 is used to display images transmitted by the movable platform 200.
  • the movable platform 200 includes a platform body 210, a power system 220 and a control system (not shown in Figure 2) provided on the platform body 210.
  • the power system 220 is used to provide moving power for the platform body 210.
  • the power system 220 may include one or more propellers 221, one or more motors 222 corresponding to the one or more propellers, and one or more electronic speed controllers (ESCs).
  • the motor 222 is connected between the ESC and the propeller 221, and the motor 222 and the propeller 221 are arranged on the platform body 210 of the movable platform 200; the ESC is used to receive the driving signal generated by the control system and, according to the driving signal, provide a driving current to the motor 222 to control the rotation speed of the motor 222.
  • the motor 222 is used to drive the propeller 221 to rotate, thereby providing power for the movement of the movable platform 200. The power enables the movable platform 200 to achieve movement with one or more degrees of freedom.
  • movable platform 200 may rotate about one or more axes of rotation.
  • the above-mentioned rotation axis may include a roll axis, a yaw axis, and a pitch axis.
  • the motor 222 may be a DC motor or an AC motor.
  • the motor 222 may be a brushless motor or a brushed motor.
  • the control system may include a controller and a sensing system.
  • the sensing system is used to measure the attitude information of the movable platform, that is, the position information and state information of the movable platform 200 in space, such as three-dimensional position, three-dimensional angle, three-dimensional velocity, three-dimensional acceleration, and three-dimensional angular velocity.
  • the sensing system may include, for example, at least one of a gyroscope, an ultrasonic sensor, an electronic compass, an inertial measurement unit (IMU), a vision sensor, a global navigation satellite system, a barometer, and other sensors.
  • the global navigation satellite system may be the Global Positioning System (GPS).
  • the controller is used to control the movement of the movable platform 200.
  • the movement of the movable platform 200 can be controlled based on attitude information measured by the sensing system. It should be understood that the controller can control the movable platform 200 according to pre-programmed instructions.
  • the wearable device 100 includes a display device 110 and an image sensor 120, wherein the display device 110 is used to display a 3D environment, and the image sensor 120 is used to identify the movement and/or operation of at least one body part of the user wearing the wearable device 100, or to identify an anchor point of at least one body part of the user wearing the wearable device 100.
  • the wearable device 100 identifies the positioning point of at least one body part of the user wearing the wearable device through the image sensor 120; based on the movement of the recognized positioning point, movement of a visual indicator within the 3D environment is generated by mapping and the movement or posture of the movable platform 200 is controlled; according to the user's operation on the anchor point, a control instruction for the movable platform 200 is generated, and the control instruction is used to control the movable platform 200 to stop moving or continue to move. For example, while the drone is flying, if the user's operation on the anchor point is detected, the drone is controlled to hover; after the drone hovers, if the user's operation on the anchor point is detected, the drone is controlled to fly.
  • the wearable device 100 may include eyewear devices, smart watches, smart bracelets, etc.
  • the movable platform 200 may include drones and gimbal vehicles; the drones may include rotor-type drones, such as quad-rotor, hexa-rotor and octo-rotor drones, as well as fixed-wing drones or combinations of rotary-wing and fixed-wing drones, which is not limited here.
  • FIG. 3 is a schematic flowchart of steps of a human-computer interaction method for a wearable device provided by an embodiment of the present application.
  • the human-computer interaction method of the wearable device includes steps S101 to S103.
  • Step S101 Display the 3D environment through the wearable device.
  • the wearable device can display the 3D environment in the form of virtual reality, augmented reality or mixed reality, which is not specifically limited in the embodiments of the present application.
  • the 3D environment displayed in the form of virtual reality does not include the real environment
  • the 3D environment displayed in the form of augmented reality includes the virtual environment and the real environment
  • the 3D environment displayed in the form of mixed reality includes the virtual environment and the real environment.
  • Step S102 Obtain the image captured by the image sensor provided on the wearable device, and identify the movement and/or operation of at least one body part of the user wearing the wearable device based on the image.
  • the current posture of at least one body part of the user wearing the wearable device is identified according to the image captured by the image sensor; when the current posture is the first preset posture, the movement and/or operation of the at least one body part is identified through the image captured by the image sensor.
  • at least one body part may include a hand, an arm, etc.
  • the first preset posture may be a hand posture.
  • the first preset gesture is the gesture shown in FIG. 4 .
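  • The posture-gated recognition described above can be illustrated with a short sketch. This is not the patent's implementation: the keypoint names, the open-hand heuristic standing in for the first preset posture of Figure 4, and the threshold are all assumptions.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_first_preset_posture(keypoints, threshold=0.12):
    # Assumed heuristic: an open hand (all fingertips far from the wrist)
    # stands in for the first preset posture of Figure 4.
    wrist = keypoints["wrist"]
    tips = ("thumb_tip", "index_tip", "middle_tip", "ring_tip", "little_tip")
    return all(dist(keypoints[t], wrist) > threshold for t in tips)

def hand_movement(prev_keypoints, keypoints):
    # Displacement of the hand centre between two frames (device coordinates).
    prev_c = [sum(p[i] for p in prev_keypoints.values()) / len(prev_keypoints) for i in range(3)]
    cur_c = [sum(p[i] for p in keypoints.values()) / len(keypoints) for i in range(3)]
    return [c - p for c, p in zip(cur_c, prev_c)]

def process_frame(prev_keypoints, keypoints):
    # Movement/operation recognition only starts once the gating posture is seen,
    # which avoids misrecognition and saves computation.
    if not is_first_preset_posture(keypoints):
        return None
    if prev_keypoints is None:
        return [0.0, 0.0, 0.0]
    return hand_movement(prev_keypoints, keypoints)
```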
  • when the current posture of at least one body part of the user is the first preset posture, the visual indicator is displayed in the 3D environment.
  • the visual indicator is used to select a target object in the 3D environment, and the visual indicator includes a cursor.
  • when the current posture of at least one body part of the user is the first preset posture, a mapping object corresponding to the at least one body part is displayed in the 3D environment and a control area of the visual indicator is identified on the mapping object.
  • the control area includes at least one of the inner forearm area, the outer forearm area, the palm area, and the back of the hand area of the mapping object corresponding to at least one body part.
  • the mapping object includes at least one body part or a virtual model corresponding to at least one body part in the image captured by the image sensor.
  • the image captured by the image sensor and containing at least one body part of the user is superimposed and displayed in the 3D environment, thereby displaying a mapping object corresponding to the at least one body part.
  • the user can control the movement of the visual indicator in the 3D environment by moving and/or operating his or her body parts, which provides control feedback, makes it convenient for the user to select target objects in the 3D environment, and also improves interactive immersion.
  • the displayed mapping object corresponding to at least one body part is the left hand in the image captured by the image sensor, and the palm area of the left hand is marked with a control area 11 with a visual indicator mark.
  • the control area of the visual indicator can also be identified on the displayed back-of-hand area, inner forearm area or outer forearm area of the left hand.
  • the displayed mapping object corresponding to at least one body part is a virtual model of the right hand, and the palm area of the virtual model of the right hand is marked with a control area 12 of visual indication marks.
  • the user can control the movement of the visual indicator within the 3D environment through contact with the palm area and through the movement and/or operation of the body part; this provides control feedback to the user, makes it easier to select target objects in the 3D environment, and also improves interactive immersion.
  • displaying the mapping object corresponding to at least one body part in the 3D environment may include: obtaining the mapping object corresponding to the current posture of the at least one body part; and displaying the mapping object in the 3D environment.
  • Different postures correspond to different mapping objects.
  • the position of the mapping object in the 3D environment can be fixed or determined based on the position of the body part relative to the wearable device.
  • Step S103 According to the movement and/or operation, the visual indicator is mapped and generated to move in the depth direction in the 3D environment.
  • the visual indicator is used to select the target object in the 3D environment.
  • the movement directions of visual indicators in the 3D environment include depth direction, horizontal direction and vertical direction.
  • the movement direction of the visual indicator in the 3D environment can be as shown in Figure 7.
  • the depth direction is the Z direction, including the +Z direction and -Z direction
  • the horizontal direction is the X direction, including the +X direction and -X direction
  • the vertical direction is the Y direction, including +Y direction and -Y direction.
  • the +Z direction can be in front of the visual indicator 21 in the 3D environment, the -Z direction behind the visual indicator 21, and the +X direction to the right of the visual indicator 21.
  • movement of the visual indicator in the depth direction of the 3D environment is generated by mapping based on the movement and/or operation of at least one body part of the user.
  • the preset plane is a plane composed of the horizontal direction and the vertical direction of the visual indicator in the 3D environment.
  • the preset plane is the XOY plane; that is, when the visual indicator 21 does not change in the XOY plane, the user can control the visual indicator 21 to move in the depth direction, namely the Z direction, in the 3D environment by moving and/or operating at least one body part.
  • the position change information of at least one body part relative to the wearable device is determined; based on the position change information, a visual indicator is mapped and generated to move in the depth direction in the 3D environment .
  • for example, the gesture of the user's hand is as shown in Figure 4, Figure 5 or Figure 6; the gesture of the user's left hand is as shown in Figure 5 and the gesture of the right hand is as shown in Figure 6.
  • when the user moves the left and right hands simultaneously so that the distance between the hands and the wearable device increases, the visual indicator moves forward in the 3D environment, and when the user moves the left and right hands simultaneously so that the distance between the hands and the wearable device decreases, the visual indicator moves backward in the 3D environment.
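  • A minimal sketch of this mapping (change in hand-to-device distance → movement of the visual indicator along the depth axis) is given below. The gain and the sign convention (+Z is forward, as in Figure 7) are assumptions.

```python
def update_cursor_depth(cursor_z, prev_hand_distance, hand_distance, gain=2.0):
    # prev_hand_distance / hand_distance: distance of the tracked body part from
    # the wearable device in consecutive frames (metres).
    delta = hand_distance - prev_hand_distance
    # Distance increases -> indicator moves forward (+Z); decreases -> backward (-Z).
    return cursor_z + gain * delta

# Usage: the hands moved about 3 cm further away, so the indicator advances about 6 cm.
z = update_cursor_depth(cursor_z=1.0, prev_hand_distance=0.40, hand_distance=0.43)
print(z)  # ~1.06
```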
  • the recognized operation of at least one body part includes an operation of the user's finger on at least one body part, and movement of the visual indicator in the depth direction of the 3D environment is generated by mapping according to this operation.
  • this may include: based on the operation of the user's finger on at least one body part, mapping to generate a rotation operation of the mapping object corresponding to the finger in the control area; obtaining a rotation trajectory corresponding to the rotation operation; and, when the shape of the rotation trajectory is a preset shape, controlling the visual indicator to move in the depth direction in the 3D environment.
  • the preset shapes can be circles, ovals, rectangles, triangles, etc.
  • if the shape of the rotation trajectory 11 corresponding to the rotation operation of the mapping object corresponding to the user's finger in the control area of the visual indicator is an ellipse and the rotation direction of the rotation trajectory is clockwise, the visual indicator can be controlled to move forward in the 3D environment; if the rotation direction of the rotation trajectory is counterclockwise, the visual indicator can be controlled to move backward in the 3D environment.
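  • One way to realize this check is sketched below: the traced positions are tested for a roughly closed loop, and the signed (shoelace) area gives the rotation direction, with clockwise mapping to forward and counterclockwise to backward. The closedness test, thresholds and the Y-up plane coordinate system are assumptions.

```python
def signed_area(points):
    # Shoelace formula: positive for counterclockwise, negative for clockwise (Y-up axes).
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return area / 2.0

def depth_step_from_rotation(trajectory, min_area=1e-4, close_ratio=0.2):
    # trajectory: list of (x, y) positions traced by the finger in the control area.
    # Returns +1 (move forward), -1 (move backward) or 0 (no valid rotation).
    if len(trajectory) < 8:
        return 0
    xs = [p[0] for p in trajectory]
    ys = [p[1] for p in trajectory]
    span = max(max(xs) - min(xs), max(ys) - min(ys))
    gap = ((xs[0] - xs[-1]) ** 2 + (ys[0] - ys[-1]) ** 2) ** 0.5
    if span == 0 or gap > close_ratio * span:
        return 0                       # trace is not a closed, loop-like trajectory
    area = signed_area(trajectory)
    if abs(area) < min_area:
        return 0
    return 1 if area < 0 else -1       # clockwise (negative area) -> forward
```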
  • mapping the visual indicator to move in the depth direction in the 3D environment based on the recognized movement and operation of at least one body part may include: based on the user's finger operation on at least one body part, mapping to generate a rotation operation of the mapping object corresponding to the finger in the control area; obtaining the rotation trajectory corresponding to the rotation operation; determining the position change information of the at least one body part relative to the wearable device based on the recognized movement of the at least one body part; and, when the shape of the rotation trajectory is a preset shape, mapping the visual indicator to move in the depth direction in the 3D environment based on the position change information.
  • a sliding operation of the user's finger on at least one body part is recognized through the image sensor, and a sliding operation of the mapping object corresponding to the finger in the control area is generated by mapping; according to the sliding operation of the mapping object corresponding to the finger in the control area, the visual indicator is controlled to move in the horizontal or vertical direction in the 3D environment.
  • the user can control the movement of the visual indicator in the horizontal or vertical direction in the 3D environment through the sliding operation of a finger on a body part, which greatly improves the convenience and immersion of human-computer interaction.
  • the control area with a visual indicator mark is marked in the palm area of the mapping object corresponding to the user's left hand.
  • when it is recognized that the fingers of the user's right hand slide to the left in the palm area of the left hand, mapping generates a leftward sliding operation of the mapping object corresponding to the right-hand fingers in the palm area (the control area of the visual indicator) of the mapping object corresponding to the user's left hand, and the visual indicator moves horizontally to the left in the 3D environment.
  • when the mapping object corresponding to the right-hand fingers slides to the right in the palm area of the mapping object corresponding to the user's left hand (the control area of the visual indicator), the visual indicator moves horizontally to the right in the 3D environment.
  • when the mapping object corresponding to the right-hand fingers slides upward in the palm area of the mapping object corresponding to the user's left hand (the control area of the visual indicator), the visual indicator moves vertically upward in the 3D environment.
  • when the mapping object corresponding to the right-hand fingers slides downward in the palm area of the mapping object corresponding to the user's left hand (the control area of the visual indicator), the visual indicator moves vertically downward in the 3D environment.
  • the user can also move the left hand to change the distance between the left hand and the wearable device to control the movement of the visual indicator in the depth direction (forward or backward) in the 3D environment.
  • when a click operation of one finger of the user on another body part is recognized through the image sensor, mapping generates a click operation of the mapping object corresponding to the finger on the control area; according to the generated click operation of the mapping object corresponding to the finger on the control area, the object corresponding to the current position of the visual indicator is determined as the selected target object.
  • when a click operation of multiple fingers of the user on another body part is recognized through the image sensor, mapping generates click operations of the mapping objects corresponding to the multiple fingers on the control area; according to the generated click operations on the control area, preset menu items are displayed in the 3D environment.
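  • The slide and click behaviour described in the preceding paragraphs can be summarized in a small dispatcher; the event format, step size and returned action names are assumptions for illustration only.

```python
def apply_control_area_event(cursor, event, step=0.05):
    # cursor: dict with 'x', 'y', 'z' position of the visual indicator in the 3D environment.
    # event: ("slide", "left" | "right" | "up" | "down") or ("tap", number_of_fingers).
    kind, value = event
    if kind == "slide":
        cursor["x"] += {"left": -step, "right": step}.get(value, 0.0)   # horizontal movement
        cursor["y"] += {"up": step, "down": -step}.get(value, 0.0)      # vertical movement
        return None
    if kind == "tap" and value == 1:
        return "select_object_at_cursor"   # single-finger click: select the target object
    if kind == "tap" and value > 1:
        return "show_preset_menu"          # multi-finger click: display preset menu items
    return None

cursor = {"x": 0.0, "y": 0.0, "z": 1.0}
apply_control_area_event(cursor, ("slide", "left"))
print(cursor["x"])                                   # -0.05: moved horizontally to the left
print(apply_control_area_event(cursor, ("tap", 1)))  # select_object_at_cursor
```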
  • when the current posture of at least one body part is a second preset posture, a mapping object corresponding to the at least one body part is displayed in the 3D environment; according to the current posture of the at least one body part, a virtual input keyboard is identified on the displayed mapping object; according to the user's operation on the at least one body part, mapping generates the user's operation on the virtual input keyboard; and, according to the generated operation on the virtual input keyboard, a corresponding control instruction is generated and executed.
  • the second preset gesture may be a gesture in which the fingers of the left hand overlap the fingers of the right hand as shown in FIG. 9 .
  • a target area for identifying the virtual input keyboard is determined on the displayed mapping object corresponding to the at least one body part according to the recognized current posture of the at least one body part; the virtual input keyboard corresponding to the current posture of the at least one body part is identified in the target area.
  • different postures of at least one body part correspond to different virtual input keyboards.
  • the target area may include some or all key points of the mapping object corresponding to the user's hand, and the key points may include finger tips and/or knuckles of the mapping object corresponding to the finger.
  • the mapping relationship between each virtual input key in the corresponding virtual input keyboard and each key point of the at least one body part is obtained; according to the mapping relationship, each virtual input key in the virtual input keyboard is displayed on the corresponding key point to form the corresponding virtual input keyboard.
  • the wearable device stores the mapping relationship between each virtual input key and each key point for the different virtual input keyboards corresponding to different postures of the body parts; the mapping relationship between virtual input keys and key points may be established for Chinese, Korean, English, special-character, numeric and other well-known character sets.
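  • Such a stored mapping can be as simple as a per-posture lookup table from key points to key labels; a recognized tap on a knuckle then resolves to the key displayed on it. The layouts below are illustrative assumptions, not the exact layouts of the figures.

```python
# Hypothetical layouts: one virtual input keyboard per recognized posture.
KEYBOARD_LAYOUTS = {
    "nine_grid": {
        "index_tip": "@/.", "index_knuckle_1": "ABC", "index_knuckle_2": "DEF",
        "middle_tip": "GHI", "middle_knuckle_1": "JKL", "middle_knuckle_2": "MNO",
        "ring_tip": "PQRS", "ring_knuckle_1": "TUV", "ring_knuckle_2": "WXYZ",
    },
    "numeric": {
        "index_tip": "3", "index_knuckle_1": "2", "index_knuckle_2": "1",
        "middle_tip": "6", "middle_knuckle_1": "5", "middle_knuckle_2": "4",
        "ring_tip": "9", "ring_knuckle_1": "8", "ring_knuckle_2": "7",
    },
}

def key_for_tap(posture, tapped_keypoint):
    # Return the virtual input key displayed on the tapped key point, if any.
    return KEYBOARD_LAYOUTS.get(posture, {}).get(tapped_keypoint)

print(key_for_tap("nine_grid", "middle_knuckle_1"))  # JKL
print(key_for_tap("numeric", "ring_knuckle_2"))      # 7
```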
  • virtual input keys are displayed on nine knuckles of the mapping object corresponding to the user's left hand, forming a nine-grid virtual input method; by operating the finger knuckles, the user maps and generates operations on the virtual input keys to input information or to switch virtual input keyboards.
  • keys are displayed at nine key points of the mapping object corresponding to the user's left hand: character keys such as "@/.", "ABC", "DEF", "GHI", "JKL" and "MNO" are displayed on the tips of the index, middle and ring fingers and on the two knuckles near each finger tip.
  • virtual input keys are displayed on 15 key points of the mapping object corresponding to the user's left hand.
  • the virtual input keys displayed on the knuckles and tip of the index finger include "@/.", "ABC", "DEF" and a delete icon
  • the virtual input keys displayed on the knuckles and tip of the middle finger include "GHI", "JKL", "MNO" and a line-feed icon
  • the virtual input keys displayed on the knuckles and tip of the ring finger include "PQRS", "YUVW", "XYZ" and "0"
  • the virtual input keys displayed on the knuckles and tip of the little finger include a key "123" for switching to the numeric keypad, a space key, and a key "Chinese/English" for switching between Chinese and English input.
  • when a click operation of the user's finger on the knuckle of the left little finger corresponding to the key "123" is recognized, mapping generates the user's click operation on the key "123"; in response to the user's click operation on the key "123", the displayed virtual input keyboard is switched to a numeric keypad.
  • virtual input keys are displayed on 10 key points of the mapping object corresponding to the user's left hand.
  • "3", "2" and "1" are displayed on the tip of the index finger and the two knuckles near the finger tip, respectively
  • "6", "5" and "4" are displayed on the tip of the middle finger and the two knuckles near the finger tip, respectively
  • "0", "9", "8" and "7" are displayed on the tip and the knuckles of the ring finger, respectively.
  • when the current posture of at least one body part is a third preset posture, a mapping object corresponding to the at least one body part is displayed in the 3D environment, and a virtual input keyboard and the control area of the visual indicator are simultaneously identified on the mapping object.
  • the first preset posture, the second preset posture and the third preset posture are different, and the third preset posture can be set as needed.
  • the third preset gesture is the gesture shown in Figure 14.
  • the palm area of the mapping object corresponding to the user's left hand is marked with a control area 31 of visual indication marks, and the fingers of the mapping object corresponding to the left hand are marked with a virtual input keyboard.
  • the wearable device displays a 3D environment and recognizes the movement and/or operation of at least one body part of the user wearing the wearable device through an image sensor provided on the wearable device; according to the recognized movement and/or operation of the at least one body part, a visual indicator is mapped to move in the depth direction in the displayed 3D environment, allowing the user to interact with the wearable device through the movement and/or operation of body parts. This brings interactive feedback to users, greatly improving the convenience and interactive immersion of human-computer interaction on wearable devices.
  • FIG. 16 is a schematic flowchart of steps of another human-computer interaction method for a wearable device provided by an embodiment of the present application.
  • the human-computer interaction method of the wearable device includes steps S201 to S204.
  • Step S201 Display the 3D environment through the wearable device.
  • the wearable device can display the 3D environment in the form of virtual reality, augmented reality or mixed reality, which is not specifically limited in the embodiments of the present application.
  • the 3D environment displayed in the form of virtual reality does not include the real environment
  • the 3D environment displayed in the form of augmented reality includes the virtual environment and the real environment
  • the 3D environment displayed in the form of mixed reality includes the virtual environment and the real environment.
  • Step S202 Obtain the image captured by the image sensor provided on the wearable device, and identify the positioning point of at least one body part of the user wearing the wearable device based on the image.
  • the current posture of at least one body part of the user wearing the wearable device is identified according to the image; when the current posture of the at least one body part is a preset posture, the positioning point of the at least one body part is identified through the image captured by the image sensor.
  • the positioning points include finger joint points of the user's hand, and the preset posture can be set as needed.
  • the preset posture is the gesture shown in Figure 17, that is, the hand is half-clenched.
  • the positioning point is the knuckle 41 or 42 of the index finger.
  • visual indicators are generated within the 3D environment based on the recognized positioning points.
  • a mapping object corresponding to at least one body part is displayed in a 3D environment and an anchor point is identified on the mapping object corresponding to at least one body part; a visual indicator is generated in the 3D environment according to the identified anchor point.
  • the visual indicator is generated based on the positioning point and the user's wrist joint point.
  • a wrist joint point is identified on the mapping object corresponding to at least one body part, and the visual indicator is generated in the 3D environment based on the identified positioning point and wrist joint point. Specifically, the visual indicator is generated with the identified wrist joint point as its starting point and passes through the identified positioning point; or the identified positioning point is used as the starting point of the visual indicator and the reverse extension line of the visual indicator passes through the wrist joint point.
  • the mapping object 51 corresponding to the right hand is displayed in the 3D environment, the second knuckle from the finger tip on the index finger of the mapping object 51 is identified as the anchor point 52, the light beam 53 takes the anchor point 52 as its starting point, and the reverse extension line of the light beam 53 passes through the wrist joint point 54.
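  • Geometrically, the beam of this example is a ray whose direction is the vector from the wrist joint point to the anchor point. A minimal sketch, with plain tuples standing in for tracked 3D keypoints:

```python
import math

def beam_from_hand(wrist, anchor):
    # wrist, anchor: (x, y, z) positions of the wrist joint point and the knuckle anchor point.
    # Returns (origin, unit_direction) of the beam: it starts at the anchor point and its
    # reverse extension passes through the wrist joint point.
    direction = [a - w for a, w in zip(anchor, wrist)]
    norm = math.sqrt(sum(d * d for d in direction))
    if norm == 0:
        raise ValueError("wrist and anchor coincide")
    return anchor, [d / norm for d in direction]

def point_on_beam(origin, direction, t):
    # Point at distance t along the beam, e.g. for hit-testing objects in the 3D environment.
    return [o + t * d for o, d in zip(origin, direction)]

origin, direction = beam_from_hand(wrist=(0.0, 0.0, 0.0), anchor=(0.0, 0.1, 0.2))
print([round(d, 3) for d in direction])   # [0.0, 0.447, 0.894]
```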
  • Step S203 Based on the movement of the anchor point, map and generate the movement of the visual indicator in the 3D environment.
  • the movement of the visual indicator in the 3D environment is mapped and generated based on the movement of the anchor point.
  • the visual indicator includes a cursor or a light beam, and the visual indicator is used to select a target object or menu option in the 3D environment.
  • the movement direction and/or movement distance of the anchor point are obtained; and based on the movement direction and/or movement distance of the anchor point, the movement of the visual indicator within the 3D environment is controlled.
  • the movement direction of the visual indicator mark in the 3D environment is the same as the movement direction of the anchor point.
  • when the anchor point moves forward, the visual indicator moves forward in the 3D environment; when the anchor point moves backward, the visual indicator moves backward in the 3D environment; when the anchor point moves to the left, the visual indicator moves to the left in the 3D environment; when the anchor point moves to the right, the visual indicator moves to the right in the 3D environment; when the anchor point moves upward, the visual indicator moves upward in the 3D environment; and when the anchor point moves downward, the visual indicator moves downward in the 3D environment.
  • when the wearable device is communicatively connected with the movable platform, the movement or posture of the movable platform is controlled according to the movement of the anchor point.
  • when the wearable device is communicatively connected with the movable platform, the visual indicator includes a first direction mark, a second direction mark and a third direction mark.
  • the first direction mark is used to represent the positive direction of the longitudinal axis of the movable platform, that is, the moving direction of the movable platform
  • the second direction mark is used to indicate the positive direction of the lateral axis of the movable platform
  • the third direction mark is used to indicate the positive direction of the vertical axis of the movable platform.
  • the first direction mark 62, the second direction mark 63 and the third direction mark 64 all pass through the anchor point 61; the first direction mark 62 indicates the positive X-axis direction, the second direction mark 63 indicates the positive Y-axis direction, and the third direction mark 64 indicates the positive Z-axis direction.
  • when the anchor point 61 moves in the positive direction of the X-axis, the movable platform is controlled to accelerate.
  • when the anchor point 61 moves in the positive direction of the Y-axis, the movable platform translates to the right; when the anchor point 61 moves in the negative direction of the Y-axis, the movable platform is controlled to translate to the left.
  • when the anchor point 61 moves in the positive direction of the Z-axis, the movable platform is controlled to descend; when the anchor point 61 moves in the negative direction of the Z-axis, the movable platform is controlled to rise. For another example, when the anchor point 61 rotates around the X-axis, the roll angle of the movable platform changes; when the anchor point 61 rotates around the Y-axis, the pitch angle of the movable platform changes; and when the anchor point 61 rotates around the Z-axis, the yaw angle of the movable platform changes.
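  • A sketch of how anchor-point displacement along the three marked axes could be turned into commands for the movable platform, following the example above. The command fields, gain and dead zone are assumptions, not the patent's control protocol.

```python
def platform_command(displacement, dead_zone=0.01, gain=1.0):
    # displacement: (dx, dy, dz) of the anchor point along the first, second and third
    # direction marks. Returns velocity set-points for the movable platform.
    dx, dy, dz = displacement
    cmd = {"forward": 0.0, "right": 0.0, "up": 0.0}
    if abs(dx) > dead_zone:
        cmd["forward"] = gain * dx    # +X: accelerate / move forward
    if abs(dy) > dead_zone:
        cmd["right"] = gain * dy      # +Y: translate right, -Y: translate left
    if abs(dz) > dead_zone:
        cmd["up"] = -gain * dz        # +Z: descend, -Z: rise (hence the sign flip)
    return cmd

print(platform_command((0.0, -0.05, 0.02)))
# {'forward': 0.0, 'right': -0.05, 'up': -0.02}: translate left and descend
```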
  • Step S204 Generate corresponding control instructions according to the user's operation on the anchor point.
  • the generated control instructions may include object selection instructions, direction selection instructions, confirmation instructions or control instructions of the movable platform.
  • the control instructions of the movable platform are used to control the movable platform to stop moving or continue to move.
  • the user's movement of the anchor point can realize the movement of the visual indicator mark, and the user can also generate control instructions by operating the same anchor point, which greatly improves the convenience of the user's interaction with the wearable device.
  • when it is recognized that the user rotates the right hand, the displayed mapping object 51 also rotates accordingly, so that the direction pointed to by the light beam 53 changes; when a click operation of the user's thumb on the knuckle corresponding to the anchor point 52 is recognized, mapping generates a click operation of the thumb's mapping object on the anchor point 52, and according to this click operation a direction selection instruction is generated based on the direction currently pointed to by the light beam 53.
  • when it is recognized that the user rotates the right hand, the displayed mapping object 51 also rotates accordingly, so that the target object pointed to by the light beam 53 changes; when a click operation of the user's thumb on the knuckle corresponding to the anchor point 52 is recognized, mapping generates a click operation of the thumb's mapping object on the anchor point 52, and according to this click operation an object selection instruction is generated based on the target object currently pointed to by the light beam 53.
  • when the user moves the anchor point, the anchor point 72 on the displayed mapping object also moves accordingly to change the menu option at the position of the cursor.
  • when a click operation of the user's thumb on the knuckle corresponding to the anchor point 72 is recognized, mapping generates a click operation of the thumb's mapping object 71 on the anchor point 72, and according to this click operation a confirmation instruction is generated based on the menu option at the position of the cursor.
  • an object selection instruction is generated based on the user's operation on the anchor point, and the object selection instruction is used to select objects in the 3D environment. For example, as shown in Figure 21, when a click operation of the user's thumb on the knuckle corresponding to the anchor point 72 is recognized, mapping generates a click operation of the thumb's mapping object 71 on the anchor point 72; according to this click operation, an object selection instruction is generated based on the target object currently pointed to by the visual indicator, and the wearable device selects the corresponding target object in the 3D environment according to the object selection instruction.
  • a control instruction for the movable platform is generated based on the user's operation on the anchor point.
  • the control instruction is used to control the movable platform to stop moving or continue moving.
  • the movable platform is a drone, and the drone is in a hovering state.
  • when a click operation of the user's thumb on the knuckle corresponding to the anchor point 72 is recognized, mapping generates a click operation of the thumb's mapping object 71 on the anchor point 72; according to this click operation, a control instruction for controlling the drone to continue flying is generated and sent to the drone, to control the drone to change from a hovering state to a forward flying state.
  • if the drone is flying forward, when a click operation of the user's thumb on the knuckle corresponding to the anchor point 72 is recognized, mapping generates a click operation of the thumb's mapping object 71 on the anchor point 72; according to this click operation, a control instruction for controlling the drone to hover is generated and sent to the drone to control the drone to hover.
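  • The hover/continue-flying behaviour is essentially a toggle driven by recognized clicks on the anchor point. A minimal sketch; the state names and returned instruction strings are assumptions:

```python
class DroneInteraction:
    # Toggle the drone between hovering and flying each time a click on the anchor point
    # is recognized from the mapped thumb operation.

    def __init__(self):
        self.state = "hovering"

    def on_anchor_click(self):
        if self.state == "hovering":
            self.state = "flying"
            return "continue_flight"   # control instruction sent to the drone
        self.state = "hovering"
        return "hover"

ctrl = DroneInteraction()
print(ctrl.on_anchor_click())  # continue_flight: hovering -> forward flight
print(ctrl.on_anchor_click())  # hover: flying -> hovering
```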
  • the wearable device displays a 3D environment and identifies the positioning point of at least one body part of the user wearing the wearable device through the image sensor provided on the wearable device; according to the movement of the identified positioning point, movement of a visual indicator in the 3D environment is generated by mapping, and corresponding control instructions are generated according to the user's operation on the anchor point, allowing the user to interact with the wearable device through the movement of or operation on the anchor point of a body part. This brings interactive feedback to users, greatly improving the convenience and interactive immersion of human-computer interaction on wearable devices.
  • Figure 22 is a schematic structural block diagram of a wearable device provided by an embodiment of the present application.
  • the wearable device 300 includes a display device 310, an image sensor 320, a memory 330 and a processor 340, which are connected by a bus 350, such as an I2C (Inter-Integrated Circuit) bus.
  • the display device 310 is used to display the 3D environment, and the image sensor 320 is used to capture images.
  • the processor 340 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
  • the memory 330 is used to store computer programs.
  • the memory 330 can be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, a removable hard disk, etc.
  • the processor 340 is used to run the computer program stored in the memory 330, and implement the following steps when executing the computer program:
  • acquire the image captured by the image sensor, identify, according to the image, the movement and/or operation of at least one body part of the user wearing the wearable device, and, based on the movement and/or operation, map to generate a visual indicator that moves in the depth direction within the 3D environment.
  • when the processor implements mapping to generate the visual indicator to move in the depth direction in the 3D environment according to the movement and/or operation, the processor is configured to implement:
  • when the visual indicator does not change on the preset plane in the 3D environment, generating movement of the visual indicator in the depth direction in the 3D environment by mapping according to the movement and/or operation.
  • when mapping to generate the visual indicator to move in the depth direction in the 3D environment based on the movement, the processor is configured to implement:
  • determining the position change information of the at least one body part relative to the wearable device, and, based on the position change information, mapping to generate movement of the visual indicator in the depth direction in the 3D environment.
  • when the processor identifies, based on the image, the movement and/or operation of at least one body part of the user wearing the wearable device, the processor is configured to implement:
  • identifying the current posture of the at least one body part according to the image, and, when the current posture is the first preset posture, identifying the movement and/or operation of the at least one body part through the image captured by the image sensor.
  • processor is also used to implement the following steps:
  • when the current posture of the at least one body part is the first preset posture, the visual indicator is displayed in the 3D environment.
  • processor is also used to implement the following steps:
  • a mapping object corresponding to the at least one body part is displayed in the 3D environment, and a control area of the visual indicator is identified on the mapping object.
  • the mapping object includes at least one body part in the image captured by the image sensor or a virtual model corresponding to the at least one body part.
  • the control area includes at least one of the inner forearm area, the outer forearm area, the palm area, and the back-of-hand area of the mapping object.
  • processor is also used to implement the following steps:
  • a sliding operation of the user's finger on the at least one body part is identified through the image sensor, and a sliding operation of the mapping object corresponding to the finger in the control area is generated by mapping;
  • according to the sliding operation of the mapping object corresponding to the finger in the control area, the visual indicator is controlled to move in the horizontal direction or the vertical direction in the 3D environment.
  • processor is also used to implement the following steps:
  • a mapping object corresponding to the at least one body part is displayed in the 3D environment; according to the current posture of the at least one body part, a virtual input keyboard is identified on the displayed mapping object; according to the user's operation on the at least one body part, the user's operation on the virtual input keyboard is generated by mapping;
  • according to the generated operation on the virtual input keyboard, a corresponding control instruction is generated, and the control instruction is executed.
  • when the processor identifies a virtual input keyboard on the displayed mapping object corresponding to the at least one body part according to the current posture, the processor is configured to:
  • determine, according to the current posture, a target area for identifying the virtual input keyboard on the displayed mapping object corresponding to the at least one body part, and identify a virtual input keyboard corresponding to the current posture in the target area.
  • the target area includes some or all key points of the mapping object corresponding to the user's hand.
  • when identifying the virtual input keyboard corresponding to the current posture in the target area, the processor is configured to:
  • obtain the mapping relationship between each virtual input key in the virtual input keyboard corresponding to the current posture and each key point of the at least one body part, and, according to the mapping relationship, display each virtual input key on the corresponding key point to form the virtual input keyboard.
  • processor is also used to implement the following steps:
  • a virtual input keyboard and a control area of the visual indicator are simultaneously identified on the mapping object corresponding to the at least one body part.
  • the visual indicator includes a cursor
  • the wearable device includes glasses.
  • the processor 340 is configured to run a computer program stored in the memory 330, and implement the following steps when executing the computer program:
  • acquire the image captured by the image sensor, identify, according to the image, the positioning point of at least one body part of the user wearing the wearable device, map to generate movement of a visual indicator within the 3D environment according to the movement of the positioning point, and generate a corresponding control instruction according to the user's operation on the positioning point.
  • the visual indicator includes a cursor or a light beam.
  • when the processor recognizes the positioning point of at least one body part of the user wearing the wearable device based on the image, the processor is configured to:
  • identify the current posture of the at least one body part according to the image, and, when the current posture is a preset posture, identify the positioning point of the at least one body part through the image captured by the image sensor.
  • processor is also used to implement the following steps:
  • the visual indicator is generated within the 3D environment according to the identified positioning point.
  • when generating the visual indicator in the 3D environment according to the positioning point, the processor is configured to:
  • display a mapping object corresponding to the at least one body part in the 3D environment, identify the positioning point on the mapping object, and generate the visual indicator within the 3D environment according to the identified positioning point.
  • when the processor generates the visual indicator in the 3D environment according to the identified positioning point, the processor is configured to:
  • identify a wrist joint point on the mapping object corresponding to the at least one body part, and generate the visual indicator within the 3D environment according to the identified positioning point and the wrist joint point.
  • the positioning point includes a finger joint point of the user's hand.
  • when the processor implements mapping to generate movement of the visual indicator in the 3D environment based on the movement of the positioning point, the processor is configured to implement: obtaining the movement direction and/or movement distance of the positioning point, and controlling the movement of the visual indicator within the 3D environment based on the movement direction and/or movement distance of the positioning point.
  • processor is also used to implement the following steps:
  • the movement or posture of the movable platform is controlled according to the movement of the anchor point.
  • control instructions include object selection instructions, confirmation instructions or movable platform control instructions.
  • when generating corresponding control instructions based on the user's operation on the anchor point, the processor is configured to:
  • generate a control instruction for the movable platform according to the user's operation on the anchor point, the control instruction being used to control the movable platform to stop moving or continue moving.
  • when generating corresponding control instructions based on the user's operation on the anchor point, the processor is configured to:
  • generate an object selection instruction according to the user's operation on the anchor point, the object selection instruction being used to select an object in the 3D environment.
  • Embodiments of the present application also provide a storage medium.
  • the storage medium stores a computer program.
  • the computer program includes program instructions, and the processor executes the program instructions to implement the steps of the human-computer interaction method for a wearable device provided by the above embodiments.
  • the storage medium may be an internal storage unit of the wearable device described in any of the preceding embodiments, such as a hard disk or memory of the wearable device.
  • the storage medium may also be an external storage device of the wearable device, such as a plug-in hard drive, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the wearable device.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A human-computer interaction method for a wearable device, comprising: displaying a 3D environment (S101); acquiring an image captured by an image sensor provided on the wearable device, and identifying, according to the image, the movement and/or operation of at least one body part of a user wearing the wearable device (S102); and, according to the movement and/or operation, mapping to generate a visual indicator that moves in the depth direction in the 3D environment (S103). The method can improve the convenience of human-computer interaction on wearable devices.

Description

Human-computer interaction method for wearable device, wearable device, and storage medium
Technical Field
The present application relates to the field of human-computer interaction, and in particular to a human-computer interaction method for a wearable device, a wearable device, and a storage medium.
Background
Using visual recognition technology to recognize a user's gestures, so as to realize human-computer interaction between the user and a wearable device, is a key technology and core competitive strength of wearable devices. At present, existing gesture interaction allows the user to operate virtual objects, but lacks tactile feedback during the interaction, which weakens the sense of interactive immersion and results in a poor user experience.
Summary
Based on this, embodiments of the present application provide a human-computer interaction method for a wearable device, a wearable device, and a storage medium, aiming to improve the convenience and interactive immersion of human-computer interaction on wearable devices.
In a first aspect, an embodiment of the present application provides a human-computer interaction method for a wearable device, comprising:
displaying a 3D environment through the wearable device;
acquiring an image captured by an image sensor provided on the wearable device, and identifying, according to the image, the movement and/or operation of at least one body part of a user wearing the wearable device;
according to the movement and/or operation, mapping to generate a visual indicator that moves in the depth direction in the 3D environment, the visual indicator being used to select a target object in the 3D environment.
In the human-computer interaction method provided by the embodiments of the present application, the wearable device displays a 3D environment and, through an image sensor provided on the wearable device, identifies the movement and/or operation of at least one body part of the user wearing the wearable device; according to the recognized movement and/or operation of the at least one body part, a visual indicator is mapped to move in the depth direction of the displayed 3D environment, so that the user can perform human-computer interaction with the wearable device through contact between body parts and through the movement and/or operation of body parts. This brings interactive feedback to the user and greatly improves the convenience and interactive immersion of human-computer interaction on wearable devices.
In a second aspect, an embodiment of the present application further provides a human-computer interaction method for a wearable device, comprising:
displaying a 3D environment through the wearable device;
acquiring an image captured by an image sensor provided on the wearable device, and identifying, according to the image, a positioning point of at least one body part of a user wearing the wearable device;
according to the movement of the positioning point, mapping to generate movement of a visual indicator within the 3D environment;
generating a corresponding control instruction according to the user's operation on the positioning point.
In the human-computer interaction method provided by the embodiments of the present application, the wearable device displays a 3D environment and, through an image sensor provided on the wearable device, identifies a positioning point of at least one body part of the user wearing the wearable device; according to the movement of the identified positioning point, movement of a visual indicator within the 3D environment is generated by mapping, and a corresponding control instruction is generated according to the user's operation on the positioning point, so that the user can perform human-computer interaction with the wearable device through the movement of or operation on the positioning point of a body part. This brings interactive feedback to the user and greatly improves the convenience and interactive immersion of human-computer interaction on wearable devices.
In a third aspect, an embodiment of the present application further provides a wearable device, the wearable device comprising a display device, an image sensor, a memory, and a processor;
the display device is used to display a 3D environment;
the image sensor is used to capture images;
the memory is used to store a computer program;
the processor is used to execute the computer program and, when executing the computer program, implement the following steps:
acquiring an image captured by the image sensor, and identifying, according to the image, the movement and/or operation of at least one body part of a user wearing the wearable device;
according to the movement and/or operation, mapping to generate a visual indicator that moves in the depth direction in the 3D environment.
In a fourth aspect, an embodiment of the present application further provides a wearable device, the wearable device comprising a display device, an image sensor, a memory, and a processor;
the display device is used to display a 3D environment;
the image sensor is used to capture images;
the memory is used to store a computer program;
the processor is used to execute the computer program and, when executing the computer program, implement the following steps:
acquiring an image captured by the image sensor, and identifying, according to the image, a positioning point of at least one body part of a user wearing the wearable device;
according to the movement of the positioning point, mapping to generate movement of a visual indicator within the 3D environment;
generating a corresponding control instruction according to the user's operation on the positioning point.
In a fifth aspect, an embodiment of the present application further provides a storage medium storing a computer program which, when executed by a processor, causes the processor to implement the human-computer interaction method for a wearable device as described above.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present application.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person of ordinary skill in the art may obtain other drawings from them without creative effort.
Figure 1 is a schematic diagram of a scenario for implementing the human-computer interaction method for a wearable device provided by an embodiment of the present application;
Figure 2 is a schematic diagram of another scenario for implementing the human-computer interaction method for a wearable device provided by an embodiment of the present application;
Figure 3 is a schematic flowchart of the steps of a human-computer interaction method for a wearable device provided by an embodiment of the present application;
Figure 4 is a schematic diagram of a gesture provided by an embodiment of the present application;
Figure 5 is a schematic diagram of identifying a control area on the mapping object corresponding to a body part in an embodiment of the present application;
Figure 6 is another schematic diagram of identifying a control area on the mapping object corresponding to a body part in an embodiment of the present application;
Figure 7 is a schematic diagram of the movement directions of the visual indicator in the 3D environment in an embodiment of the present application;
Figure 8 is a schematic diagram of a rotation trajectory in an embodiment of the present application;
Figure 9 is a schematic diagram of another gesture provided by an embodiment of the present application;
Figure 10 is a schematic diagram of a virtual input keyboard identified on the mapping object corresponding to the hand in an embodiment of the present application;
Figure 11 is another schematic diagram of a virtual input keyboard identified on the mapping object corresponding to the hand in an embodiment of the present application;
Figure 12 is another schematic diagram of a virtual input keyboard identified on the mapping object corresponding to the hand in an embodiment of the present application;
Figure 13 is another schematic diagram of a virtual input keyboard identified on the mapping object corresponding to the hand in an embodiment of the present application;
Figure 14 is a schematic diagram of yet another gesture provided by an embodiment of the present application;
Figure 15 is a schematic diagram of identifying a virtual input keyboard and a control area on the hand in an embodiment of the present application;
Figure 16 is a schematic flowchart of the steps of another human-computer interaction method for a wearable device provided by an embodiment of the present application;
Figure 17 is a schematic diagram of yet another gesture provided by an embodiment of the present application;
Figure 18 is a schematic diagram of identifying a positioning point on the mapping object corresponding to the hand in an embodiment of the present application;
Figure 19 is a schematic diagram of a visual indicator displayed on the mapping object corresponding to the hand in an embodiment of the present application;
Figure 20 is another schematic diagram of a visual indicator displayed on the mapping object corresponding to the hand in an embodiment of the present application;
Figure 21 is a schematic diagram of a user operating the positioning point in an embodiment of the present application;
Figure 22 is a schematic structural block diagram of a wearable device provided by an embodiment of the present application.
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
附图中所示的流程图仅是示例说明,不是必须包括所有的内容和操作/步骤,也不是必须按所描述的顺序执行。例如,有的操作/步骤还可以分解、组合或部分合并,因此实际执行的顺序有可能根据实际情况改变。
下面结合附图,对本申请的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。
请参阅图1,图1是实施本申请实施例提供的可穿戴设备的人机交互方法的一场景示意图。
如图1所示,可穿戴设备100包括显示装置110和图像传感器120,其中,显示装置110用于显示3D环境,图像传感器120用于识别穿戴可穿戴设备100的用户的至少一个身体部位的移动和/或操作,或者图像传感器120用于识别穿戴可穿戴设备100的用户的至少一个身体部位的定位点,定位点为身体部位的关键点,例如,定位点为用户的手指的关节点。
示例性的,可穿戴设备100可以以虚拟现实(Virtual Reality,VR)、增强现实(Augmented Reality,AR)或混合现实(Mixed Reality,MR)的形式显示3D环境,本申请实施例对此不做具体限定。其中,以虚拟现实的形式显示的3D环境不包括真实环境,以增强现实的形式显示的3D环境包括虚拟环境和真实环境,以混合现实的形式显示的3D环境包括虚拟环境和真实环境。
在一实施例中,可穿戴设备100通过图像传感器120识别穿戴可穿戴设备的用户至少一个身体部位的移动和/或操作;根据识别到的至少一个身体部位的移动和/或操作,映射生成视觉指示标识在3D环境里的深度方向进行运动,该视觉指示标识用于选择3D环境内的目标对象。其中,目标对象可以是虚拟环境中的对象,也可以是真实环境中的对象,该视觉指示标识可以包括光标。
在一实施例中,可穿戴设备100通过图像传感器120识别穿戴可穿戴设备的用户至少一个身体部位的定位点;根据识别到的定位点的移动,映射生成视觉指示标识在3D环境内的运动,该视觉指示标识用于选择3D环境内的目标对象;根据用户对定位点的操作,将视觉指示标识所在位置的对象确定为被选择的目标对象。
请参阅图2,图2是实施本申请实施例提供的可穿戴设备的人机交互方法的另一场景示意图。如图2所示,该场景包括可穿戴设备100和可移动平台200,可穿戴设备100与可移动平台200通信连接,可穿戴设备100用于显示可移动平台200传输的图像。
其中,可移动平台200包括平台本体210、设于平台本体210上的动力系统220和控制系统(图2中未示出),该动力系统220用于为平台本体210提供移动动力。动力系统220可以包括一个或多个螺旋桨221、与一个或多个螺旋桨相对应的一个或多个电机222、一个或多个电子调速器(简称为电调)。
其中,电机222连接在电子调速器与螺旋桨221之间,电机222和螺旋桨221设置在可移动平台200的平台本体210上;电子调速器用于接收控制系统产生的驱动信号,并根据驱动信号提供驱动电流给电机222,以控制电机222的转速。电机222用于驱动螺旋桨221旋转,从而为可移动平台200的移动提供动力,该动力使得可移动平台200能够实现一个或多个自由度的运动。
在某些实施例中,可移动平台200可以围绕一个或多个旋转轴旋转。例如,上述旋转轴可以包括横滚轴、偏航轴和俯仰轴。应理解,电机222可以是直流电机,也可以交流电机。另外,电机222可以是无刷电机,也可以是有刷电机。
其中,控制系统可以包括控制器和传感系统。传感系统用于测量可移动平台的姿态信息,即可移动平台200在空间的位置信息和状态信息,例如,三维位置、三维角度、三维速度、三维加速度和三维角速度等。传感系统例如可以 包括陀螺仪、超声传感器、电子罗盘、惯性测量单元(Inertial Measurement Unit,IMU)、视觉传感器、全球导航卫星系统和气压计等传感器中的至少一种。例如,全球导航卫星系统可以是全球定位系统(Global Positioning System,GPS)。控制器用于控制可移动平台200的移动,例如,可以根据传感系统测量的姿态信息控制可移动平台200的移动。应理解,控制器可以按照预先编好的程序指令对可移动平台200进行控制。
示例性的,可穿戴设备100包括显示装置110和图像传感器120,其中,显示装置110用于显示3D环境,图像传感器120用于识别穿戴可穿戴设备100的用户的至少一个身体部位的移动和/或操作,或者图像传感器120用于识别穿戴可穿戴设备100的用户的至少一个身体部位的定位点。
在一实施例中,可穿戴设备100通过图像传感器120识别穿戴可穿戴设备的用户至少一个身体部位的定位点;根据识别到的定位点的移动,映射生成视觉指示标识在3D环境内的运动以及控制可移动平台200的运动或姿态;根据用户对定位点的操作,生成可移动平台200的控制指令,该控制指令用于控制可移动平台200停止移动或继续移动。例如,在无人机飞行的过程时,若检测到用户对定位点的操作,则控制无人机悬停,而在无人机悬停后,若检测到用户对定位点的操作,则控制无人机飞行。
其中,可穿戴设备100可以包括眼镜设备、智能手表、智能手环等,可移动平台200包括无人机和云台车,无人机包括旋翼型无人机,例如四旋翼无人机、六旋翼无人机、八旋翼无人机,也可以是固定翼无人机,还可以是旋翼型与固定翼无人机的组合,在此不作限定。
以下,将结合图1或图2中的场景对本申请的实施例提供的可穿戴设备的人机交互方法进行详细介绍。需知,图1或图2中的场景仅用于解释本申请实施例提供的可穿戴设备的人机交互方法,但并不构成对本申请实施例提供的可穿戴设备的人机交互方法应用场景的限定。
请参阅图3,图3是本申请实施例提供的一种可穿戴设备的人机交互方法的步骤示意流程图。
如图3所示,该可穿戴设备的人机交互方法包括步骤S101至步骤S103。
步骤S101、通过可穿戴设备显示3D环境。
其中,可穿戴设备可以以虚拟现实、增强现实或混合现实的形式显示3D环境,本申请实施例对此不做具体限定。其中,以虚拟现实的形式显示的3D环境不包括真实环境,以增强现实的形式显示的3D环境包括虚拟环境和真实 环境,以混合现实的形式显示的3D环境包括虚拟环境和真实环境。
步骤S102、获取可穿戴设备上设置的图像传感器捕获到的图像,并根据图像,识别穿戴可穿戴设备的用户至少一个身体部位的移动和/或操作。
示例性的,根据图像传感器捕获到的图像,识别穿戴可穿戴设备的用户至少一个身体部位的当前姿态;在当前姿态为第一预设姿态时,通过图像传感器捕获到的图像,识别至少一个身体部位的移动和/或操作。其中,至少一个身体部位可以包括手部、手臂等,第一预设姿态可以为手部的姿态。例如,第一预设姿态为图4所示的手势。通过在识别到用户的至少一个身体部位的当前姿态为设定的姿态时,才开始识别身体部位的移动和/或操作,可以避免误识别,还可以减少计算量。
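A minimal Python sketch of this pose-gated recognition, assuming a hypothetical pose label ("open_palm") and a per-frame landmark dictionary; none of these names come from the embodiment:

```python
# Sketch: only start tracking movement/operations once the current pose matches
# the first preset pose, which avoids false triggers and saves computation.

def is_first_preset_pose(landmarks):
    # Placeholder test; a real system would run a trained pose classifier.
    return landmarks.get("pose_label") == "open_palm"   # assumed label

class PoseGatedTracker:
    def __init__(self):
        self.tracking = False

    def update(self, frame_landmarks):
        if not self.tracking:
            if is_first_preset_pose(frame_landmarks):
                self.tracking = True      # gate opens; full tracking starts next frame
            return None                   # cheap path: no movement/operation output
        return frame_landmarks.get("palm_center")   # hand position fed to the cursor mapping

tracker = PoseGatedTracker()
print(tracker.update({"pose_label": "fist"}))       # None, gate still closed
print(tracker.update({"pose_label": "open_palm"}))  # None, gate just opened
print(tracker.update({"pose_label": "open_palm", "palm_center": (0.4, 0.5, 0.6)}))
```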
在一实施例中,在用户的至少一个身体部位的当前姿态为第一预设姿态时,在3D环境内显示视觉指示标识。其中,该视觉指示标识用于选择3D环境内的目标对象,视觉指示标识包括光标。
在一实施例中,在用户的至少一个身体部位的当前姿态为第一预设姿态时,在3D环境内显示至少一个身体部位对应的映射对象以及在该映射对象上标识视觉指示标识的控制区域。其中,该控制区域包括至少一个身体部位对应的映射对象的小臂内侧区域、小臂外侧区域、掌心区域、手背区域中的至少一项。通过在3D环境内显示至少一个身体部位对应的映射对象以及在该映射对象上视觉指示标识的控制区域,使得用户能够通过该控制区域来控制视觉指示标识在3D环境内的运动,便于用户选择3D环境内的目标对象。
示例性的,该映射对象包括图像传感器捕获到的图像中的至少一个身体部位或至少一个身体部位对应的虚拟模型。例如,将图像传感器捕获到的包含用户的至少一个身体部位的图像叠加显示在3D环境内,进而显示至少一个身体部位对应的映射对象。通过在显示的身体部位对应的映射对象上标识视觉指示标识的控制区域,使得用户能够通过对自己的身体部位的移动和/或操作来控制视觉指示标识在3D环境内的运动,可以给用户带来控制反馈,便于用户选择3D环境内的目标对象,也可以提高互动沉浸感。
例如,如图5所示,显示的至少一个身体部位对应的映射对象为图像传感器捕获到的图像中的左手,且左手的掌心区域标识有视觉指示标识的控制区域11。当然,也可以在显示的左手的手背区域、小臂内侧区域或小臂外侧区域标识视觉指示标识的控制区域。又例如,如图6所示,显示的至少一个身体部位对应的映射对象为右手的虚拟模型,且右手的虚拟模型的掌心区域标识有视觉 指示标识的控制区域12。通过在显示的身体部位的虚拟模型上标识视觉指示标识的控制区域,使得用户能够通过与掌心区域的触碰、身体部位的移动和/或操作来控制视觉指示标识在3D环境内的运动,可以给用户带来控制反馈,便于用户选择3D环境内的目标对象,也可以提高互动沉浸感。
在一实施例中,在3D环境内显示至少一个身体部位对应的映射对象可以包括:获取至少一个身体部位的当前姿态对应的映射对象;在3D环境内显示该映射对象。其中,不同姿态对应不同的映射对象,映射对象在3D环境内的位置可以是固定不变的,也可以根据身体部位相对于可穿戴设备的位置来确定。
步骤S103、根据移动和/或操作,映射生成视觉指示标识在3D环境里的深度方向进行运动,视觉指示标识用于选择3D环境内的目标对象。
其中,视觉指示标识在3D环境里的运动方向包括深度方向、水平方向和竖直方向。视觉指示标识在3D环境里的运动方向可以如图7所述,深度方向为Z方向,包括+Z方向和-Z方向,水平方向为X方向,包括+X方向和-X方向,竖直方向为Y方向,包括+Y方向和-Y方向。可以理解的是,+Z方向可以为视觉指示标识21在3D环境里的前方,-Z方向可以为视觉指示标识21在3D环境里的后方,+X方向可以为视觉指示标识21在3D环境里的右方,-X方向可以为视觉指示标识21在3D环境里的左方,+Y方向可以为视觉指示标识21在3D环境里的下方,-Y方向可以为视觉指示标识21在3D环境里的上方。
在一实施例中,当视觉指示标识在3D环境里的预设平面内不发生变化时,根据用户的至少一个身体部位的移动和/或操作,映射生成视觉指示标识在3D环境里的深度方向进行运动。其中,预设平面为视觉指示标识在3D环境里的水平方向和竖直方向构成的平面。如图7所示,预设平面为XOY平面,也即视觉指示标识21在XOY平面内不发生变化时,用户通过至少一个身体部位的移动和/或操作,可以控制视觉指示标识21在3D环境里的深度方向,即Z方向进行运动。
在一实施例中,根据用户的至少一个身体部位的移动,确定至少一个身体部位相对可穿戴设备的位置变化信息;根据该位置变化信息,映射生成视觉指示标识在3D环境里的深度方向进行运动。例如,用户的手部的手势如图4、图5或图6所示,在用户移动手部导致手部相对于可穿戴设备之间的距离增加时,视觉指示标识在3D环境里向前运动,而在用户移动手部导致手部相对于可穿戴设备之间的距离减少时,视觉指示标识在3D环境里向后运动。
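The distance-to-depth mapping described above can be illustrated with the following Python sketch; the gain constant is an assumption, not a value from the embodiment:

```python
# Sketch: map the change in hand-to-device distance to cursor motion along the
# depth (Z) axis, leaving the X/Y position of the indicator unchanged.

DEPTH_GAIN = 2.0   # assumed scale: cursor depth travel per unit of hand travel

def update_cursor_depth(cursor_z, prev_hand_dist, hand_dist):
    delta = hand_dist - prev_hand_dist       # > 0: hand moved away from the device
    return cursor_z + DEPTH_GAIN * delta     # +Z is "forward" in the 3D environment

z = 1.0
z = update_cursor_depth(z, prev_hand_dist=0.30, hand_dist=0.35)   # hand moved away
print(z)   # 1.1 -> the indicator moved forward in depth
```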
例如,用户的左手的手势如图5所示,右手的手势如图6所示,在用户同 时移动左手和右手,使得左手和右手相对于可穿戴设备之间的距离增加时,视觉指示标识在3D环境里向前运动,而在用户同时移动左手和右手,使得左手和右手相对于可穿戴设备之间的距离减少时,视觉指示标识在3D环境里向后运动。
在一实施例中,识别到的至少一个身体部位的操作包括用户的手指对至少一个身体部位的操作,根据识别到的至少一个身体部位的操作,映射生成视觉指示标识在3D环境里的深度方向进行运动可以包括:根据用户的手指对至少一个身体部位的操作,生成用户的手指对应的映射对象在该控制区域内的转动操作;获取该转动操作对应的转动轨迹,在该转动轨迹的形状为预设形状时,控制视觉指示标识在3D环境里的深度方向进行运动。其中,预设形状可以为圆形、椭圆形、矩形、三角形等。
例如,如图8所示,用户的手指对应的映射对象在视觉指示标识的控制区域内的转动操作对应的转动轨迹11的形状为椭圆形,且转动轨迹的转动方向为顺时针方向,则可以控制3D环境中的视觉指示标识向前运动,如果转动轨迹的转动方向为逆时针方向,则可以控制3D环境中的视觉指示标识向后运动。
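One possible way to decide the rotation direction of such a trajectory is the signed (shoelace) area of the sampled fingertip path; the sketch below is an illustration only, with the preset-shape test reduced to a simple closed-loop check and all thresholds invented for the example:

```python
# Sketch: clockwise loop in the control region -> move the indicator forward,
# counterclockwise loop -> move it backward.

def signed_area(points):
    """Shoelace formula; the sign encodes the rotation direction of a closed path."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return area / 2.0

def depth_command(trajectory, closure_tol=0.05):
    (sx, sy), (ex, ey) = trajectory[0], trajectory[-1]
    if abs(sx - ex) > closure_tol or abs(sy - ey) > closure_tol:
        return 0.0                 # not (approximately) a closed preset shape
    # In image coordinates (y axis pointing down) a clockwise loop has positive area.
    return 1.0 if signed_area(trajectory) > 0 else -1.0

clockwise_loop = [(0.5, 0.3), (0.7, 0.5), (0.5, 0.7), (0.3, 0.5), (0.5, 0.31)]
print(depth_command(clockwise_loop))   # 1.0 -> forward
```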
在一实施例中,根据识别到的至少一个身体部位的移动和操作,映射生成视觉指示标识在3D环境里的深度方向进行运动可以包括:根据用户的手指对至少一个身体部位的操作,生成用户的手指对应的映射对象在该控制区域内的转动操作;获取该转动操作对应的转动轨迹,并根据识别到的至少一个身体部位的移动,确定至少一个身体部位相对可穿戴设备的位置变化信息;在该转动轨迹的形状为预设形状时,根据位置变化信息,映射生成视觉指示标识在3D环境里的深度方向进行运动。
在一实施例中,通过图像传感器识别用户的手指对至少一个身体部位的滑动操作,映射生成手指对应的映射对象在控制区域内的滑动操作;根据手指对应的映射对象在控制区域内的滑动操作,控制视觉指示标识在3D环境里的水平方向或竖直方向进行运动。通过在用户的身体部位对应的映射对象上标识视觉指示标识的控制区域,使得用户能够通过手指对身体部位的滑动操作来控制视觉指示标识在3D环境里的水平方向或竖直方向进行运动,极大地提高了人机交互的便利性和互动沉浸感。
例如,用户的左手对应的映射对象的掌心区域内标识有视觉指示标识的控制区域,则当识别到用户的右手手指在左手的掌心区域向左滑动时,映射生成 右手手指对应的映射对象在用户的左手对应的映射对象的掌心区域(视觉指示标识的控制区域)内向左滑动,视觉指示标识在3D环境里水平向左运动。当识别到用户的右手手指在左手的掌心区域向右滑动时,映射生成右手手指对应的映射对象在用户的左手对应的映射对象的掌心区域(视觉指示标识的控制区域)内向右滑动,此时视觉指示标识在3D环境里水平向右运动。
当识别到用户的右手手指在左手的掌心区域向上滑动时,映射生成右手手指对应的映射对象在用户的左手对应的映射对象的掌心区域(视觉指示标识的控制区域)内向上滑动,视觉指示标识在3D环境里竖直向上运动。当识别到用户的右手手指在左手的掌心区域向下滑动时,映射生成右手手指对应的映射对象在用户的左手对应的映射对象的掌心区域(视觉指示标识的控制区域)内向下滑动,视觉指示标识在3D环境里竖直向下运动。用户也可以通过移动左手,改变左手相对于可穿戴设备之间的距离,从而控制视觉指示标识在3D环境里的深度方向(向前或向后)进行运动。
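A small Python sketch of the slide-to-pan mapping, assuming the fingertip position is expressed as normalized (u, v) coordinates inside the control region and using an invented gain:

```python
# Sketch: map a fingertip swipe inside the palm control region to horizontal or
# vertical motion of the indicator. Scene convention here: +X right, +Y down.

def swipe_to_indicator_delta(start_uv, end_uv, gain=1.5):
    du = end_uv[0] - start_uv[0]
    dv = end_uv[1] - start_uv[1]
    # An upward swipe gives dv < 0, which moves the indicator toward -Y (upward).
    return gain * du, gain * dv

print(swipe_to_indicator_delta((0.2, 0.6), (0.6, 0.6)))   # rightward swipe -> +X
print(swipe_to_indicator_delta((0.5, 0.7), (0.5, 0.3)))   # upward swipe -> -Y
```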
在一实施例中,在通过图像传感器识别到用户的一个手指对其他身体部位的点击操作时,映射生成一个手指对应的映射对象对该控制区域的点击操作;根据生成的一个手指对应的映射对象对该控制区域的点击操作,将视觉指示标识当前所处位置对应的对象确定为被选择的目标对象。
在一实施例中,在通过图像传感器识别到用户的多个手指对其他身体部位的点击操作时,映射生成多个手指对应的映射对象对该控制区域的点击操作;根据生成的多个手指对应的映射对象对该控制区域的点击操作,在3D环境内显示预设菜单项。
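The two tap behaviours just described can be dispatched on the number of fingers involved; the following sketch assumes the recognizer already reports that count and uses made-up callback names:

```python
# Sketch: single-finger tap on the control region selects the object under the
# indicator; multi-finger tap brings up the preset menu items.

def handle_tap(num_fingers, object_under_indicator, select_object, show_menu):
    if num_fingers == 1:
        select_object(object_under_indicator)
    elif num_fingers > 1:
        show_menu()

handle_tap(1, "virtual_button_3",
           select_object=lambda obj: print("selected:", obj),
           show_menu=lambda: print("menu shown"))
```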
在一实施例中,在识别到的至少一个身体部位的当前姿态为第二预设姿态时,在3D环境内显示至少一个身体部位对应的映射对象;根据至少一个身体部位的当前姿态,在显示的至少一个身体部位对应的映射对象上标识虚拟输入键盘;根据用户对至少一个身体部位的操作,映射生成用户对该虚拟输入键盘的操作;根据生成的所述虚拟输入键盘的操作,生成相应的控制指令,并执行控制指令。其中,第二预设姿态可以为如图9所示的左手的手指交叠在右手的手指上面的手势。通过在显示的身体部位对应的映射对象上标识虚拟输入键盘,便于用户与可穿戴设备进行人机交互,同时在交互时可以为用户提供反馈,从而提高了人机交互的便利性和互动沉浸感。
在一实施例中,根据识别到的至少一个身体部位的当前姿态,在显示的至 少一个身体部位对应的映射对象上确定用于标识虚拟输入键盘的目标区域;在目标区域内标识至少一个身体部位的当前姿态对应的虚拟输入键盘。其中,至少一个身体部位的不同姿态对应不同的虚拟输入键盘。目标区域可以包括用户的手部对应的映射对象的部分关键点或全部关键点,该关键点可以包括手指对应的映射对象的指端和/或指关节。
示例性的,获取至少一个身体部位的对应的虚拟输入键盘中的每个虚拟输入按键与各关键点之间的映射关系;根据该映射关系,将虚拟输入键盘中的每个虚拟输入按键显示在对应的关键点上,以形成对应的虚拟输入键盘。其中,可穿戴设备中存储有身体部位的不同姿态对应不同的虚拟输入键盘中的每个虚拟输入按键与各关键点之间的映射关系,可以针对中文字符集、韩语字符集、英语字符集、特殊字符集、数字字符集及其它公知字符集来建立虚拟输入按键与关键点之间的映射关系。
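A possible data layout for such a mapping is a per-pose dictionary from hand keypoints to key labels; the keypoint names below are assumptions, and the labels loosely follow the nine-key example in the next paragraphs:

```python
# Sketch: one virtual-keyboard layout per recognized pose, with each virtual key
# drawn at the hand keypoint it is mapped to and hit-tested by the tapped joint.

KEYBOARD_LAYOUTS = {
    "second_preset_pose": {                      # assumed pose identifier
        "index_tip":  "DEF", "index_pip":  "ABC", "index_mcp":  "@/.",
        "middle_tip": "MNO", "middle_pip": "JKL", "middle_mcp": "GHI",
        "ring_tip":   "XYZ", "ring_pip":   "YUVW", "ring_mcp":  "PQRS",
    },
}

def render_virtual_keyboard(pose, keypoints_3d, draw_key):
    """draw_key(label, position) is supplied by the rendering layer (assumed)."""
    for point_name, label in KEYBOARD_LAYOUTS.get(pose, {}).items():
        if point_name in keypoints_3d:
            draw_key(label, keypoints_3d[point_name])

def hit_test(tapped_point_name, pose):
    return KEYBOARD_LAYOUTS.get(pose, {}).get(tapped_point_name)

print(hit_test("middle_pip", "second_preset_pose"))   # JKL
```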
例如,如图10所示,用户的左手对应的映射对象的9个指关节显示有虚拟输入按键,从而形成九宫格的虚拟输入法,这9个指关节包括食指的指端、食指上的靠近指端的两个指关节、中指的指端、中指上的靠近指端的两个指关节、无名指的指端、无名指上的靠近指端的两个指关节,用户可以通过手指指关节的操作来映射生成用户对虚拟输入按键的操作,从而实现信息输入或者切换虚拟输入键盘。
例如,如图11所示,用户的左手对应的映射对象的9个关键点显示有虚拟输入按键,食指的指端以及靠近指端的两个指关节处分别显示有“DEF”、“ABC”和“@/.”,中指的指端以及靠近指端的两个指关节处分别显示有“MNO”、“JKL”和“GHI”,无名指的指端以及靠近指端的两个指关节处分别显示有“XYZ”、“YUVW”和“PQRS”。
例如,如图12所示,用户的左手对应的映射对象的15个关键点显示有虚拟输入按键,食指的指关节和指端显示的虚拟输入按键包括“@/.”、“ABC”、“DEF”和删除图标,中指的指关节和指端显示的虚拟输入按键包括“GHI”、“JKL”、“MNO”和换行图标,无名指的指关节和指端显示的虚拟输入按键包括“PQRS”、“YUVW”、“XYZ”和“0”,小指的指关节和指端显示的虚拟输入按键包括用于切换数字小键盘的按键“123”、空格键和用于切换中英文的按键“中/英”。
示例性的,当识别到用户的手指对左手的小指的按键“123”对应的指关节的 点击操作时,映射生成用户对按键“123”的点击操作,响应用户对按键“123”的点击操作,将显示的虚拟输入键盘切换为数字小键盘。如图13所示,用户的左手对应的映射对象的10个关键点显示有虚拟输入按键,食指的指端以及靠近指端的两个指关节处分别显示有“3”、“2”和“1”,中指的指端以及靠近指端的两个指关节处分别显示有“6”、“5”和“4”,无名指的指端以及靠近指端的两个指关节处分别显示有“0”、“9”、“8”和“7”。
在一实施例中,在识别到至少一个身体部位的姿态由第二预设姿态变化为第三预设姿态时,在3D环境内显示至少一个身体部位对应的映射对象;在至少一个身体部位对应的映射对象上同时标识虚拟输入键盘和视觉指示标识的控制区域。其中,第一预设姿态、第二预设姿态与第三预设姿态不同,第三预设姿态可以按需进行设置。例如,第三预设姿态为如图14所示的手势。例如,如图15所示,用户的左手对应的映射对象的掌心区域标识有视觉指示标识的控制区域31,左手对应的映射对象的手指上标识有虚拟输入键盘。
上述实施例提供的可穿戴设备的人机交互方法,可穿戴设备显示3D环境,并通过可穿戴设备上设置的图像传感器识别穿戴可穿戴设备的用户至少一个身体部位的移动和/或操作,根据识别到的至少一个身体部位的移动和/或操作,映射生成视觉指示标识在显示的3D环境里的深度方向进行运动,使得用户能够通过身体部位的移动和/或操作与可穿戴设备进行人机交互,可以给用户带来交互反馈,极大地提高了可穿戴设备的人机交互的便利性和互动沉浸感。
请参阅图16,图16是本申请实施例提供的另一种可穿戴设备的人机交互方法的步骤示意流程图。
如图16所示,该可穿戴设备的人机交互方法包括步骤S201至S204。
步骤S201、通过可穿戴设备显示3D环境。
其中,可穿戴设备可以以虚拟现实、增强现实或混合现实的形式显示3D环境,本申请实施例对此不做具体限定。其中,以虚拟现实的形式显示的3D环境不包括真实环境,以增强现实的形式显示的3D环境包括虚拟环境和真实环境,以混合现实的形式显示的3D环境包括虚拟环境和真实环境。
步骤S202、获取可穿戴设备上设置的图像传感器捕获到的图像,并根据图像,识别穿戴可穿戴设备的用户至少一个身体部位的定位点。
示例性的,根据图像传感器捕获到的图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的当前姿态;在至少一个身体部位的当前姿态为预设姿态时,通过图像传感器捕获到的图像,识别至少一个身体部位的定位点。通过在 识别到用户的至少一个身体部位的当前姿态为设定的姿态时,才开始识别身体部位的定位点,可以避免误识别,还可以减少计算量。
其中,定位点包括用户的手部的手指关节点,预设姿态可以按照需要进行设置。例如,预设姿态为如图17所示的手势,即手半握。如图18所示,定位点为食指的指关节41或指关节42。
在一实施例中,根据识别到的定位点,在3D环境内生成视觉指示标识。示例性的,在3D环境内显示至少一个身体部位对应的映射对象以及在至少一个身体部位对应的映射对象上标识定位点;根据标识的定位点,在3D环境内生成视觉指示标识。其中,视觉指示标识是根据定位点和用户的腕部关节点生成的。
示例性的,在至少一个身体部位对应的映射对象上标识腕部关节点;根据标识的定位点和腕部关节点,在3D环境内生成视觉指示标识。具体的,以标识的腕部关节点为视觉指示标识的起点生成视觉指示标识,且生成的视觉指示标识经过标识的定位点。或者以标识的定位点为视觉指示标识的起点,且视觉指示标识的反向延长线经过腕部关节点。例如,如图19所示,3D环境内显示右手对应的映射对象51,且映射对象51的食指上距离指端的第二个指关节被标识为定位点52,且光束53以定位点52为起点,光束53的反向延长线经过腕部关节点54。
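The beam construction described above amounts to a ray through two keypoints; the Python sketch below assumes the wrist joint and anchor point are available as 3D coordinates and uses an arbitrary beam length:

```python
# Sketch: the beam starts at the anchor point and its reverse extension passes
# through the wrist joint, i.e. its direction is wrist -> anchor.

import math

def beam_from_wrist_and_anchor(wrist, anchor, length=5.0):
    direction = [a - w for a, w in zip(anchor, wrist)]
    norm = math.sqrt(sum(c * c for c in direction)) or 1.0
    unit = [c / norm for c in direction]
    end = [a + length * u for a, u in zip(anchor, unit)]
    return tuple(anchor), tuple(end)

origin, end = beam_from_wrist_and_anchor(wrist=(0.0, 0.0, 0.0), anchor=(0.1, 0.0, 0.2))
print(origin, end)   # beam origin at the anchor point, pointing away from the wrist
```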
步骤S203、根据定位点的移动,映射生成视觉指示标识在3D环境内的运动。
在一实施例中,在可穿戴设备与可移动平台未通信连接时,根据定位点的移动,映射生成视觉指示标识在3D环境内的运动。其中,视觉指示标识包括光标或光束,视觉指示标识用于选择3D环境中的目标对象或者菜单选项。
示例性的,获取定位点的移动方向和/或移动距离;根据定位点的移动方向和/或移动距离,控制视觉指示标识在3D环境内的运动。其中,视觉指示标识在3D环境内的运动方向与定位点的移动方向相同。例如,定位点向前移动时,视觉指示标识在3D环境向前移动,定位点向后移动时,视觉指示标识在3D环境向后移动,定位点向左移动时,视觉指示标识在3D环境向左移动,定位点向右移动时,视觉指示标识在3D环境向右移动,定位点向上移动时,视觉指示标识在3D环境向上移动,定位点向下移动时,视觉指示标识在3D环境向下移动。
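In code this amounts to moving the indicator by the anchor point's own displacement, optionally scaled; the gain below is an assumption:

```python
# Sketch: the indicator moves in the same direction as the anchor point, with the
# travelled distance scaled by a gain factor.

def indicator_step(anchor_prev, anchor_now, gain=1.0):
    return tuple(gain * (n - p) for p, n in zip(anchor_prev, anchor_now))

print(indicator_step((0.10, 0.20, 0.30), (0.10, 0.20, 0.35)))   # forward by 0.05
```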
在一实施例中,在可穿戴设备与可移动平台通信连接时,根据定位点的移 动,控制可移动平台的运动或姿态。其中,在可穿戴设备与可移动平台通信连接时,视觉指示标识包括第一方向标识、第二方向标识和第三方向标识,第一方向标识用于表示可移动平台的横轴正方向,即可移动平台的移动方向,第二方向标识用于表示可移动平台的纵轴正方向,第三方向标识用于表示可移动平台的竖轴正方向。
如图20所示,第一方向标识62、第二方向标识63和第三方向标识64均经过定位点61,且第一方向标识62指示X轴正方向,第二方向标识63指示Y轴正方向,第三方向标识64指示Z轴正方向。例如,定位点61沿X轴正方向移动时,控制可移动平台加速移动,定位点61沿X轴负方向移动时,控制可移动平台减速移动,定位点61沿Y轴正方向移动时,控制可移动平台向右平移,定位点61沿Y轴负方向移动时,控制可移动平台向左平移,定位点61沿Z轴正方向移动时,控制可移动平台下降,定位点61沿Z轴负方向移动时,控制可移动平台上升。又例如,定位点61围绕X轴转动时,可移动平台的横滚角发生变化,定位点61围绕Y轴转动时,可移动平台的俯仰角发生变化,定位点61围绕Z轴转动时,可移动平台的偏航角发生变化。
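A rough Python sketch of this axis mapping follows; the command fields and gains are invented for illustration and do not correspond to any particular flight-control interface:

```python
# Sketch: translate anchor-point displacement along the three axis markers into
# velocity commands for the movable platform.

AXIS_GAIN = {"x": 2.0, "y": 2.0, "z": 1.0}   # assumed scaling factors

def platform_command(delta):
    dx, dy, dz = delta            # displacement of the anchor point in the marker frame
    return {
        "forward_speed":  AXIS_GAIN["x"] * dx,    # +X: speed up, -X: slow down
        "lateral_speed":  AXIS_GAIN["y"] * dy,    # +Y: translate right, -Y: translate left
        "vertical_speed": -AXIS_GAIN["z"] * dz,   # +Z marker means descend, so negate
    }

print(platform_command((0.05, 0.0, -0.02)))   # accelerate slightly and climb
```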
步骤S204、根据用户对定位点的操作,生成相应的控制指令。
其中,生成的控制指令可以包括对象选择指令、方向选择指令、确认指令或可移动平台的控制指令,可移动平台的控制指令用于控制可移动平台停止移动或继续移动。用户对定位点的移动可以实现视觉指示标识的运动,且用户也可以通过对同一定位点的操作实现控制指令的生成,这样极大地提高了用户与可穿戴设备进行交互的便利性。
例如,在视觉指示标识为光束,且光束用于选择3D环境内的射箭方向的场景下,如图19所示,当识别到用户转动右手时,显示的映射对象51也随之转动,从而使得光束53指向的方向也发生改变,而当识别到用户的大拇指对定位点52对应的指关节的点击操作时,映射生成大拇指的映射对象对定位点52的点击操作,根据大拇指的映射对象对定位点52的点击操作,基于光束53当前指向的方向生成方向选择指令。
又例如,在视觉指示标识为光束,且光束用于选择3D环境内的目标对象的场景下,如图19所示,当识别到用户转动右手时,显示的映射对象51也随之转动,从而使得光束53指向的目标对象也发生改变,而当识别到用户的大拇指对定位点52对应的指关节的点击操作时,映射生成大拇指的映射对象对定位点52的点击操作,根据大拇指的映射对象对定位点52的点击操作,基于光束 53当前指向的目标对象生成对象选择指令。
例如,在视觉指示标识为光标,且光标用于选择3D环境内的菜单选项的场景下,如图21所示,当识别到用户移动右手时,显示的映射对象上的定位点72也随之移动,从而改变光标所处位置的菜单选项,当识别到用户的大拇指对定位点72对应的指关节的点击操作时,映射生成大拇指的映射对象71对定位点72的点击操作,根据大拇指的映射对象71对定位点72的点击操作,基于光标所处位置的菜单选项生成确认指令。
在一实施例中,在可穿戴设备与可移动平台未通信连接时,根据用户对定位点的操作,生成对象选择指令,该对象选择指令用于选择3D环境中的对象。例如,如图21所示,当识别到用户的大拇指对定位点72对应的指关节的点击操作时,映射生成大拇指的映射对象71对定位点72的点击操作,根据大拇指的映射对象71对定位点72的点击操作,基于视觉指示标识当前指向的目标对象生成对象选择指令,可穿戴设备生成对象选择指令,并根据该对象选择指令在3D环境内选择对应的目标对象。
在一实施例中,在可穿戴设备与可移动平台通信连接时,根据用户对定位点的操作,生成可移动平台的控制指令,控制指令用于控制可移动平台停止移动或继续移动。例如,可移动平台为无人机,且无人机处于悬停状态,如图21所示,当识别到用户的大拇指对定位点72对应的指关节的点击操作时,映射生成大拇指的映射对象71对定位点72的点击操作,根据大拇指的映射对象71对定位点72的点击操作,生成用于控制无人机继续飞行的控制指令,将该控制指令发送给无人机,以控制无人机由悬停状态变化为向前飞行的状态,如果无人机向前飞行,当识别到用户的大拇指对定位点72对应的指关节的点击操作时,映射生成大拇指的映射对象71对定位点72的点击操作,根据大拇指的映射对象71对定位点72的点击操作,生成用于控制无人机悬停的控制指令,将该控制指令发送给无人机,以控制无人机悬停。
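The hover/continue behaviour can be summarized as a two-state switch driven by taps on the anchor point; the sketch below uses an assumed send() callback in place of the actual wearable-to-drone link:

```python
# Sketch: each tap on the anchor point flips the connected drone between
# hovering and continuing its flight.

class HoverToggle:
    def __init__(self, send):
        self.send = send          # assumed transport to the movable platform
        self.hovering = False

    def on_anchor_tap(self):
        if self.hovering:
            self.send("resume_flight")   # hovering -> continue moving
        else:
            self.send("hover")           # moving -> stop and hover
        self.hovering = not self.hovering

toggle = HoverToggle(send=print)
toggle.on_anchor_tap()   # "hover"
toggle.on_anchor_tap()   # "resume_flight"
```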
上述实施例提供的可穿戴设备的人机交互方法,可穿戴设备显示3D环境,并通过可穿戴设备上设置的图像传感器识别穿戴可穿戴设备的用户至少一个身体部位的定位点,根据识别到定位点的移动,映射生成视觉指示标识在3D环境内的运动,根据用户对定位点的操作,生成相应的控制指令,使得用户能够通过身体部位的定位点的移动或操作来与可穿戴设备进行人机交互,可以给用户带来交互反馈,极大地提高了可穿戴设备的人机交互的便利性和互动沉浸感。
请参阅图22,图22是本申请实施例提供的一种可穿戴设备的结构示意性 框图。
如图22所示,可穿戴设备300包括显示装置310、图像传感器320、存储器330和处理器340,显示装置310、图像传感器320、存储器330和处理器340通过总线350连接,该总线350比如为I2C(Inter-integrated Circuit)总线。显示装置310用于显示3D环境,图像传感器320用于捕获图像。
具体地,处理器340可以是微控制单元(Micro-controller Unit,MCU)、中央处理单元(Central Processing Unit,CPU)或数字信号处理器(Digital Signal Processor,DSP)等。
具体地,存储器330用于存储计算机程序,存储器330可以是Flash芯片、只读存储器(ROM,Read-Only Memory)、磁盘、光盘、U盘或移动硬盘等。
其中,所述处理器340用于运行存储在存储器330中的计算机程序,并在执行所述计算机程序时实现以下步骤:
获取所述图像传感器捕获到的图像,并根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的移动和/或操作;
根据所述移动和/或操作,映射生成视觉指示标识在所述3D环境里的深度方向进行运动。
可选的,所述处理器在实现根据所述移动和/或操作,映射生成视觉指示标识在所述3D环境里的深度方向进行运动时,用于实现:
当所述视觉指示标识在所述3D环境里的预设平面不发生变化时,根据所述移动和/或操作,映射生成所述视觉指示标识在所述3D环境里的深度方向进行运动。
可选的,所述处理器在实现根据所述移动,映射生成所述视觉指示标识在所述3D环境里的深度方向进行运动时,用于实现:
根据所述移动,确定所述至少一个身体部位相对所述可穿戴设备的位置变化信息;
根据所述位置变化信息,映射生成所述视觉指示标识在所述3D环境里的深度方向进行运动。
可选的,所述处理器在实现根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的移动和/或操作,包括:
根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的当前姿态;
在所述当前姿态为第一预设姿态时,通过所述图像传感器捕获到的图像, 识别所述至少一个身体部位的移动和/或操作。
可选的,所述处理器还用于实现以下步骤:
在所述当前姿态为第一预设姿态时,在所述3D环境内显示所述视觉指示标识。
可选的,所述处理器还用于实现以下步骤:
在所述当前姿态为第一预设姿态时,在所述3D环境内显示所述至少一个身体部位对应的映射对象以及在所述映射对象上标识所述视觉指示标识的控制区域。
可选的,所述映射对象包括所述图像传感器捕获到的图像中的至少一个身体部位或所述至少一个身体部位对应的虚拟模型。
可选的,所述控制区域包括所述映射对象的小臂内侧区域、小臂外侧区域、掌心区域、手背区域中的至少一项。
可选的,所述处理器还用于实现以下步骤:
通过所述图像传感器识别所述用户的手指对所述至少一个身体部位的滑动操作,映射生成所述手指对应的映射对象在所述控制区域内的滑动操作;
根据所述手指对应的映射对象在所述控制区域内的滑动操作,控制所述视觉指示标识在所述3D环境里的水平方向或竖直方向进行运动。
可选的,所述处理器还用于实现以下步骤:
在所述当前姿态为第二预设姿态时,在所述3D环境内显示所述至少一个身体部位对应的映射对象;
根据所述当前姿态,在显示的所述至少一个身体部位对应的映射对象上标识虚拟输入键盘;
根据所述用户对所述至少一个身体部位的操作,映射生成用户对所述虚拟输入键盘的操作;
根据生成的用户对所述虚拟输入键盘的操作,生成相应的控制指令,并执行所述控制指令。
可选的,所述处理器在实现根据所述当前姿态,在显示的所述至少一个身体部位对应的映射对象上标识虚拟输入键盘时,用于实现:
根据所述当前姿态,在显示的所述至少一个身体部位对应的映射对象上确定用于标识所述虚拟输入键盘的目标区域;
在所述目标区域内标识所述当前姿态对应的虚拟输入键盘。
可选的,所述目标区域包括用户的手部对应的映射对象的部分关键点或全 部关键点。
可选的,所述处理器在实现在所述目标区域内标识所述当前姿态对应的虚拟输入键盘时,用于实现:
获取所述当前姿态对应的虚拟输入键盘中的每个虚拟输入按键与各所述关键点之间的映射关系;
根据所述映射关系,将所述虚拟输入键盘中的每个虚拟输入按键显示在对应的关键点上,以形成所述虚拟输入键盘。
可选的,所述处理器还用于实现以下步骤:
在识别到所述至少一个身体部位的姿态由第二预设姿态变化为第三预设姿态时,在所述3D环境内显示所述至少一个身体部位对应的映射对象;
在所述至少一个身体部位对应的映射对象上同时标识虚拟输入键盘和所述视觉指示标识的控制区域。
可选的,所述视觉指示标识包括光标,所述可穿戴设备包括眼镜设备。
在一实施例中,所述处理器340用于运行存储在存储器330中的计算机程序,并在执行所述计算机程序时实现以下步骤:
获取所述图像传感器捕获到的图像,并根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的定位点;
根据所述定位点的移动,映射生成视觉指示标识在所述3D环境内的运动;
根据所述用户对所述定位点的操作,生成相应的控制指令。
可选的,所述视觉指示标识包括光标或光束。
可选的,所述处理器在实现根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的定位点时,用于实现:
根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的当前姿态;
在所述当前姿态为预设姿态时,通过所述图像传感器捕获到的图像,识别所述至少一个身体部位的定位点。
可选的,所述处理器还用于实现以下步骤:
根据所述定位点,在所述3D环境内生成所述视觉指示标识。
可选的,所述处理器在实现根据所述定位点,在所述3D环境内生成所述视觉指示标识时,用于实现:
在所述3D环境内显示所述至少一个身体部位对应的映射对象以及在所述至少一个身体部位对应的映射对象上标识所述定位点;
根据标识的所述定位点,在所述3D环境内生成所述视觉指示标识。
可选的,所述处理器在实现根据标识的所述定位点,在所述3D环境内生成所述视觉指示标识时,用于实现:
在所述至少一个身体部位对应的映射对象上标识腕部关节点;
根据标识的所述定位点和所述腕部关节点,在所述3D环境内生成所述视觉指示标识。
可选的,所述定位点包括所述用户的手部的手指关节点。
可选的,所述处理器在实现根据所述定位点的移动,映射生成视觉指示标识在所述3D环境内的运动时,用于实现:
获取所述定位点的移动方向和/或移动距离;
根据所述移动方向和/或移动距离,控制所述视觉指示标识在所述3D环境内的运动。
可选的,所述处理器还用于实现以下步骤:
在所述可穿戴设备与可移动平台通信连接时,根据所述定位点的移动,控制所述可移动平台的运动或姿态。
可选的,所述控制指令包括对象选择指令、确认指令或可移动平台的控制指令。
可选的,所述处理器在实现根据所述用户对所述定位点的操作,生成相应的控制指令时,用于实现:
在所述可穿戴设备与可移动平台通信连接时,根据所述用户对所述定位点的操作,生成所述可移动平台的控制指令,所述控制指令用于控制所述可移动平台停止移动或继续移动。
可选的,所述处理器在实现根据所述用户对所述定位点的操作,生成相应的控制指令时,用于实现:
在所述可穿戴设备与可移动平台未通信连接时,根据所述用户对所述定位点的操作,生成对象选择指令,所述对象选择指令用于选择所述3D环境中的对象。
需要说明的是,所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,上述描述的可穿戴设备的具体工作过程,可以参考前述可穿戴设备的人机交互方法实施例中的对应过程,在此不再赘述。
本申请实施例还提供一种存储介质,所述存储介质存储有计算机程序,所述计算机程序中包括程序指令,所述处理器执行所述程序指令,实现上述实施 例提供的可穿戴设备的人机交互方法的步骤。
其中,所述存储介质可以是前述任一实施例所述的可穿戴设备的内部存储单元,例如所述可穿戴设备的硬盘或内存。所述存储介质也可以是所述可穿戴设备的外部存储设备,例如所述可穿戴设备上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。
应当理解,在此本申请说明书中所使用的术语仅仅是出于描述特定实施例的目的而并不意在限制本申请。如在本申请说明书和所附权利要求书中所使用的那样,除非上下文清楚地指明其它情况,否则单数形式的“一”、“一个”及“该”意在包括复数形式。
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到各种等效的修改或替换,这些修改或替换都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。

Claims (55)

  1. 一种可穿戴设备的人机交互方法,其特征在于,包括:
    通过所述可穿戴设备显示3D环境;
    获取所述可穿戴设备上设置的图像传感器捕获到的图像,并根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的移动和/或操作;
    根据所述移动和/或操作,映射生成视觉指示标识在所述3D环境里的深度方向进行运动,所述视觉指示标识用于选择所述3D环境内的目标对象。
  2. 根据权利要求1所述的人机交互方法,其特征在于,所述根据所述移动和/或操作,映射生成视觉指示标识在所述3D环境里的深度方向进行运动,包括:
    当所述视觉指示标识在所述3D环境里的预设平面内不发生变化时,根据所述移动和/或操作,映射生成所述视觉指示标识在所述3D环境里的深度方向进行运动。
  3. 根据权利要求1或2所述的人机交互方法,其特征在于,所述根据所述移动,映射生成所述视觉指示标识在所述3D环境里的深度方向进行运动,包括:
    根据所述移动,确定所述至少一个身体部位相对所述可穿戴设备的位置变化信息;
    根据所述位置变化信息,映射生成所述视觉指示标识在所述3D环境里的深度方向进行运动。
  4. 根据权利要求1所述的人机交互方法,其特征在于,所述根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的移动和/或操作,包括:
    根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的当前姿态;
    在所述当前姿态为第一预设姿态时,通过所述图像传感器捕获到的图像,识别所述至少一个身体部位的移动和/或操作。
  5. 根据权利要求4所述的人机交互方法,其特征在于,所述方法还包括:
    在所述当前姿态为第一预设姿态时,在所述3D环境内显示所述视觉指示标识。
  6. 根据权利要求4或5所述的人机交互方法,其特征在于,所述方法还包 括:
    在所述当前姿态为第一预设姿态时,在所述3D环境内显示所述至少一个身体部位对应的映射对象以及在所述映射对象上标识所述视觉指示标识的控制区域。
  7. 根据权利要求6所述的人机交互方法,其特征在于,所述映射对象包括所述图像传感器捕获到的图像中的至少一个身体部位或所述至少一个身体部位对应的虚拟模型。
  8. 根据权利要求6所述的人机交互方法,其特征在于,所述控制区域包括所述映射对象的小臂内侧区域、小臂外侧区域、掌心区域、手背区域中的至少一项。
  9. 根据权利要求6所述的人机交互方法,其特征在于,所述方法还包括:
    通过所述图像传感器识别所述用户的手指对所述至少一个身体部位的滑动操作,映射生成所述手指对应的映射对象在所述控制区域内的滑动操作;
    根据所述手指对应的映射对象在所述控制区域内的滑动操作,控制所述视觉指示标识在所述3D环境里的水平方向或竖直方向进行运动。
  10. 根据权利要求4所述的人机交互方法,其特征在于,所述方法还包括:
    在所述当前姿态为第二预设姿态时,在所述3D环境内显示所述至少一个身体部位对应的映射对象;
    根据所述当前姿态,在显示的所述至少一个身体部位对应的映射对象上标识虚拟输入键盘;
    根据所述用户对所述至少一个身体部位的操作,映射生成用户对所述虚拟输入键盘的操作;
    根据生成的所述虚拟输入键盘的操作,生成相应的控制指令,并执行所述控制指令。
  11. 根据权利要求10所述的人机交互方法,其特征在于,所述根据所述当前姿态,在显示的所述至少一个身体部位对应的映射对象上标识虚拟输入键盘,包括:
    根据所述当前姿态,在显示的所述至少一个身体部位对应的映射对象上确定用于标识所述虚拟输入键盘的目标区域;
    在所述目标区域内标识所述当前姿态对应的虚拟输入键盘。
  12. 根据权利要求11所述的人机交互方法,其特征在于,所述目标区域包 括用户的手部对应的映射对象的部分关键点或全部关键点。
  13. 根据权利要求12所述的人机交互方法,其特征在于,所述在所述目标区域内标识所述当前姿态对应的虚拟输入键盘,包括:
    获取所述当前姿态对应的虚拟输入键盘中的每个虚拟输入按键与各所述关键点之间的映射关系;
    根据所述映射关系,将所述虚拟输入键盘中的每个虚拟输入按键显示在对应的关键点上,以形成所述虚拟输入键盘。
  14. 根据权利要求10所述的人机交互方法,其特征在于,所述方法还包括:
    在识别到所述至少一个身体部位的姿态由第二预设姿态变化为第三预设姿态时,在所述3D环境内显示所述至少一个身体部位对应的映射对象;
    在所述至少一个身体部位对应的映射对象上同时标识虚拟输入键盘和所述视觉指示标识的控制区域。
  15. 根据权利要求1所述的人机交互方法,其特征在于,所述视觉指示标识包括光标,所述可穿戴设备包括眼镜设备。
  16. 一种可穿戴设备的人机交互方法,其特征在于,包括:
    通过所述可穿戴设备显示3D环境;
    获取所述可穿戴设备上设置的图像传感器捕获到的图像,并根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的定位点;
    根据所述定位点的移动,映射生成视觉指示标识在所述3D环境内的运动;
    根据所述用户对所述定位点的操作,生成相应的控制指令。
  17. 根据权利要求16所述的人机交互方法,其特征在于,所述视觉指示标识包括光标或光束。
  18. 根据权利要求16所述的人机交互方法,其特征在于,所述根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的定位点,包括:
    根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的当前姿态;
    在所述当前姿态为预设姿态时,通过所述图像传感器捕获到的图像,识别所述至少一个身体部位的定位点。
  19. 根据权利要求16所述的人机交互方法,其特征在于,所述方法还包括:
    根据所述定位点,在所述3D环境内生成所述视觉指示标识。
  20. 根据权利要求19所述的人机交互方法,其特征在于,所述根据所述定位点,在所述3D环境内生成所述视觉指示标识,包括:
    在所述3D环境内显示所述至少一个身体部位对应的映射对象以及在所述至少一个身体部位对应的映射对象上标识所述定位点;
    根据标识的所述定位点,在所述3D环境内生成所述视觉指示标识。
  21. 根据权利要求20所述的人机交互方法,其特征在于,所述根据标识的所述定位点,在所述3D环境内生成所述视觉指示标识,包括:
    在所述至少一个身体部位对应的映射对象上标识腕部关节点;
    根据标识的所述定位点和所述腕部关节点,在所述3D环境内生成所述视觉指示标识。
  22. 根据权利要求16所述的人机交互方法,其特征在于,所述定位点包括所述用户的手部的手指关节点。
  23. 根据权利要求16所述的人机交互方法,其特征在于,所述根据所述定位点的移动,映射生成视觉指示标识在所述3D环境内的运动,包括:
    获取所述定位点的移动方向和/或移动距离;
    根据所述移动方向和/或移动距离,控制所述视觉指示标识在所述3D环境内的运动。
  24. 根据权利要求16所述的人机交互方法,其特征在于,所述方法还包括:
    在所述可穿戴设备与可移动平台通信连接时,根据所述定位点的移动,控制所述可移动平台的运动或姿态。
  25. 根据权利要求16所述的人机交互方法,其特征在于,所述控制指令包括对象选择指令、确认指令或可移动平台的控制指令。
  26. 根据权利要求16所述的人机交互方法,其特征在于,所述根据所述用户对所述定位点的操作,生成相应的控制指令,包括:
    在所述可穿戴设备与可移动平台通信连接时,根据所述用户对所述定位点的操作,生成所述可移动平台的控制指令,所述控制指令用于控制所述可移动平台停止移动或继续移动。
  27. 根据权利要求16所述的人机交互方法,其特征在于,所述根据所述用户对所述定位点的操作,生成相应的控制指令,包括:
    在所述可穿戴设备与可移动平台未通信连接时,根据所述用户对所述定位点的操作,生成对象选择指令,所述对象选择指令用于选择所述3D环境中的对象。
  28. 一种可穿戴设备,其特征在于,所述可穿戴设备包括:显示装置、图像传感器、存储器和处理器;
    所述显示装置用于显示3D环境;
    所述图像传感器用于捕获图像;
    所述存储器用于存储计算机程序;
    所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现以下步骤:
    获取所述图像传感器捕获到的图像,并根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的移动和/或操作;
    根据所述移动和/或操作,映射生成视觉指示标识在所述3D环境里的深度方向进行运动。
  29. 根据权利要求28所述的可穿戴设备,其特征在于,所述处理器在实现根据所述移动和/或操作,映射生成视觉指示标识在所述3D环境里的深度方向进行运动时,用于实现:
    当所述视觉指示标识在所述3D环境里的预设平面不发生变化时,根据所述移动和/或操作,映射生成所述视觉指示标识在所述3D环境里的深度方向进行运动。
  30. 根据权利要求28或29所述的可穿戴设备,其特征在于,所述处理器在实现根据所述移动,映射生成所述视觉指示标识在所述3D环境里的深度方向进行运动时,用于实现:
    根据所述移动,确定所述至少一个身体部位相对所述可穿戴设备的位置变化信息;
    根据所述位置变化信息,映射生成所述视觉指示标识在所述3D环境里的深度方向进行运动。
  31. 根据权利要求28所述的可穿戴设备,其特征在于,所述处理器在实现根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的移动和/或操作,包括:
    根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的当前姿态;
    在所述当前姿态为第一预设姿态时,通过所述图像传感器捕获到的图像,识别所述至少一个身体部位的移动和/或操作。
  32. 根据权利要求31所述的可穿戴设备,其特征在于,所述处理器还用于实现以下步骤:
    在所述当前姿态为第一预设姿态时,在所述3D环境内显示所述视觉指示 标识。
  33. 根据权利要求31或32所述的可穿戴设备,其特征在于,所述处理器还用于实现以下步骤:
    在所述当前姿态为第一预设姿态时,在所述3D环境内显示所述至少一个身体部位对应的映射对象以及在所述映射对象上标识所述视觉指示标识的控制区域。
  34. 根据权利要求33所述的可穿戴设备,其特征在于,所述映射对象包括所述图像传感器捕获到的图像中的至少一个身体部位或所述至少一个身体部位对应的虚拟模型。
  35. 根据权利要求33所述的可穿戴设备,其特征在于,所述控制区域包括所述映射对象的小臂内侧区域、小臂外侧区域、掌心区域、手背区域中的至少一项。
  36. 根据权利要求33所述的可穿戴设备,其特征在于,所述处理器还用于实现以下步骤:
    通过所述图像传感器识别所述用户的手指对所述至少一个身体部位的滑动操作,映射生成所述手指对应的映射对象在所述控制区域内的滑动操作;
    根据所述手指对应的映射对象在所述控制区域内的滑动操作,控制所述视觉指示标识在所述3D环境里的水平方向或竖直方向进行运动。
  37. 根据权利要求31所述的可穿戴设备,其特征在于,所述处理器还用于实现以下步骤:
    在所述当前姿态为第二预设姿态时,在所述3D环境内显示所述至少一个身体部位对应的映射对象;
    根据所述当前姿态,在显示的所述至少一个身体部位对应的映射对象上标识虚拟输入键盘;
    根据所述用户对所述至少一个身体部位的操作,映射生成用户对所述虚拟输入键盘的操作;
    根据生成的用户对所述虚拟输入键盘的操作,生成相应的控制指令,并执行所述控制指令。
  38. 根据权利要求37所述的可穿戴设备,其特征在于,所述处理器在实现根据所述当前姿态,在显示的所述至少一个身体部位对应的映射对象上标识虚拟输入键盘时,用于实现:
    根据所述当前姿态,在显示的所述至少一个身体部位对应的映射对象上确 定用于标识所述虚拟输入键盘的目标区域;
    在所述目标区域内标识所述当前姿态对应的虚拟输入键盘。
  39. 根据权利要求38所述的可穿戴设备,其特征在于,所述目标区域包括用户的手部对应的映射对象的部分关键点或全部关键点。
  40. 根据权利要求39所述的可穿戴设备,其特征在于,所述处理器在实现在所述目标区域内标识所述当前姿态对应的虚拟输入键盘时,用于实现:
    获取所述当前姿态对应的虚拟输入键盘中的每个虚拟输入按键与各所述关键点之间的映射关系;
    根据所述映射关系,将所述虚拟输入键盘中的每个虚拟输入按键显示在对应的关键点上,以形成所述虚拟输入键盘。
  41. 根据权利要求37所述的可穿戴设备,其特征在于,所述处理器还用于实现以下步骤:
    在识别到所述至少一个身体部位的姿态由第二预设姿态变化为第三预设姿态时,在所述3D环境内显示所述至少一个身体部位对应的映射对象;
    在所述至少一个身体部位对应的映射对象上同时标识虚拟输入键盘和所述视觉指示标识的控制区域。
  42. 根据权利要求28所述的可穿戴设备,其特征在于,所述视觉指示标识包括光标,所述可穿戴设备包括眼镜设备。
  43. 一种可穿戴设备,其特征在于,所述可穿戴设备包括:显示装置、图像传感器、存储器和处理器;
    所述显示装置用于显示3D环境;
    所述图像传感器用于捕获图像;
    所述存储器用于存储计算机程序;
    所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现以下步骤:
    获取所述图像传感器捕获到的图像,并根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的定位点;
    根据所述定位点的移动,映射生成视觉指示标识在所述3D环境内的运动;
    根据所述用户对所述定位点的操作,生成相应的控制指令。
  44. 根据权利要求43所述的可穿戴设备,其特征在于,所述视觉指示标识包括光标或光束。
  45. 根据权利要求43所述的可穿戴设备,其特征在于,所述处理器在实现 根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的定位点时,用于实现:
    根据所述图像,识别穿戴所述可穿戴设备的用户至少一个身体部位的当前姿态;
    在所述当前姿态为预设姿态时,通过所述图像传感器捕获到的图像,识别所述至少一个身体部位的定位点。
  46. 根据权利要求43所述的可穿戴设备,其特征在于,所述处理器还用于实现以下步骤:
    根据所述定位点,在所述3D环境内生成所述视觉指示标识。
  47. 根据权利要求46所述的可穿戴设备,其特征在于,所述处理器在实现根据所述定位点,在所述3D环境内生成所述视觉指示标识时,用于实现:
    在所述3D环境内显示所述至少一个身体部位对应的映射对象以及在所述至少一个身体部位对应的映射对象上标识所述定位点;
    根据标识的所述定位点,在所述3D环境内生成所述视觉指示标识。
  48. 根据权利要求47所述的可穿戴设备,其特征在于,所述处理器在实现根据标识的所述定位点,在所述3D环境内生成所述视觉指示标识时,用于实现:
    在所述至少一个身体部位对应的映射对象上标识腕部关节点;
    根据标识的所述定位点和所述腕部关节点,在所述3D环境内生成所述视觉指示标识。
  49. 根据权利要求43所述的可穿戴设备,其特征在于,所述定位点包括所述用户的手部的手指关节点。
  50. 根据权利要求43所述的可穿戴设备,其特征在于,所述处理器在实现根据所述定位点的移动,映射生成视觉指示标识在所述3D环境内的运动时,用于实现:
    获取所述定位点的移动方向和/或移动距离;
    根据所述移动方向和/或移动距离,控制所述视觉指示标识在所述3D环境内的运动。
  51. 根据权利要求43所述的可穿戴设备,其特征在于,所述处理器还用于实现以下步骤:
    在所述可穿戴设备与可移动平台通信连接时,根据所述定位点的移动,控制所述可移动平台的运动或姿态。
  52. 根据权利要求43所述的可穿戴设备,其特征在于,所述控制指令包括对象选择指令、确认指令或可移动平台的控制指令。
  53. 根据权利要求43所述的可穿戴设备,其特征在于,所述处理器在实现根据所述用户对所述定位点的操作,生成相应的控制指令时,用于实现:
    在所述可穿戴设备与可移动平台通信连接时,根据所述用户对所述定位点的操作,生成所述可移动平台的控制指令,所述控制指令用于控制所述可移动平台停止移动或继续移动。
  54. 根据权利要求43所述的可穿戴设备,其特征在于,所述处理器在实现根据所述用户对所述定位点的操作,生成相应的控制指令时,用于实现:
    在所述可穿戴设备与可移动平台未通信连接时,根据所述用户对所述定位点的操作,生成对象选择指令,所述对象选择指令用于选择所述3D环境中的对象。
  55. 一种存储介质,其特征在于,所述存储介质存储有计算机程序,所述计算机程序被处理器执行时使所述处理器实现如权利要求1-27中任一项所述的可穿戴设备的人机交互方法。
PCT/CN2022/082674 2022-03-24 2022-03-24 可穿戴设备的人机交互方法、可穿戴设备及存储介质 WO2023178586A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/082674 WO2023178586A1 (zh) 2022-03-24 2022-03-24 可穿戴设备的人机交互方法、可穿戴设备及存储介质
CN202280048813.0A CN117677919A (zh) 2022-03-24 2022-03-24 可穿戴设备的人机交互方法、可穿戴设备及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/082674 WO2023178586A1 (zh) 2022-03-24 2022-03-24 可穿戴设备的人机交互方法、可穿戴设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023178586A1 true WO2023178586A1 (zh) 2023-09-28

Family

ID=88099460

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/082674 WO2023178586A1 (zh) 2022-03-24 2022-03-24 可穿戴设备的人机交互方法、可穿戴设备及存储介质

Country Status (2)

Country Link
CN (1) CN117677919A (zh)
WO (1) WO2023178586A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102789313A (zh) * 2012-03-19 2012-11-21 乾行讯科(北京)科技有限公司 一种用户交互系统和方法
US20130050069A1 (en) * 2011-08-23 2013-02-28 Sony Corporation, A Japanese Corporation Method and system for use in providing three dimensional user interface
US20130147793A1 (en) * 2011-12-09 2013-06-13 Seongyeom JEON Mobile terminal and controlling method thereof
CN104331154A (zh) * 2014-08-21 2015-02-04 周谆 实现非接触式鼠标控制的人机交互方法和系统
US20180329209A1 (en) * 2016-11-24 2018-11-15 Rohildev Nattukallingal Methods and systems of smart eyeglasses

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130050069A1 (en) * 2011-08-23 2013-02-28 Sony Corporation, A Japanese Corporation Method and system for use in providing three dimensional user interface
US20130147793A1 (en) * 2011-12-09 2013-06-13 Seongyeom JEON Mobile terminal and controlling method thereof
CN102789313A (zh) * 2012-03-19 2012-11-21 乾行讯科(北京)科技有限公司 一种用户交互系统和方法
CN104331154A (zh) * 2014-08-21 2015-02-04 周谆 实现非接触式鼠标控制的人机交互方法和系统
US20180329209A1 (en) * 2016-11-24 2018-11-15 Rohildev Nattukallingal Methods and systems of smart eyeglasses

Also Published As

Publication number Publication date
CN117677919A (zh) 2024-03-08

Similar Documents

Publication Publication Date Title
CN109891368B (zh) 活动对象在增强和/或虚拟现实环境中的切换
US10384348B2 (en) Robot apparatus, method for controlling the same, and computer program
EP3548989B1 (en) Generating virtual notation surfaces with gestures in an augmented and/or virtual reality environment
CN108780360B (zh) 虚拟现实导航
KR101546654B1 (ko) 웨어러블 증강현실 환경에서 증강현실 서비스 제공 방법 및 장치
CN116097209A (zh) 人工现实交互模式的整合
JP2021528786A (ja) 視線に基づく拡張現実環境のためのインターフェース
JP2019530064A (ja) 仮想現実におけるロケーショングローブ
JP7455277B2 (ja) モーション信号とマウス信号を使用してホスト装置を制御するための電子装置
TW201816549A (zh) 虛擬實境場景下的輸入方法和裝置
Fang et al. Head-mounted display augmented reality in manufacturing: A systematic review
WO2023178586A1 (zh) 可穿戴设备的人机交互方法、可穿戴设备及存储介质
CN109960404B (zh) 一种数据处理方法及装置
CN113467625A (zh) 虚拟现实的控制设备、头盔和交互方法
WO2022166448A1 (en) Devices, methods, systems, and media for selecting virtual objects for extended reality interaction
Jung et al. Duplication based distance-free freehand virtual object manipulation
CN115494951A (zh) 交互方法、装置和显示设备
US20200285325A1 (en) Detecting tilt of an input device to identify a plane for cursor movement
Knödel et al. Navidget for immersive virtual environments
CN112162631B (zh) 一种交互设备、数据处理方法及介质
CN117784926A (zh) 控制装置、控制方法和计算机可读存储介质
CN113220110A (zh) 显示系统及方法
JP2023168750A (ja) 情報処理装置、情報処理方法、プログラム、および記憶媒体
CN115686328A (zh) 自由空间中的无接触交互方法、装置、电子设备及存储介质
CN111766959A (zh) 虚拟现实交互方法和虚拟现实交互装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22932651

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202280048813.0

Country of ref document: CN