WO2016192438A1 - A somatosensory interaction system activation method, somatosensory interaction method and system (一种体感交互系统激活方法、体感交互方法及系统)

Publication number: WO2016192438A1
Authority: WO (WIPO PCT)
Prior art keywords: body part, human body, interaction system, somatosensory interaction, stereoscopic image
Application number: PCT/CN2016/076765
Other languages: English (en), French (fr)
Inventors: 黄源浩, 肖振中, 钟亮洪, 许宏淮, 林靖雄
Original Assignee: 深圳奥比中光科技有限公司
Application filed by 深圳奥比中光科技有限公司
Publication of WO2016192438A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer

Description

  • the invention relates to the field of human-computer interaction, and particularly relates to a somatosensory interaction system activation method, a somatosensory interaction method and a system.
  • Human-computer interaction technology refers to the technology of realizing human-machine dialogue in an efficient manner through input and output devices.
  • the existing interaction mode of human-computer interaction usually interacts with the machine system through external devices such as a mouse, a keyboard, a touch screen or a handle, and the machine system responds accordingly.
  • the technical problem to be solved by the present invention is to provide a somatosensory interaction system activation method, a somatosensory interaction method and a system, which can perform corresponding operations by sensing an action of a human body part without relying on an external input device.
  • a technical solution adopted by the present invention is to provide a somatosensory interaction method, the method comprising: collecting a three-dimensional stereoscopic image of a human body part in an activated state of the somatosensory interaction system; processing the three-dimensional stereoscopic image of the human body part and converting it into an operation instruction; and performing a corresponding operation according to the operation instruction.
  • the processing of the three-dimensional stereoscopic image of the human body part into the operation instruction comprises: extracting features of the three-dimensional stereoscopic image of the human body part to obtain feature parameters, where the feature parameters include the three-dimensional coordinates of the human body part and the spatial motion trajectory of the human body part; matching the feature parameters with pre-stored feature parameters; and, when the matching degree reaches a predetermined threshold, acquiring the instruction corresponding to the pre-stored feature parameters as the operation instruction.
  • the method further comprises: activating the somatosensory interaction system.
  • the activating the somatosensory interaction system includes: acquiring a three-dimensional stereoscopic image; processing the three-dimensional stereoscopic image to determine whether the three-dimensional stereoscopic image includes a three-dimensional stereoscopic image for activating a human body part of the somatosensory interaction system; When the three-dimensional stereoscopic image includes a stereoscopic image of a human body part for activating the somatosensory interaction system, the three-dimensional stereoscopic image of the human body part is processed and converted into an activation instruction; and the somatosensory interaction system is activated according to the activation instruction.
  • the processing of the three-dimensional stereoscopic image of the human body part into an activation instruction includes: extracting features of the three-dimensional stereoscopic image of the human body part to obtain feature parameters, where the feature parameters include the three-dimensional coordinates of the human body part and the spatial motion trajectory of the human body part; matching the feature parameters with pre-stored feature parameters for activating the somatosensory interaction system; and, when the matching degree reaches a predetermined threshold, acquiring the instruction corresponding to the pre-stored feature parameters as the activation instruction.
  • activating the somatosensory interaction system further includes: displaying an icon on the screen that moves in synchronization with the body part.
  • the method further includes: prompting the user that the somatosensory interaction system is activated.
  • the prompting the user that the somatosensory interaction system is activated comprises: displaying a predetermined area of the screen in a highlighted state to prompt the user that the somatosensory interaction system is activated.
  • another technical solution adopted by the present invention is to provide a method for activating a somatosensory interaction system, the method comprising: acquiring a three-dimensional stereoscopic image; processing the three-dimensional stereoscopic image to determine the three-dimensional stereoscopic image Whether a stereoscopic image of a human body part for activating the somatosensory interaction system is included; when the three-dimensional stereoscopic image includes a stereoscopic image of a human body part for activating the somatosensory interaction system, the three-dimensional stereoscopic image of the human body part is processed and converted into an activation instruction The somatosensory interaction system is activated in accordance with the activation command.
  • yet another technical solution adopted by the present invention is to provide a somatosensory interaction system, the system including an acquisition module, a conversion module, and a processing module, wherein: the acquisition module is configured to collect a three-dimensional stereoscopic image of the human body part in an activated state of the somatosensory interaction system; the conversion module is configured to process the three-dimensional stereoscopic image of the human body part and convert it into an operation instruction; and the processing module is configured to perform a corresponding operation according to the operation instruction.
  • the conversion module includes a feature extraction unit, a matching unit, and an acquisition unit, wherein: the feature extraction unit is configured to perform feature extraction on the three-dimensional image of the human body part to acquire a feature parameter, where the feature parameter includes the human body a three-dimensional coordinate of the part and a spatial motion trajectory of the body part; the matching unit is configured to match the feature parameter with a pre-stored feature parameter; and the acquiring unit is configured to: when the matching degree reaches a predetermined threshold, acquire and The pre-stored feature parameter corresponding instruction is used as the operation instruction.
  • the system further includes an activation module for activating the somatosensory interaction system.
  • the activation module includes an acquisition unit, a determination unit, a conversion unit, and an activation unit, wherein: the acquisition unit is configured to collect a three-dimensional stereoscopic image; the determination unit is configured to process the three-dimensional stereoscopic image and determine whether it includes a stereoscopic image of a human body part for activating the somatosensory interaction system; the conversion unit is configured to, when the three-dimensional stereoscopic image includes a stereoscopic image of a human body part for activating the somatosensory interaction system, process the three-dimensional stereoscopic image of the human body part and convert it into an activation instruction; and the activation unit is configured to activate the somatosensory interaction system according to the activation instruction.
  • the system further comprises a display module, wherein the display module is configured to display an icon on the screen that moves synchronously with the human body part.
  • the system further includes a prompting module, the prompting module is configured to prompt the user that the somatosensory interaction system is activated.
  • the invention has the beneficial effects that, in contrast with the prior art, in the activated state of the somatosensory interaction system, the three-dimensional stereoscopic image of the human body part is collected, processed, and converted into an operation instruction, and the corresponding operation is performed according to the operation instruction. In this way, without relying on an external input device, the corresponding operation can be performed by sensing the spatial action of the human body part, giving the user a better use experience.
  • FIG. 1 is a flowchart of a somatosensory interaction method according to an embodiment of the present invention
  • FIG. 2 is a flowchart of activating a somatosensory interaction system according to an embodiment of the present invention;
  • FIG. 3 is a flowchart of a method for activating a somatosensory interaction system according to an embodiment of the present invention
  • FIG. 4 is a schematic structural diagram of a somatosensory interaction system according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an activation module of a somatosensory interaction system according to an embodiment of the present invention
  • FIG. 6 is a schematic structural diagram of a conversion module of a somatosensory interaction system according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of another somatosensory interaction system according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of a somatosensory interaction method according to an embodiment of the present invention. As shown in the figure, the somatosensory interaction method in this embodiment includes:
  • S101 collecting a three-dimensional stereoscopic image of a human body part in an activated state of the somatosensory interaction system
  • the somatosensory interaction system needs to be activated before somatosensory interaction can be performed.
  • FIG. 2 is a flowchart of activating a somatosensory interaction system according to an embodiment of the present invention.
  • activating the somatosensory interaction system in this embodiment includes the following steps:
  • a three-dimensional stereoscopic image within a predetermined spatial range is acquired by a 3D sensor.
  • the acquired three-dimensional stereoscopic image includes all objects within the monitoring range of the 3D sensor lens.
  • for example, if the monitoring range of the 3D sensor lens contains a table, a chair, and a person, the acquired three-dimensional stereoscopic image includes all of these objects.
  • S202 processing the three-dimensional stereoscopic image, determining whether the three-dimensional stereoscopic image includes a three-dimensional stereoscopic image for activating a human body part of the somatosensory interaction system;
  • the three-dimensional stereoscopic image acquired by the 3D sensor is processed to determine whether the three-dimensional stereoscopic image includes a three-dimensional stereoscopic image for activating a human body part of the somatosensory interaction system. For example, if the preset human body part for activating the somatosensory interaction system is a human hand, it is recognized whether the human hand is included from the acquired three-dimensional stereoscopic image. If a three-dimensional stereoscopic image for activating a human body part of the somatosensory interaction system is included in the three-dimensional stereoscopic image, step S203 is performed.
  • S203 processing a three-dimensional stereoscopic image of a human body part, and converting into an activation instruction
  • the processing of the three-dimensional stereoscopic image of the human body part into the activation instruction specifically includes: extracting the feature parameters of the three-dimensional stereoscopic image of the human body part, the feature parameters including the three-dimensional coordinates of the human body part and the spatial motion trajectory of the human body part; matching the feature parameters with the pre-stored feature parameters for activating the somatosensory interaction system; and, when the matching degree reaches a predetermined threshold, acquiring the instruction corresponding to the pre-stored feature parameters as the activation instruction.
  • feature extraction is performed on the collected three-dimensional stereoscopic image of the human body part to obtain the feature parameters, which include a position parameter and a motion trajectory parameter.
  • the position parameter, that is, the spatial position of the human body part, is represented by three-dimensional coordinates; the motion trajectory parameter is the motion trajectory of the human body part in space.
  • for example, for a palm gripping action, the extracted parameters include the actual three-dimensional coordinates X, Y, Z of the palm, which determine the specific positional relationship between the palm and the 3D sensor, and the spatial motion trajectory of the palm, that is, the motion trajectory of the gripping action.
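  • as a non-authoritative illustration (not part of the patent disclosure), the feature parameters described above can be pictured as a small data structure holding the body part's three-dimensional coordinates and its sampled spatial motion trajectory; all names in the following Python sketch are assumptions:

```python
# Minimal sketch (illustrative only): one way to represent the extracted
# feature parameters described above, i.e. the palm's 3D coordinates relative
# to the sensor plus its spatial motion trajectory.
from dataclasses import dataclass, field
from typing import List, Tuple

Point3D = Tuple[float, float, float]  # (X, Y, Z) in the 3D sensor's coordinate frame

@dataclass
class FeatureParameters:
    position: Point3D                                          # current spatial position of the body part
    trajectory: List[Point3D] = field(default_factory=list)    # sampled motion trajectory

    def displacement(self) -> Point3D:
        """Net displacement of the body part over the recorded trajectory."""
        if len(self.trajectory) < 2:
            return (0.0, 0.0, 0.0)
        (x0, y0, z0), (x1, y1, z1) = self.trajectory[0], self.trajectory[-1]
        return (x1 - x0, y1 - y0, z1 - z0)
```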
  • the extracted feature parameters are matched with the pre-stored feature parameters for activating the somatosensory interaction system.
  • for example, suppose the somatosensory interaction system is activated through a palm motion, and the pre-stored characteristic parameters of the palm motion are A, B, and C.
  • when a three-dimensional stereoscopic image of a palm is collected, the extracted feature parameters A', B', and C' are matched with A, B, and C, and it is judged whether the matching degree reaches a predetermined threshold.
  • the predetermined threshold is a value of a preset matching degree.
  • the predetermined threshold may be set to 80%.
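  • purely as an illustrative sketch of this matching step (the patent does not specify a scoring formula), the following Python code compares extracted parameters A', B', C' with pre-stored parameters A, B, C and checks the result against a predetermined threshold such as 80%; the per-parameter scoring rule is an assumption:

```python
# Illustrative sketch only: score how closely extracted feature parameters
# (A', B', C') match pre-stored ones (A, B, C) and compare the score against
# a predetermined threshold such as the 80% mentioned above.
from typing import Sequence

def matching_degree(extracted: Sequence[float], stored: Sequence[float]) -> float:
    """Return a similarity in [0, 1]; 1.0 means the parameters coincide."""
    if len(extracted) != len(stored):
        return 0.0
    scores = []
    for e, s in zip(extracted, stored):
        denom = max(abs(e), abs(s), 1e-9)          # avoid division by zero
        scores.append(1.0 - min(abs(e - s) / denom, 1.0))
    return sum(scores) / len(scores)

PREDETERMINED_THRESHOLD = 0.80  # e.g. the 80% threshold mentioned above

def matches(extracted: Sequence[float], stored: Sequence[float]) -> bool:
    return matching_degree(extracted, stored) >= PREDETERMINED_THRESHOLD
```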
  • the system may further determine whether the three-dimensional stereoscopic image of the human body part for activating the somatosensory interaction system lasts for a predetermined time, and only when the duration reaches the predetermined time is the instruction corresponding to the pre-stored feature parameters acquired as the activation instruction. In this way, false triggering of the activation of the somatosensory interaction system can be effectively prevented.
  • for example, suppose the preset body part for activating the somatosensory interaction system is the palm of the hand.
  • a user may inadvertently make a palm motion. After the system collects and recognizes the palm motion, it further determines whether the palm motion lasts for the predetermined time; if it does not, it can be judged that this is only a misoperation, and the somatosensory interaction system will not be activated.
  • the predetermined time here can be preset according to needs, for example, set to 10 seconds, 30 seconds, and the like.
  • when the system has collected and recognized the human body part but the predetermined time has not yet been reached, a progress bar may be displayed on the screen to indicate the activation state of the somatosensory interaction system.
  • the progress bar can display the speed of the system activation, the degree of completion, the amount of remaining unfinished tasks, and the processing time that may be required in real time.
  • the progress bar can be displayed as a rectangular strip. When the progress bar is full, the activation condition of the somatosensory interaction system is satisfied and the somatosensory interaction system is activated. In this way, the user can have an intuitive understanding of the activation state of the somatosensory interaction system, and a user who made the gesture by mistake can stop it in time to avoid false triggering of the activation of the somatosensory interaction system.
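  • the duration check and progress bar can be pictured with the following hedged Python sketch; the class name, the per-frame update interface, and the 10-second default are illustrative assumptions, not details fixed by the patent:

```python
# Hedged sketch: the system only activates when the recognized activation
# gesture persists for a predetermined time; before that, a progress value in
# [0, 1] can drive an on-screen progress bar.
import time

class ActivationTimer:
    def __init__(self, required_seconds: float = 10.0):
        self.required = required_seconds
        self.started_at = None

    def update(self, gesture_recognized: bool) -> float:
        """Feed one recognition result per frame; return progress in [0, 1]."""
        now = time.monotonic()
        if not gesture_recognized:
            self.started_at = None        # gesture stopped: treat as misoperation
            return 0.0
        if self.started_at is None:
            self.started_at = now         # gesture just started being held
        return min((now - self.started_at) / self.required, 1.0)

    def is_complete(self, gesture_recognized: bool) -> bool:
        return self.update(gesture_recognized) >= 1.0
```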
  • the somatosensory interaction system is activated to enter the somatosensory interaction state.
  • correspondingly, the user may be prompted that the somatosensory interaction system has been activated.
  • it can be prompted by displaying the predetermined area of the screen in a highlighted state.
  • the predetermined area here may be a preset somatosensory sensing area corresponding to a plane area on the screen, such as an area of a certain size on the left side of the screen, or an area of a certain size on the right side of the screen. Of course, it can also be the entire screen.
  • the user may also be prompted by other means, such as by popping up a prompt that the somatosensory interaction system has been activated, or by a voice prompt, etc., which is not limited by the present invention.
  • an icon that moves in synchronization with the human body part is displayed on the screen.
  • the icon that moves synchronously with the human body part may be an icon similar to the human body part, for example, the human body part is a human hand, and the icon may be a hand shape icon. Of course, it can be other forms of icons, such as triangle icons, dot icons, and so on.
  • the icon on the screen follows the movement of the body part and moves correspondingly on the screen. For example, the human hand moves to the right in space, and the icon also moves to the right along the screen.
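  • one plausible way (an assumption, not specified by the patent) to make the on-screen icon follow the hand is a linear mapping from the sensing volume to screen pixels, as in the following sketch; the sensing-range bounds and screen resolution are placeholder values:

```python
# Illustrative sketch: map the hand's position in the sensor's field of view
# onto screen coordinates so an icon moves in step with the hand (hand moves
# right in space, icon moves right on screen).
def hand_to_screen(x: float, y: float,
                   x_range=(-0.5, 0.5), y_range=(-0.3, 0.3),
                   screen_w: int = 1920, screen_h: int = 1080) -> tuple:
    """Linearly map the hand's (x, y) in metres to pixel coordinates."""
    def scale(v, lo, hi, size):
        v = min(max(v, lo), hi)                 # clamp to the sensing range
        return int((v - lo) / (hi - lo) * (size - 1))
    px = scale(x, *x_range, screen_w)
    py = screen_h - 1 - scale(y, *y_range, screen_h)  # screen y grows downward
    return px, py
```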
  • a three-dimensional image of the human body part within a predetermined spatial range is acquired by the 3D sensor.
  • the 3D sensor is capable of acquiring a three-dimensional image of the object in a spatial position, and the acquired three-dimensional image includes the spatial position coordinates of the object and the spatial motion trajectory.
  • the spatial motion trajectory described in the embodiments of the present invention includes the posture of the human body part and the specific action of the human body part. For example, if the user makes a fist gesture in front of the 3D somatosensory sensor and slides it within the spatial range, the 3D sensor collects the three-dimensional stereoscopic image of the user's hand and extracts its features, that is, it obtains the three-dimensional coordinates of the hand relative to the 3D sensor as well as the gripping and sliding movements of the hand. The processing of other three-dimensional stereoscopic images is similar and is not described here by way of further example.
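  • as a rough sketch of this acquisition step, assuming a hypothetical `Sensor3D` wrapper and a hypothetical `detect_hand` routine (no vendor SDK or concrete detector is implied by the patent), the hand's coordinates can be accumulated frame by frame into a spatial trajectory:

```python
# Minimal sketch under stated assumptions: poll the 3D sensor for frames, pick
# out the hand, and accumulate its 3D coordinates into a motion trajectory.
from typing import List, Optional, Tuple

Point3D = Tuple[float, float, float]

class Sensor3D:                      # placeholder for the actual 3D sensor driver
    def read_frame(self):            # returns a depth/point-cloud frame, or None
        raise NotImplementedError

def detect_hand(frame) -> Optional[Point3D]:
    """Hypothetical detector: return the hand's (X, Y, Z) or None if absent."""
    raise NotImplementedError

def collect_trajectory(sensor: Sensor3D, n_frames: int = 30) -> List[Point3D]:
    trajectory: List[Point3D] = []
    for _ in range(n_frames):
        frame = sensor.read_frame()
        if frame is None:
            break
        hand = detect_hand(frame)
        if hand is not None:
            trajectory.append(hand)  # the ordered points form the spatial trajectory
    return trajectory
```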
  • the human body part mentioned in the embodiment of the present invention may be a human hand. Of course, it can also be other human body parts for operation such as a human face, a human foot, and the like.
  • the processing of the three-dimensional stereoscopic image of the human body part into the operation instruction specifically includes: extracting the feature parameters of the three-dimensional stereoscopic image of the human body part, the feature parameters including the three-dimensional coordinates of the human body part and the spatial motion trajectory of the human body part; matching the feature parameters with the pre-stored feature parameters; and, when the matching degree reaches a predetermined threshold, acquiring the instruction corresponding to the pre-stored feature parameters as the operation instruction.
  • feature extraction is performed on the collected three-dimensional stereoscopic image of the human body part to obtain the feature parameters, wherein the feature parameters include the spatial three-dimensional coordinates of the human body part and the spatial motion trajectory of the human body part.
  • through feature extraction, it is possible to recognize the specific spatial position of the body part relative to the 3D sensor and the action of the body part. For example, for a gripping action by a human hand, by collecting the stereoscopic image of the grip and performing feature extraction, the specific spatial position of the hand can be determined from the extracted parameters and the motion can be recognized as a gripping action.
  • the present invention includes a learning and training process to establish a training database. For example, in order to recognize a gripping movement of a human hand, the system collects three-dimensional stereoscopic images of various gripping actions and learns from these different gripping actions to obtain the specific feature parameters that identify this specific action. For each different action, the system performs such a learning and training process, and the specific feature parameters corresponding to the various specific actions constitute the training database. When the system acquires a three-dimensional stereoscopic image, feature extraction is performed on the stereoscopic image, and a matching specific action is found in the training database as the recognition result.
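  • the learning and lookup against a training database might be organized as in the following sketch, which reuses the `matching_degree` helper from the earlier sketch; averaging several samples into reference parameters is an illustrative simplification, not the patent's training procedure:

```python
# Hedged sketch of the training-database idea: store one set of reference
# feature parameters per specific action, then recognize a new sample by
# finding the best-matching entry whose score clears the threshold.
from typing import Dict, List, Optional, Sequence

class TrainingDatabase:
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.actions: Dict[str, List[float]] = {}

    def learn(self, name: str, samples: Sequence[Sequence[float]]) -> None:
        """Summarize several collected samples of one action into reference parameters."""
        n = len(samples)
        self.actions[name] = [sum(col) / n for col in zip(*samples)]

    def recognize(self, features: Sequence[float]) -> Optional[str]:
        """Return the best-matching action name, or None if nothing clears the threshold."""
        best_name, best_score = None, 0.0
        for name, ref in self.actions.items():
            score = matching_degree(features, ref)   # helper from the earlier sketch
            if score > best_score:
                best_name, best_score = name, score
        return best_name if best_score >= self.threshold else None
```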
  • the system database pre-stores the feature parameters and the operation instructions that perform the corresponding operations.
  • for example, the pre-stored feature parameters and their corresponding operation instructions may include the following: for a fist-clenching action, the feature parameters include the spatial three-dimensional coordinates of the fist relative to the sensor and the relative motion trajectory of each finger during the clenching action, and these parameters are bound and stored with the screen-scrolling operation instruction.
  • an instruction corresponding to the pre-stored feature parameter is acquired as an operation instruction.
  • the acquired three-dimensional stereoscopic image is an action of clenching a fist, and a scrolling screen instruction corresponding thereto is obtained.
  • the operation corresponding to the operation instruction is executed according to the acquired operation instruction. For example, if the scroll screen command is obtained, the screen scrolling is controlled.
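  • the binding between recognized actions and operation instructions can be pictured as a simple lookup table, as in the sketch below; the action names and handler functions are placeholders, not APIs defined by the patent:

```python
# Illustrative sketch: bind recognized actions to operation instructions, as in
# "fist-clenching -> scroll screen" above, and execute the bound operation.
def scroll_screen():  print("scrolling the screen")
def volume_up():      print("turning volume up")
def volume_down():    print("turning volume down")

OPERATION_BINDINGS = {
    "fist_clench": scroll_screen,
    "palm_up":     volume_up,
    "palm_down":   volume_down,
}

def execute(action_name: str) -> None:
    """Perform the operation bound to a recognized action, if any."""
    handler = OPERATION_BINDINGS.get(action_name)
    if handler is not None:
        handler()
```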
  • for example, in the predetermined sensing area on the left side of the screen, a palm-up motion controls the volume to be turned up and a palm-down motion controls the volume to be turned down; in the predetermined sensing area on the right side of the screen, an upward fist motion controls the screen brightness to be increased and a downward fist motion controls the screen brightness to be decreased.
  • when the palm is detected in the predetermined sensing area on the left side of the screen, an upward palm motion turns the volume up and a downward palm motion turns the volume down.
  • when the fist-clenching action is detected in the predetermined sensing area on the right side of the screen, an upward movement brightens the screen and a downward movement darkens the screen.
  • when the fist-clenching action is detected in the predetermined area on the left side of the screen, or when the palm is detected in the predetermined area on the right side of the screen, no response is made.
  • alternatively, if predetermined areas are not set to correspond to different operations, only different actions may be set to correspond to different operations, with the entire screen serving as the sensing area; as long as an action is sensed and its match with a preset action reaches the predetermined threshold, the operation corresponding to that action is performed.
  • the predetermined threshold mentioned here is a threshold for measuring the degree of matching, and can be set as needed. For example, when a high matching degree is not required, the threshold may be set to 50%, that is, as long as the matching degree between the extracted feature parameters and the pre-stored feature parameters reaches 50% or more, the operation corresponding to the pre-stored feature parameters is performed. If a higher matching degree is required before the operation is performed, the threshold can be increased accordingly; for example, if the threshold is set to 90%, the corresponding operation will be performed only when the matching degree reaches 90% or more.
  • the sensing area here may be a predetermined spatial area range, or may be the entire area in which the 3D sensor can acquire signals.
  • for example, the predetermined spatial range corresponding to the left side of the screen may be preset as the sensing area, so that only actions in the sensing area are recognized and responded to, while actions outside the sensing area are not responded to. In the case where no sensing area is set, by default the entire area in which the 3D sensor can acquire signals is the sensing area.
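  • the sensing-area behaviour described above (left area for volume, right area for brightness, no response otherwise) could be dispatched as in the following sketch; the region boundaries and operation names are illustrative assumptions:

```python
# Hedged sketch: the same gesture can map to different operations depending on
# which predetermined spatial region the hand is in (left: volume, right:
# brightness); anything else gets no response.
from typing import Optional, Tuple

Point3D = Tuple[float, float, float]

def region_of(position: Point3D) -> Optional[str]:
    """Classify the hand position into the left or right sensing region (by X)."""
    x, _, _ = position
    if x < -0.1:
        return "left"
    if x > 0.1:
        return "right"
    return None

AREA_BINDINGS = {
    ("left",  "palm_up"):   "volume_up",
    ("left",  "palm_down"): "volume_down",
    ("right", "fist_up"):   "brightness_up",
    ("right", "fist_down"): "brightness_down",
}

def dispatch(position: Point3D, action: str) -> Optional[str]:
    """Return the operation for this (region, action) pair; None means no response."""
    region = region_of(position)
    return AREA_BINDINGS.get((region, action)) if region else None
```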
  • the somatosensory interaction method provided by the embodiment of the present invention can be used to control screen scrolling, volume adjustment, brightness adjustment, screen scrolling speed adjustment, and the like, but is of course not limited thereto.
  • other applications are not illustrated here by way of example.
  • the somatosensory interaction method of this embodiment collects a three-dimensional stereoscopic image of a human body part in the activated state of the somatosensory interaction system, extracts feature parameters from the three-dimensional stereoscopic image of the human body part, matches the extracted feature parameters with pre-stored feature parameters, and, when the matching degree reaches a predetermined threshold, performs the operation corresponding to the pre-stored feature parameters. In this way, without relying on an external input device, the corresponding operation can be performed by sensing the spatial action of the human body part, giving the user a better use experience.
  • FIG. 3 is a flowchart of a method for activating a somatosensory interaction system according to an embodiment of the present invention.
  • the method for activating a somatosensory interaction system in this embodiment includes:
  • a three-dimensional stereoscopic image within a predetermined spatial range is acquired by a 3D sensor.
  • the acquired three-dimensional stereoscopic image includes all objects within the monitoring range of the 3D sensor lens.
  • for example, if the monitoring range of the 3D sensor lens contains a table, a chair, and a person, the acquired three-dimensional stereoscopic image includes all of these objects.
  • S302 processing the three-dimensional stereoscopic image, determining whether the three-dimensional stereoscopic image includes a three-dimensional stereoscopic image for activating a human body part of the somatosensory interaction system;
  • the three-dimensional stereoscopic image acquired by the 3D sensor is processed to determine whether the three-dimensional stereoscopic image includes a three-dimensional stereoscopic image for activating a human body part of the somatosensory interaction system. For example, if the preset human body part for activating the somatosensory interaction system is a human hand, it is recognized whether the human hand is included from the acquired three-dimensional stereoscopic image. If a three-dimensional stereoscopic image for activating a human body part of the somatosensory interaction system is included in the three-dimensional stereoscopic image, step S303 is performed.
  • the processing of the three-dimensional stereoscopic image of the human body part into the activation instruction specifically includes: extracting the feature parameters of the three-dimensional stereoscopic image of the human body part, the feature parameters including the three-dimensional coordinates of the human body part and the spatial motion trajectory of the human body part; matching the feature parameters with the pre-stored feature parameters for activating the somatosensory interaction system; and, when the matching degree reaches a predetermined threshold, acquiring the instruction corresponding to the pre-stored feature parameters as the activation instruction.
  • feature extraction is performed on the collected three-dimensional stereoscopic image of the human body part to obtain the feature parameters, which include a position parameter and a motion trajectory parameter.
  • the position parameter, that is, the spatial position of the human body part, is represented by three-dimensional coordinates; the motion trajectory parameter is the motion trajectory of the human body part in space.
  • for example, for a palm gripping action, the extracted parameters include the actual three-dimensional coordinates X, Y, Z of the palm, which determine the specific positional relationship between the palm and the 3D sensor, and the spatial motion trajectory of the palm, that is, the motion trajectory of the gripping action.
  • the extracted feature parameters are matched with the pre-stored feature parameters for activating the somatosensory interaction system.
  • for example, suppose the somatosensory interaction system is activated through a palm motion, and the pre-stored characteristic parameters of the palm motion are A, B, and C.
  • when a three-dimensional stereoscopic image of a palm is collected, the extracted feature parameters A', B', and C' are matched with A, B, and C, and it is judged whether the matching degree reaches a predetermined threshold.
  • the predetermined threshold is a value of a preset matching degree.
  • the predetermined threshold may be set to 80%.
  • the system may further determine whether the three-dimensional stereoscopic image of the human body part for activating the somatosensory interaction system lasts for a predetermined time, and only when the duration reaches the predetermined time is the instruction corresponding to the pre-stored feature parameters acquired as the activation instruction. In this way, false triggering of the activation of the somatosensory interaction system can be effectively prevented.
  • for example, suppose the preset body part for activating the somatosensory interaction system is the palm of the hand.
  • a user may inadvertently make a palm motion. After the system collects and recognizes the palm motion, it further determines whether the palm motion lasts for the predetermined time; if it does not, it can be judged that this is only a misoperation, and the somatosensory interaction system will not be activated.
  • the predetermined time here can be preset according to needs, for example, set to 10 seconds, 30 seconds, and the like.
  • when the system has collected and recognized the human body part but the predetermined time has not yet been reached, a progress bar may be displayed on the screen to indicate the activation state of the somatosensory interaction system.
  • the progress bar can display the speed of the system activation, the degree of completion, the amount of remaining unfinished tasks, and the processing time that may be required in real time.
  • the progress bar can be displayed as a rectangular strip. When the progress bar is full, the activation condition of the somatosensory interaction system is satisfied and the somatosensory interaction system is activated. In this way, the user can have an intuitive understanding of the activation state of the somatosensory interaction system, and a user who made the gesture by mistake can stop it in time to avoid false triggering of the activation of the somatosensory interaction system.
  • S304 Activate the somatosensory interaction system according to the activation instruction.
  • the somatosensory interaction system is activated to enter the somatosensory interaction state.
  • correspondingly, the user may be prompted that the somatosensory interaction system has been activated.
  • it can be prompted by displaying the predetermined area of the screen in a highlighted state.
  • the predetermined area here may be a preset somatosensory sensing area corresponding to a plane area on the screen, such as an area of a certain size on the left side of the screen, or an area of a certain size on the right side of the screen. Of course, it can also be the entire screen.
  • the user may also be prompted by other means, such as by popping up a prompt that the somatosensory interaction system has been activated, or by a voice prompt, etc., which is not limited by the present invention.
  • an icon that moves in synchronization with the human body part is displayed on the screen.
  • the icon that moves synchronously with the human body part may be an icon similar to the human body part, for example, the human body part is a human hand, and the icon may be a hand shape icon. Of course, it can be other forms of icons, such as triangle icons, dot icons, and so on.
  • the icon on the screen follows the movement of the body part and moves correspondingly on the screen. For example, the human hand moves to the right in space, and the icon also moves to the right along the screen.
  • the somatosensory interaction system activation method of the embodiment can activate the somatosensory interaction system by acquiring a three-dimensional stereoscopic image within a predetermined spatial range and identifying a human body part for activating the somatosensory interaction system to acquire an activation instruction.
  • the activation method is flexible and convenient, giving the user a good activation experience. Moreover, during the activation process, the progress bar and the judgment of the duration can be used to effectively prevent the misoperation, and the user can have an intuitive understanding of the activation progress.
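  • tying the activation flow together, the following sketch strings the earlier hypothetical helpers (`detect_hand`, the training database, the activation timer) into one loop: acquire a frame, check for the activating body part, match its features, require the gesture to persist, and only then activate; `extract_features` and `show_progress_bar` are additional hypothetical hooks, not functions defined by the patent:

```python
# A minimal sketch of the activation flow of FIG. 2 / FIG. 3 under the stated
# assumptions; helper names come from the earlier illustrative sketches.
def try_activate(sensor, database, timer) -> bool:
    while True:
        frame = sensor.read_frame()
        if frame is None:
            return False                       # sensor stream ended
        hand = detect_hand(frame)              # S202/S302: is the body part present?
        recognized = False
        if hand is not None:
            features = extract_features(frame, hand)          # hypothetical extractor
            recognized = database.recognize(features) == "activation_gesture"
        progress = timer.update(recognized)    # drives the on-screen progress bar
        show_progress_bar(progress)            # hypothetical UI hook
        if progress >= 1.0:
            return True                        # S203/S303: convert to activation instruction
```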
  • FIG. 4 is a schematic structural diagram of a somatosensory interaction system according to an embodiment of the present invention.
  • the somatosensory interaction system of this embodiment is used to perform the somatosensory interaction method of the embodiment shown in FIG. 1.
  • the somatosensory interaction system 100 includes an acquisition module 11, a conversion module 12, and a processing module 13, wherein:
  • the collecting module 11 is configured to collect a three-dimensional stereoscopic image of a human body part in an activated state of the somatosensory interaction system
  • the somatosensory interaction system needs to be activated before somatosensory interaction can be performed.
  • the somatosensory interaction system of the present embodiment further includes an activation module 14 for activating the somatosensory interaction system.
  • FIG. 5 is a schematic structural diagram of an activation module according to an embodiment of the present invention.
  • the activation module 14 includes an acquisition unit 141, a determination unit 142, a conversion unit 143, and an activation unit 144, where:
  • the collecting unit 141 is configured to collect a three-dimensional stereoscopic image
  • the acquisition unit 141 acquires a three-dimensional stereoscopic image within a predetermined spatial range through a 3D sensor.
  • the acquired three-dimensional stereoscopic image includes all objects within the monitoring range of the 3D sensor lens.
  • for example, if the monitoring range of the 3D sensor lens contains a table, a chair, and a person, the acquired three-dimensional stereoscopic image includes all of these objects.
  • the determining unit 142 is configured to process the three-dimensional stereoscopic image, determine whether the three-dimensional stereoscopic image includes a three-dimensional stereoscopic image for activating a human body part of the somatosensory interaction system, and output the determination result to the conversion unit 143.
  • the determining unit 142 processes the three-dimensional stereoscopic image acquired by the 3D sensor, and determines whether the three-dimensional stereoscopic image includes a three-dimensional stereoscopic image for activating a human body part of the somatosensory interactive system. For example, if the preset human body part for activating the somatosensory interaction system is a human hand, it is recognized whether the human hand is included from the acquired three-dimensional stereoscopic image.
  • the conversion unit 143 is configured to process the three-dimensional stereo image of the human body part and convert it into an activation command.
  • the processing of the three-dimensional stereoscopic image of the human body part into the activation instruction specifically includes: extracting the feature parameters of the three-dimensional stereoscopic image of the human body part, the feature parameters including the three-dimensional coordinates of the human body part and the spatial motion trajectory of the human body part; matching the feature parameters with the pre-stored feature parameters for activating the somatosensory interaction system; and, when the matching degree reaches a predetermined threshold, acquiring the instruction corresponding to the pre-stored feature parameters as the activation instruction.
  • feature extraction is performed on the collected three-dimensional stereoscopic image of the human body part to obtain the feature parameters, which include a position parameter and a motion trajectory parameter.
  • the position parameter, that is, the spatial position of the human body part, is represented by three-dimensional coordinates; the motion trajectory parameter is the motion trajectory of the human body part in space.
  • for example, for a palm gripping action, the extracted parameters include the actual three-dimensional coordinates X, Y, Z of the palm, which determine the specific positional relationship between the palm and the 3D sensor, and the spatial motion trajectory of the palm, that is, the motion trajectory of the gripping action.
  • the extracted feature parameters are matched with the pre-stored feature parameters for activating the somatosensory interaction system.
  • for example, suppose the somatosensory interaction system is activated through a palm motion, and the pre-stored characteristic parameters of the palm motion are A, B, and C.
  • when a three-dimensional stereoscopic image of a palm is collected, the extracted feature parameters A', B', and C' are matched with A, B, and C, and it is judged whether the matching degree reaches a predetermined threshold.
  • the predetermined threshold is a value of a preset matching degree.
  • the predetermined threshold may be set to 80%.
  • the system may further determine whether the three-dimensional stereoscopic image of the human body part for activating the somatosensory interaction system lasts for a predetermined time, and only when the duration reaches the predetermined time is the instruction corresponding to the pre-stored feature parameters acquired as the activation instruction. In this way, false triggering of the activation of the somatosensory interaction system can be effectively prevented.
  • for example, suppose the preset body part for activating the somatosensory interaction system is the palm of the hand.
  • a user may inadvertently make a palm motion. After the system collects and recognizes the palm motion, it further determines whether the palm motion lasts for the predetermined time; if it does not, it can be judged that this is only a misoperation, and the somatosensory interaction system will not be activated.
  • the predetermined time here can be preset according to needs, for example, set to 10 seconds, 30 seconds, and the like.
  • when the system has collected and recognized the human body part but the predetermined time has not yet been reached, a progress bar may be displayed on the screen to indicate the activation state of the somatosensory interaction system.
  • the progress bar can display the speed of the system activation, the degree of completion, the amount of remaining unfinished tasks, and the processing time that may be required in real time.
  • the progress bar can be displayed as a rectangular strip. When the progress bar is full, the activation condition of the somatosensory interaction system is satisfied and the somatosensory interaction system is activated. In this way, the user can have an intuitive understanding of the activation state of the somatosensory interaction system, and a user who made the gesture by mistake can stop it in time to avoid false triggering of the activation of the somatosensory interaction system.
  • the activation unit 144 is configured to activate the somatosensory interaction system in accordance with the activation instruction.
  • the somatosensory interaction system is activated to enter the somatosensory interaction state.
  • the acquisition module 11 collects a three-dimensional stereoscopic image of a human body part within a predetermined spatial range by using a 3D sensor in an activated state of the somatosensory interaction system.
  • the 3D sensor is capable of acquiring a three-dimensional image of the object in a spatial position, and the acquired three-dimensional image includes the spatial position coordinates of the object and the spatial motion trajectory.
  • the spatial motion trajectory described in the embodiments of the present invention includes the posture of the human body part and the specific action of the human body part. For example, if the user makes a fist gesture in front of the 3D somatosensory sensor and slides it within the spatial range, the 3D sensor collects the three-dimensional stereoscopic image of the user's hand and extracts its features, that is, it obtains the three-dimensional coordinates of the hand relative to the 3D sensor as well as the gripping and sliding movements of the hand. The processing of other three-dimensional stereoscopic images is similar and is not described here by way of further example.
  • the human body part mentioned in the embodiment of the present invention may be a human hand. Of course, it can also be other human body parts for operation such as a human face, a human foot, and the like.
  • the conversion module 12 is configured to process a three-dimensional stereo image of a human body part and convert it into an operation instruction.
  • FIG. 6 is a schematic structural diagram of a conversion module 12 according to an embodiment of the present invention. As shown in the figure, the conversion module 12 further includes a feature extraction unit 121, a matching unit 122, and an acquisition unit 123, where:
  • the feature extraction unit 121 is configured to perform feature extraction on the three-dimensional image of the human body part to acquire feature parameters, where the feature parameters include three-dimensional coordinates of the human body part and spatial motion trajectories of the human body part.
  • the feature extraction unit 121 performs feature extraction on the acquired three-dimensional stereoscopic image of the human body part to obtain the feature parameters, wherein the feature parameters include the spatial three-dimensional coordinates of the human body part and the spatial motion trajectory of the human body part.
  • through feature extraction, it is possible to recognize the motion of a human body part. For example, for a gripping action by a human hand, by collecting the stereoscopic image of the grip and performing feature extraction, the motion can be recognized as a gripping action according to the extracted parameters.
  • the present invention includes a learning and training process to establish a training database. For example, in order to recognize a gripping movement of a human hand, the system collects three-dimensional stereoscopic images of various gripping actions and learns from these different gripping actions to obtain the specific feature parameters that identify this specific action. For each different action, the system performs such a learning and training process, and the specific feature parameters corresponding to the various specific actions constitute the training database. When the system acquires a three-dimensional stereoscopic image, feature extraction is performed on the stereoscopic image, and a matching specific action is found in the training database as the recognition result.
  • the matching unit 122 is configured to match the feature parameter with the pre-stored feature parameter
  • the system database pre-stores the feature parameters and the operation instructions that perform the corresponding operations.
  • for example, the pre-stored feature parameters and their corresponding operation instructions may include the following: for a fist-clenching action, the feature parameters include the spatial three-dimensional coordinates of the fist relative to the sensor and the relative motion trajectory of each finger during the clenching action, and these parameters are bound and stored with the screen-scrolling operation instruction.
  • the obtaining unit 123 is configured to acquire an instruction corresponding to the pre-stored feature parameter as an operation instruction when the matching degree reaches a predetermined threshold.
  • the obtaining unit 123 acquires the instruction corresponding to the pre-stored feature parameters as the operation instruction.
  • the acquired three-dimensional stereoscopic image is an action of clenching a fist, and a scrolling screen instruction corresponding thereto is obtained.
  • the processing module 13 is configured to perform a corresponding operation according to the operation instruction.
  • the processing module 13 controls execution of an operation corresponding to the operation instruction. For example, if the scroll screen command is obtained, the screen scrolling is controlled.
  • for example, in the predetermined sensing area on the left side of the screen, a palm-up motion controls the volume to be turned up and a palm-down motion controls the volume to be turned down; in the predetermined sensing area on the right side of the screen, an upward fist motion controls the screen brightness to be increased and a downward fist motion controls the screen brightness to be decreased.
  • when the palm is detected in the predetermined sensing area on the left side of the screen, an upward palm motion turns the volume up and a downward palm motion turns the volume down.
  • when the fist-clenching action is detected in the predetermined sensing area on the right side of the screen, an upward movement brightens the screen and a downward movement darkens the screen.
  • when the fist-clenching action is detected in the predetermined area on the left side of the screen, or when the palm is detected in the predetermined area on the right side of the screen, no response is made.
  • the predetermined threshold mentioned here is a threshold for measuring the degree of matching, and can be set as needed. For example, when a high matching degree is not required, the threshold may be set to 50%, that is, as long as the matching degree between the extracted feature parameters and the pre-stored feature parameters reaches 50% or more, the operation corresponding to the pre-stored feature parameters is performed. If a higher matching degree is required before the operation is performed, the threshold can be increased accordingly; for example, if the threshold is set to 90%, the corresponding operation will be performed only when the matching degree reaches 90% or more.
  • the sensing area here may be a predetermined spatial area range, or may be the entire area in which the 3D sensor can acquire signals.
  • for example, the predetermined spatial range corresponding to the left side of the screen may be preset as the sensing area, so that only actions in the sensing area are recognized and responded to, while actions outside the sensing area are not responded to. In the case where no sensing area is set, by default the entire area in which the 3D sensor can acquire signals is the sensing area.
  • the somatosensory interaction system of the present embodiment further includes a display module 15 for displaying an icon moving synchronously with the human body part on the screen after the somatosensory interaction system is activated.
  • the icon that moves synchronously with the human body part may be an icon similar to the human body part, for example, the human body part is a human hand, and the icon may be a hand shape icon. Of course, it can be other forms of icons, such as triangle icons, dot icons, and so on.
  • the icon on the screen follows the movement of the body part and moves correspondingly on the screen. For example, the human hand moves to the right in space, and the icon also moves to the right along the screen.
  • the somatosensory interaction system of this embodiment further includes a prompting module 16 for prompting the user that the somatosensory interaction system is activated.
  • correspondingly, the user may be prompted that the somatosensory interaction system has been activated.
  • it can be prompted by displaying the predetermined area of the screen in a highlighted state.
  • the predetermined area here may be a preset somatosensory sensing area, such as an area of a certain size on the left side of the screen, or an area of a certain size on the right side of the screen. Of course, it can also be the entire screen.
  • the user may also be prompted by other means, such as by popping up a prompt that the somatosensory interaction system has been activated, or by a voice prompt, etc., which is not limited by the present invention.
  • the somatosensory interaction method provided by the embodiment of the present invention can be used to control screen scrolling, volume adjustment, brightness adjustment, screen scrolling speed adjustment, and the like, but is of course not limited thereto.
  • other applications are not illustrated here by way of example.
  • FIG. 7 is a schematic structural diagram of another somatosensory interaction system according to an embodiment of the present invention.
  • the somatosensory interaction system of this embodiment is used to execute the activation method of the somatosensory interaction system in the embodiment shown in FIG. 3.
  • the somatosensory interaction system 200 of the present embodiment includes an acquisition module 21, a determination module 22, a conversion module 23, and an activation module 24, where:
  • the acquisition module 21 is configured to collect a three-dimensional stereoscopic image
  • the acquisition module 21 collects a three-dimensional stereoscopic image within a predetermined spatial range through a 3D sensor.
  • the acquired three-dimensional stereoscopic image includes all objects within the monitoring range of the 3D sensor lens.
  • for example, if the monitoring range of the 3D sensor lens contains a table, a chair, and a person, the acquired three-dimensional stereoscopic image includes all of these objects.
  • the determining module 22 is configured to process the three-dimensional stereoscopic image, determine whether the three-dimensional stereoscopic image includes a stereoscopic image of the human body part for activating the somatosensory interactive system, and output the determination result to the conversion module 23;
  • the determining module 22 processes the three-dimensional stereoscopic image acquired by the 3D sensor, and determines whether the three-dimensional stereoscopic image includes a three-dimensional stereoscopic image for activating a human body part of the somatosensory interaction system. For example, if the preset human body part for activating the somatosensory interaction system is a human hand, it is recognized whether the human hand is included from the acquired three-dimensional stereoscopic image.
  • the conversion module 23 is configured to process the three-dimensional stereoscopic image of the human body part and convert it into an activation instruction when the three-dimensional stereoscopic image includes a stereoscopic image of the human body part for activating the somatosensory interaction system;
  • the conversion module 23 processes the three-dimensional stereoscopic image of the human body part and converts it into the activation instruction, which specifically includes: extracting the feature parameters of the three-dimensional stereoscopic image of the human body part, the feature parameters including the three-dimensional coordinates of the human body part and the spatial motion trajectory of the human body part; matching the feature parameters with the pre-stored feature parameters for activating the somatosensory interaction system; and, when the matching degree reaches a predetermined threshold, acquiring the instruction corresponding to the pre-stored feature parameters as the activation instruction.
  • feature extraction is performed on the collected three-dimensional stereoscopic image of the human body part to obtain the feature parameters, which include a position parameter and a motion trajectory parameter.
  • the position parameter, that is, the spatial position of the human body part, is represented by three-dimensional coordinates; the motion trajectory parameter is the motion trajectory of the human body part in space.
  • for example, for a palm gripping action, the extracted parameters include the actual three-dimensional coordinates X, Y, Z of the palm, which determine the specific positional relationship between the palm and the 3D sensor, and the spatial motion trajectory of the palm, that is, the motion trajectory of the gripping action.
  • the extracted feature parameters are matched with the pre-stored feature parameters for activating the somatosensory interaction system.
  • for example, suppose the somatosensory interaction system is activated through a palm motion, and the pre-stored characteristic parameters of the palm motion are A, B, and C.
  • when a three-dimensional stereoscopic image of a palm is collected, the extracted feature parameters A', B', and C' are matched with A, B, and C, and it is judged whether the matching degree reaches a predetermined threshold.
  • the predetermined threshold is a value of a preset matching degree.
  • the predetermined threshold may be set to 80%.
  • the system may further determine whether the three-dimensional stereoscopic image of the human body part for activating the somatosensory interaction system lasts for a predetermined time, and only when the duration reaches the predetermined time is the instruction corresponding to the pre-stored feature parameters acquired as the activation instruction. In this way, false triggering of the activation of the somatosensory interaction system can be effectively prevented.
  • for example, suppose the preset body part for activating the somatosensory interaction system is the palm of the hand.
  • a user may inadvertently make a palm motion. After the system collects and recognizes the palm motion, it further determines whether the palm motion lasts for the predetermined time; if it does not, it can be judged that this is only a misoperation, and the somatosensory interaction system will not be activated.
  • the predetermined time here can be preset according to needs, for example, set to 10 seconds, 30 seconds, and the like.
  • when the system has collected and recognized the human body part but the predetermined time has not yet been reached, a progress bar may be displayed on the screen to indicate the activation state of the somatosensory interaction system.
  • the progress bar can display the speed of the system activation, the degree of completion, the amount of remaining unfinished tasks, and the processing time that may be required in real time.
  • the progress bar can be displayed as a rectangular strip. When the progress bar is full, the activation condition of the somatosensory interaction system is satisfied and the somatosensory interaction system is activated. In this way, the user can have an intuitive understanding of the activation state of the somatosensory interaction system, and a user who made the gesture by mistake can stop it in time to avoid false triggering of the activation of the somatosensory interaction system.
  • the activation module 24 activates the somatosensory interaction system in accordance with the activation command.
  • the activation module 24 activates the somatosensory interaction system to enter the somatosensory interaction state according to the acquired activation instruction.
  • the somatosensory interaction system of this embodiment may further include a prompting module 25 for prompting the user that the somatosensory interaction system is activated after the somatosensory interaction system is activated.
  • correspondingly, the user may be prompted that the somatosensory interaction system has been activated.
  • it can be prompted by displaying the predetermined area of the screen in a highlighted state.
  • the predetermined area here may be a preset somatosensory sensing area corresponding to a plane area on the screen, such as an area of a certain size on the left side of the screen, or an area of a certain size on the right side of the screen. Of course, it can also be the entire screen.
  • the user may also be prompted by other means, such as by popping up a prompt that the somatosensory interaction system has been activated, or by a voice prompt, etc., which is not limited by the present invention.
  • the somatosensory interaction system of the present embodiment may further include a display module 26 for displaying an icon moving synchronously with the human body part on the screen after the somatosensory interaction system is activated.
  • the icon that moves synchronously with the human body part may be an icon resembling that body part; for example, if the human body part is a human hand, the icon may be a hand-shaped icon. Of course, it can also be another form of icon, such as a triangle icon or a dot icon.
  • the icon on the screen follows the movement of the body part and moves correspondingly on the screen; for example, when the human hand moves to the right in space, the icon also moves to the right on the screen. A sketch of this coordinate mapping follows below.
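  • mapping the tracked body part to an icon that moves synchronously on screen amounts to converting sensor-space coordinates into pixel coordinates. The sketch below is illustrative only: it assumes a fixed rectangular interaction window in front of the 3D sensor and a 1920x1080 screen, and the function name hand_to_screen and the window extents are not taken from the original text.

```python
def hand_to_screen(hand_x, hand_y,
                   x_range=(-0.4, 0.4),   # metres, left..right of the sensing window
                   y_range=(-0.3, 0.3),   # metres, bottom..top of the sensing window
                   screen=(1920, 1080)):
    """Map a hand position in sensor space to icon pixel coordinates.

    Moving the hand to the right in space moves the icon to the right
    on the screen; the same applies vertically.
    """
    def norm(v, lo, hi):
        return min(1.0, max(0.0, (v - lo) / (hi - lo)))

    px = int(norm(hand_x, *x_range) * (screen[0] - 1))
    # Screen y grows downwards while sensor y grows upwards, hence the inversion.
    py = int((1.0 - norm(hand_y, *y_range)) * (screen[1] - 1))
    return px, py

# Hand slightly right of centre and slightly above centre:
print(hand_to_screen(0.2, 0.1))   # e.g. (1439, 359)
```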
  • the somatosensory interaction system of this embodiment can be activated by collecting a three-dimensional stereoscopic image within a predetermined spatial range, identifying the human body part used for activating the somatosensory interaction system, and thereby acquiring an activation instruction.
  • this activation method is flexible and convenient, giving the user a good activation experience. Moreover, during the activation process, the progress bar and the duration check can effectively prevent misoperation, and the user can gain an intuitive understanding of the activation progress.
  • the embodiment of the present invention further provides an electronic device, where the electronic device includes the somatosensory interaction system described in the foregoing embodiment.
  • the electronic device can be, but is not limited to, a smart TV, a smart phone, a tablet computer, a notebook computer, and the like.
  • the somatosensory interaction method and system provided by the embodiments of the present invention respond with a predetermined operation by collecting a three-dimensional stereoscopic image within a spatial range and performing image feature extraction and matching.
  • human-computer interaction therefore no longer depends on a specific input device: the user can control the smart device without touching any input or output device, which makes human-computer interaction more natural and convenient and gives the user a better experience. A sketch of the resulting interaction loop follows below.
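  • taken together, the activated interaction loop reduces to: capture a frame, extract feature parameters, match them against pre-stored gesture templates, and dispatch the bound operation. The sketch below wires the earlier pieces into such a loop as one possible reading of the method; sensor.read_frame(), extract_features() and recognize() stand in for a real 3D sensor driver and recognizer, which the text does not name, and the gesture-to-operation bindings (a fist movement for screen scrolling, palm up/down for volume) are merely the examples mentioned in the description.

```python
# Illustrative gesture-to-operation bindings taken from the examples in the text.
OPERATIONS = {
    "fist_move": lambda: print("scroll screen"),
    "palm_up":   lambda: print("volume up"),
    "palm_down": lambda: print("volume down"),
}

def interaction_loop(sensor, extract_features, recognize, threshold=0.8):
    """Run the somatosensory interaction loop while the system is activated.

    `sensor`, `extract_features` and `recognize` are placeholders for the
    3D sensor driver, the feature extractor and the template matcher; they
    are not APIs defined by the original text.
    """
    while True:
        frame = sensor.read_frame()                 # 3D stereoscopic image
        if frame is None:
            break
        features = extract_features(frame)          # 3D coordinates + motion trajectory
        gesture, score = recognize(features)        # best-matching pre-stored template
        if gesture in OPERATIONS and score >= threshold:
            OPERATIONS[gesture]()                   # perform the bound operation
```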
  • in the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a somatosensory interaction system activation method, and a somatosensory interaction method and system. The somatosensory interaction method comprises: collecting a three-dimensional stereoscopic image of a human body part while the somatosensory interaction system is in an activated state; processing the three-dimensional stereoscopic image of the human body part, converting it into an operation instruction, and performing the corresponding operation according to the operation instruction. In this way, the invention can perform the corresponding operation by sensing the spatial motion of a human body part without relying on an external input device, giving the user a better experience.

Description

一种体感交互系统激活方法、体感交互方法及系统
【技术领域】
本发明人机交互领域,具体涉及一种体感交互系统激活方法、体感交互方法及系统。
【背景技术】
人机交互技术是指通过输入输出设备,以有效的方式实现人与机器对话的技术。现有的人机交互的交互方式通常是通过鼠标、键盘、触摸屏或者手柄等外部设备与机器系统进行交互,机器系统再做出相应的响应。
但是,通过鼠标、键盘、触摸屏或者手柄等外部设备作为输入设备,存在明确局限性,这样的方式,用户必须与输入设备进行直接接触才能完成输入,使得交互的完成必须依赖于外部设备,束缚用户的行为方式,具体实现显得不自然,不真实。
【发明内容】
本发明主要解决的技术问题是提供一种体感交互系统激活方法、体感交互方法及系统,能够不需要依赖外部输入设备,通过感应人体部位的动作,就能执行相应的操作。
为解决上述技术问题,本发明采用的一个技术方案是:提供一种体感交互方法,所述方法包括:在体感交互系统激活状态下,采集人体部位的三维立体图像;对所述人体部位的三维立体图像进行处理,转化为操作指令;根据所述操作指令执行对应的操作。
其中,所述对所述人体部位的三维立体图像进行处理,转化为操作指令包括:将所述人体部位的三维立体图像进行特征提取获取特征参数,所述特征参数包括所述人体部位的三维坐标以及所述人体部位的空间运动轨迹;将所述特征参数与预存的特征参数进行匹配;当匹配度达到预定阈值时,获取与所述预存的特征参数对应的指令,以作为所述操作指令。
其中,所述方法还包括:激活所述体感交互系统。
其中,所述激活所述体感交互系统包括:采集三维立体图像;对所述三维立体图像进行处理,判断所述三维立体图像是否包含用于激活体感交互系统的人体部位的三维立体图像;当所述三维立体图像包含用于激活体感交互系统的人体部位立体图像时,对所述人体部位的三维立体图像进行处理,转化为激活指令;根据所述激活指令激活所述体感交互系统。
其中,所述对所述人体部位的三维立体图像进行处理,转化为激活指令包括:将所述人体部位的三维立体图像进行特征提取获取特征参数,所述特征参数包括所述人体部位的三维坐标以及所述人体部位的空间运动轨迹;将所述特征参数与预存的激活所述体感交互系统的特征参数进行匹配;当匹配度达到预定阈值时,获取与所述预存的特征参数对应的指令,以作为所述激活指令。
其中,所述激活所述体感交互系统之后还包括:在屏幕上显示与所述人体部位同步移动的图标。
其中,所述激活所述体感交互系统之后,还包括:提示用户所述体感交互系统被激活。
其中,提示用户所述体感交互系统被激活包括:将屏幕预定区域以高亮状态进行显示以提示用户所述体感交互系统被激活。
为解决上述技术问题,本发明采用的另一个技术方案是:提供一种体感交互系统激活方法,所述方法包括:采集三维立体图像;对所述三维立体图像进行处理,判断所述三维立体图像是否包含用于激活体感交互系统的人体部位立体图像;当所述三维立体图像包含用于激活体感交互系统的人体部位立体图像时,对所述人体部位的三维立体图像进行处理,转化为激活指令;根据所述激活指令激活所述体感交互系统。
为解决上述技术问题,本发明采用的又一个技术方案是:提供一种体感交互系统,所述系统包括采集模块、转化模块以及处理模块,其中:所述采集模块用于在体感交互系统激活状态下,采集人体部位的三维立体图像;所述转化模块用于对所述人体部位的三维立体图形进行处理,转化为操作指令;所述处理模块用于根据所述操作指令执行对应的操作。
其中,所述转化模块包括特征提取单元、匹配单元以及获取单元,其中:所述特征提取单元用于将所述人体部位的三维立体图像进行特征提取获取特征参数,所述特征参数包括所述人体部位的三维坐标以及所述人体部位的空间运动轨迹;所述匹配单元用于将所述特征参数与预存的特征参数进行匹配;所述获取单元用于,当匹配度达到预定阈值时,获取与所述预存的特征参数对应的指令,以作为所述操作指令。
其中,所述系统还包括激活模块,所述激活模块用于激活所述体感交互系统。
其中,所述激活模块包括采集单元、判断单元、转化单元以及激活单元,其中:采集单元用于采集人体部位的三维立体图像;判断单元用于对所述三维立体图像进行处理,判断所述三维立体图像是否包含用于激活体感交互系统的人体部位立体图像;所述转化单元用于,当所述三维立体图像包含用于激活体感交互系统的人体部位立体图像时,对所述人体部位的三维立体图像进行处理,转化为激活指令;所述激活单元用于根据所述激活指令激活所述体感交互系统。
其中,所述系统还包括显示模块,所述显示模块用于在屏幕上显示与所述人体部位同步移动的图标。
其中,所述系统还包括提示模块,所述提示模块用于提示用户所述体感交互系统被激活。
本发明的有益效果是:区别于现有技术的情况,本发明在激活体感交互系统的状态下,采集人体部位的三维立体图像,对人体部位的三维立体图像进行处理,转化为操作指令,根据操作指令执行对应的操作。通过这样的方式,不需要依赖于外部输入设备,通过感应人体部位的空间动作,就能执行相应的操作,给用户更好的使用体验。
【附图说明】
图1是本发明实施例提供的体感交互方法的流程图;
图2是本发明实施例提供的激活体感交互系统的流程图;
图3是本发明实施例提供的体感交互系统激活方法的流程图;
图4是本发明实施例提供的一种体感交互系统的结构示意图;
图5是本发明实施例提供的体感交互系统的激活模块的结构示意图;
图6是本发明实施例提供的体感交互系统的转化模块的结构示意图;
图7是本发明实施例提供的另一种体感交互系统的结构示意图。
【具体实施方式】
请参阅图1,图1是本发明实施例提供的一种体感交互方法的流程图,如图所示,本实施例的体感交互方法包括:
S101:在体感交互系统激活状态下,采集人体部位的三维立体图像;
在本发明实施例中,需要进行体感交互之前,需要先激活体感交互系统。
其中,请参阅图2,图2是本发明实施例提供的激活体感交互系统的流程图,本实施例的激活体感交互系统包括以下步骤:
S201:采集三维立体图像;
通过3D传感器采集预定空间范围内的三维立体图像。所采集的三维立体图像包括3D传感器镜头监控范围的所有物体。比如3D传感器镜头前包括桌子、椅子以及人,那么所采集的三维立体图像包括所有的这些物件。
S202:对三维立体图像进行处理,判断三维立体图像是否包含用于激活体感交互系统的人体部位的三维立体图像;
对3D体感器采集的三维立体图形进行处理,判断该三维立体图像中是否包含用于激活体感交互系统的人体部位的三维立体图像。比如预设的用于激活体感交互系统的人体部位为人手,则从采集的三维立体图像中识别是否包括人手。如果三维立体图像中包括用于激活体感交互系统的人体部位的三维立体图像,则执行步骤S203。
S203:对人体部位的三维立体图像进行处理,转化为激活指令;
其中,对人体部位的三维立体图像进行处理,转化为激活指令具体包括:将人体部位的三维立体图像进行特征提取获取特征参数,特征参数包括人体部位的三维坐标以及人体部位的空间运动轨迹,将特征参数与预存的激活体感交互系统的特征参数进行匹配,当匹配度达到预定阈值时,获取与预存的特征参数对应的指令,以作为激活指令。
将采集的人体部位的三维立体图像进行特征提取,获取得到特征参数,特征参数包括位置参数以及动作轨迹参数。位置参数即人体部位所处的空间位置,用三维坐标表示,运动轨迹即人体部位在空间上的运动轨迹。比如一个手掌抓握的动作,其参数提取即包括手掌当前所处的实际三维坐标X、Y、Z的具体数值,以确定手掌与3D传感器的具体位置关系。还包括手掌的空间运动轨迹即抓握的动作轨迹。
在提取得到特征参数以后,将提取的特征参数与预存的用于激活体感交互系统的特征参数进行匹配。
比如通过一个手掌的动作来激活体感交互系统,预存的手掌的动作的特征参数包括A、B、C,当前采集到一个手掌的三维立体图像,提取得到的特征参数包括A'、B'、C',将A'、B'、C'与A、B、C进行匹配,并判断匹配度是否达到预定阈值。
当提取的特征参数与预存的用于激活体感交互系统的特征参数的匹配度达到预定阈值时,获取与预存的特征参数对应的指令,以作为激活指令。预定阈值是预先设置的匹配程度的值,比如可以设置预定阈值为80%,当匹配度达到80%或以上,即获取与预存的特征参数对应的指令,以作为激活指令。
作为一种优选的实现方案,在提取的特征参数与预存的用于激活体感交互系统的特征参数的匹配度达到预定阈值时,系统可以进一步判断所述三维立体图像中包含用于激活体感交互系统的人体部件的三维立体图像所持续的时间是否达到预定时间。在持续的时间达到预定时间时,才获取与预存的特征参数对应的指令,以作为激活指令。通过这样的方式,可以有效防止误触发激活体感交互系统。
比如预设的用于激活体感交行系统的人体部件为手掌。当3D体感器前面用户当前正与另一个人在聊天,过程中可能不经意的会做出一个手掌的动作,系统在采集并识别出这个手掌动作之后,进一步判断立体图像中包含该手掌是否持续预定时间,如果没有持续达到预定时间,可以判断到这只是一个误操作,则不会激活体感交互系统。其中,这里的预定时间可以根据需要预先设定,比如设定为10秒、30秒等等。
作为一种更进一步的优选方案,当系统采集并识别到人体部位且持续未达到预定时间之前,可以在屏幕上显示一个进度条,以提示体感交互系统激活状态。进度条可以实时的,以图片形式显示系统激活的速度,完成度,剩余未完成任务量的大小,和可能需要处理时间。作为一种可能的实现,进度条可以长方形条状显示。当进度条满时,即表示达到激活体感交互系统条件,激活体感交互系统。通过这样的方式,能够让用户对体感交互系统激活状态的有直观的了解,也能够让误操作的用户及时停止手势动作以避免误触发激活体感交互系统。
S204:根据激活指令激活体感交互系统。
根据获取的激活指令,激活体感交互系统,以进入体感交互状态。
其中,在激活体感交互系统之后,可以给用户相应的体感交互系统已被激活的提示。其中,可以通过将屏幕预定区域以高亮状态进行显示作为提示。这里的预定区域可以是预设的体感感应区域对应到屏幕上的平面区域,比如屏幕左侧的一定面积的区域,或屏幕右侧一定面积的区域等。当然,也可以是整个屏幕。
当然,也可以通过其他的方式给用户提示,比如通过弹出体感交互系统已激活的提示,或者通过语音提示等等,本发明对此不作限定。
另外,在体感交互系统被激活后,在屏幕上显示与人体部位同步移动的图标。其中,与人体部位同步移动的图标可以是跟所述人体部位相似的图标,比如人体部位是人手,该图标可以是一个手形状的图标。当然,也可以是其他形式的图标,比如三角图标,圆点图标等。该屏幕上的图标跟随人体部位的移动而在屏幕上对应移动。比如人手在空间上向右移动,图标也跟随在屏幕上向右移动。
在体感交互系统激活状态下,通过3D传感器采集预定空间范围内人体部位的三维立体图像。3D传感器能够采集空间位置上物件的三维立体图像,所采集的三维立体图像包括物件所处的空间位置坐标以及空间运动轨迹。
本发明实施例所述的空间运动轨迹,包括人体部件的姿势以及人体部件的具体动作。比如用户在3D体感器前面做一个握拳的姿势并在空间范围内滑动,那么3D体感器采集该用户手部的三维立体图像,对该手部的三维立体图像进行特征提取,即获取到该手部距离3D传感器的三维坐标,以及该手部的握拳姿势和滑动的动作。其他三维立体图像的处理与此类似,本实施例不一一举例进行说明。
其中,本发明实施例所提到的人体部位可以是人手。当然也可以是其他的用于操作的人体部位比如人脸、人脚等等。
S102:对人体部位的三维立体图像进行处理,转化为操作指令;
其中,对人体部位的三维立体图像进行处理,转化为操作指令具体包括:将人体部位的三维立体图像进行特征提取获取特征参数,特征参数包括人体部位的三维坐标以及人体部位的空间运动轨迹,将特征参数与预存的特征参数进行匹配,当匹配度达到预定阈值时,获取与预存的特征参数对应的指令,以作为操作指令。
将所采集的人体部位的三维立体图像进行特征提取,以获取该采集的三维立体空间的特征参数,其中,这些特征参数包括该人体部位所处的空间三维坐标以及人体部位的空间运动轨迹。通过特征提取,能够识别人体部位距离3D传感器的具体空间位置以及人体部位的动作。比如人手做的一个抓握的动作,通过采集该抓握的立体图像,并通过特征提取,就能根据该特征提取的参数确定这个人手的具体空间位置并识别出该动作为一个抓握的动作。
作为一种可能的实现方式,本发明对动作的识别之前,包括一个学习训练以建立一个训练数据库的过程。比如为识别一个人手抓握的动作,系统会采集各种不同的抓握动作的三维立体图像,对这些不同的抓握动作进行学习,以获取用于识别这个具体动作的具体特征参数。针对每个不同的动作,系统都会做这么一个学习训练过程,各种不同具体动作对应的具体特征参数,构成训练数据库。当系统获取到一个三维立体图像时,就会将该立体图像进行特征提取,到训练数据库中找到与之匹配的具体动作,以作为识别结果。
系统数据库预先存储特征参数与执行对应操作的操作指令。其中,预存特征参数与执行对应操作的操作指令包括:
采集用于执行某一操作的三维立体图像,从该三维立体图像中提取特征参数,比如需要设置握拳的动作对应执行屏幕滚动的操作,则预先采集一个握拳的三维立体图像,提取该握拳动作的特征参数,特征参数包括该拳头距离传感器的空间三维坐标以及握拳这个动作的各个手指间空间相对运动轨迹,将这些参数与屏幕滚动的操作指令进行绑定存储。在体感交互系统激活状态下采集到三维立体图像并提取得到特征参数后,将提取的特征参数与预存的特征参数进行匹配。
当提取的特征参数与预存的特征参数匹配度达到预定阈值时,即获取与预存的特征参数对应的指令,以作为操作指令。比如采集的三维立体图像为一个握拳的动作,则获取与之对应的滚动屏幕指令。
S103:根据操作指令执行对应的操作。
根据获取的操作指令,执行与操作指令对应的操作。比如获取的是滚动屏幕指令,则控制进行屏幕滚动。
当然,还可以通过预设不同的感应区域,并设置不同感应区不同的动作对应不同的操作。比如设置屏幕左侧预定区域感应区手掌向上运动控制音量调大,手掌向下运动控制音量调小,屏幕右侧预定感应区握拳动作向上移动屏幕亮度调大,握拳动作向下移动控制屏幕亮度调小。当在屏幕左侧预定感应区检测到手掌并且手掌向上运动时将音量调大,手掌向下运动控制音量调小。当在屏幕右侧的预定感应区检测到握拳动作向上移动控制屏幕变亮,当检测到握拳动作向下移动控制屏幕变暗。当然,在这样的设置条件下,如果在屏幕左侧预定区域检测到握拳的动作,或在屏幕右侧预定区域检测到手掌时,不进行响应。
在不设置预定区域对应不同的操作时,可以只设置不同的动作对应不同的操作,整个屏幕都是感应区,只要感应到动作,并且与预设的动作匹配达到预定阈值,即执行与动作对应的操作。
这里所提到预定阈值,是用于衡量匹配程度的阈值。可以根据需要自行设置阈值。比如当对匹配程度要求不高时,可以设定阈值为50%,也就是说,只要采集的三维立体图像所提取的特征参数与预存的特征参数匹配度达到50%或以上,即执行预存特征参数对应的操作。如果要求匹配程度较高才能执行动作,则可以相应将阈值调高,比如设置阈值为90%,则只有匹配度达到90%或以上,才会执行对应的操作。
当停止交互达到预定时间时,体感交互系统锁定,只有通过再次激活才能进行体感交互以此可以防止无意识动作对交互系统进行误操作。这里的感应区域可以是预设的预定空间区域范围,也可以是3D传感器所能采集到信号的整个区域范围。比如可以预设屏幕左侧对应的预定空间范围为感应区域,只有在该感应区域内的动作才进行识别并响应,在该感应区域外的动作不做响应。在不设置感应区域的情况下,默认3D传感器所能采集到信号的整个区域范围为感应区域。
本发明实施例所提供的体感交互方法,能够用于控制屏幕滚动、音量调节、亮度调节以及屏幕滚动速度调节等,当然,也并不局限于此。比如也可以通过体感控制而实现打开应用程序、文档缩放等。即通过采集空间上的手点击动作或手抓握伸缩动作来打开应用或进行文档缩放等等。本发明不一一进行举例说明。
以上本发明实施例提供的体感交互方法,在激活体感交互系统的状态下,采集人体部位的三维立体图像,根据人体部位的三维立体图像进行特征提取获取特征参数,将特征参数与预存的特征参数进行匹配,当匹配度达到预定阈值时,执行与预存的特征参数对应的操作。通过这样的方式,不需要依赖于外部输入设备,通过感应人体部位的空间动作,就能执行相应的操作,给用户更好的使用体验。
请参阅图3,图3是本发明实施例提供的一种体感交互系统激活方法的流程图,本实施例的体感交互系统激活方法包括:
S301:采集三维立体图像;
通过3D传感器采集预定空间范围内的三维立体图像。所采集的三维立体图像包括3D传感器镜头监控范围的所有物体。比如3D传感器镜头前包括桌子、椅子以及人,那么所采集的三维立体图像包括所有的这些物件。
S302:对三维立体图像进行处理,判断三维立体图像是否包含用于激活体感交互系统的人体部位的三维立体图像;
对3D体感器采集的三维立体图形进行处理,判断该三维立体图像中是否包含用于激活体感交互系统的人体部位的三维立体图像。比如预设的用于激活体感交互系统的人体部位为人手,则从采集的三维立体图像中识别是否包括人手。如果三维立体图像中包括用于激活体感交互系统的人体部位的三维立体图像,则执行步骤S303。
S303:对人体部位的三维立体图像进行处理,转化为激活指令;
其中,对人体部位的三维立体图像进行处理,转化为激活指令具体包括:将人体部位的三维立体图像进行特征提取获取特征参数,特征参数包括人体部位的三维坐标以及人体部位的空间运动轨迹,将特征参数与预存的激活体感交互系统的特征参数进行匹配,当匹配度达到预定阈值时,获取与预存的特征参数对应的指令,以作为激活指令。
将采集的人体部位的三维立体图像进行特征提取,获取得到特征参数,特征参数包括位置参数以及动作轨迹参数。位置参数即人体部位所处的空间位置,用三维坐标表示,运动轨迹即人体部位在空间上的运动轨迹。比如一个手掌抓握的动作,其参数提取即包括手掌当前所处的实际三维坐标X、Y、Z的具体数值,以确定手掌与3D传感器的具体位置关系。还包括手掌的空间运动轨迹即抓握的动作轨迹。
在提取得到特征参数以后,将提取的特征参数与预存的用于激活体感交互系统的特征参数进行匹配。
比如通过一个手掌的动作来激活体感交互系统,预存的手掌的动作的特征参数包括A、B、C,当前采集到一个手掌的三维立体图像,提取得到的特征参数包括A'、B'、C',将A'、B'、C'与A、B、C进行匹配,并判断匹配度是否达到预定阈值。
当提取的特征参数与预存的用于激活体感交互系统的特征参数的匹配度达到预定阈值时,获取与预存的特征参数对应的指令,以作为激活指令。预定阈值是预先设置的匹配程度的值,比如可以设置预定阈值为80%,当匹配度达到80%或以上,即获取与预存的特征参数对应的指令,以作为激活指令。
作为一种优选的实现方案,在提取的特征参数与预存的用于激活体感交互系统的特征参数的匹配度达到预定阈值时,系统可以进一步判断所述三维立体图像中包含用于激活体感交互系统的人体部件的三维立体图像所持续的时间是否达到预定时间。在持续的时间达到预定时间时,才获取与预存的特征参数对应的指令,以作为激活指令。通过这样的方式,可以有效防止误触发激活体感交互系统。
比如预设的用于激活体感交行系统的人体部件为手掌。当3D体感器前面用户当前正与另一个人在聊天,过程中可能不经意的会做出一个手掌的动作,系统在采集并识别出这个手掌动作之后,进一步判断立体图像中包含该手掌是否持续预定时间,如果没有持续达到预定时间,可以判断到这只是一个误操作,则不会激活体感交互系统。其中,这里的预定时间可以根据需要预先设定,比如设定为10秒、30秒等等。
作为一种更进一步的优选方案,当系统采集并识别到人体部位且持续未达到预定时间之前,可以在屏幕上显示一个进度条,以提示体感交互系统激活状态。进度条可以实时的,以图片形式显示系统激活的速度,完成度,剩余未完成任务量的大小,和可能需要处理时间。作为一种可能的实现,进度条可以长方形条状显示。当进度条满时,即表示达到激活体感交互系统条件,激活体感交互系统。通过这样的方式,能够让用户对体感交互系统激活状态的有直观的了解,也能够让误操作的用户及时停止手势动作以避免误触发激活体感交互系统。
S304:根据激活指令激活体感交互系统。
根据获取的激活指令,激活体感交互系统,以进入体感交互状态。
其中,在激活体感交互系统之后,可以给用户相应的体感交互系统已被激活的提示。其中,可以通过将屏幕预定区域以高亮状态进行显示作为提示。这里的预定区域可以是预设的体感感应区域对应到屏幕上的平面区域,比如屏幕左侧的一定面积的区域,或屏幕右侧一定面积的区域等。当然,也可以是整个屏幕。
当然,也可以通过其他的方式给用户提示,比如通过弹出体感交互系统已激活的提示,或者通过语音提示等等,本发明对此不作限定。
另外,在体感交互系统被激活后,在屏幕上显示与人体部位同步移动的图标。其中,与人体部位同步移动的图标可以是跟所述人体部位相似的图标,比如人体部位是人手,该图标可以是一个手形状的图标。当然,也可以是其他形式的图标,比如三角图标,圆点图标等。该屏幕上的图标跟随人体部位的移动而在屏幕上对应移动。比如人手在空间上向右移动,图标也跟随在屏幕上向右移动。
本实施例的体感交互系统激活方法,能够通过采集预定空间范围内的三维立体图像,并识别出用于激活体感交互系统的人体部位,以获取激活指令而激活体感交互系统。激活方式灵活方便,给用户很好的激活体验。并且,在激活过程中,能够通过进度条以及持续时间的判断,有效防止误操作,也能让用户对激活进度有直观的了解。
请参阅图4,图4是本发明实施例提供的一种体感交互系统的结构示意图,本实施例的体感交互系统用于执行上述图1所示实施例的体感交互的方法,本实施例的体感交互系统100包括采集模块11、转化模块12以及处理模块13,其中:
采集模块11用于在体感交互系统激活状态下,采集人体部位的三维立体图像;
在本发明实施例中,需要进行体感交互之前,需要先激活体感交互系统。
因此,本实施例的体感交互系统进一步包括一激活模块14,激活模块14用于激活体感交互系统。
其中,请进一步参阅图5,图5是本发明实施例提供的激活模块的结构示意图,如图所示,激活模块14包括采集单元141、判断单元142、转化单元143以及激活单元144,其中:
采集单元141用于采集三维立体图像;
采集单元141通过3D传感器采集预定空间范围内的三维立体图像。所采集的三维立体图像包括3D传感器镜头监控范围的所有物体。比如3D传感器镜头前包括桌子、椅子以及人,那么所采集的三维立体图像包括所有的这些物件。
判断单元142用于对三维立体图像进行处理,判断三维立体图像是否包含用于激活体感交互系统的人体部位的三维立体图像,并将判断结果输出给转化单元143。
判断单元142对3D体感器采集的三维立体图形进行处理,判断该三维立体图像中是否包含用于激活体感交互系统的人体部位的三维立体图像。比如预设的用于激活体感交互系统的人体部位为人手,则从采集的三维立体图像中识别是否包括人手。
转化单元143用于对人体部位的三维立体图像进行处理,转化为激活指令。
其中,对人体部位的三维立体图像进行处理,转化为激活指令具体包括:将人体部位的三维立体图像进行特征提取获取特征参数,特征参数包括人体部位的三维坐标以及人体部位的空间运动轨迹,将特征参数与预存的激活体感交互系统的特征参数进行匹配,当匹配度达到预定阈值时,获取与预存的特征参数对应的指令,以作为激活指令。
将采集的人体部位的三维立体图像进行特征提取,获取得到特征参数,特征参数包括位置参数以及动作轨迹参数。位置参数即人体部位所处的空间位置,用三维坐标表示,运动轨迹即人体部位在空间上的运动轨迹。比如一个手掌抓握的动作,其参数提取即包括手掌当前所处的实际三维坐标X、Y、Z的具体数值,以确定手掌与3D传感器的具体位置关系。还包括手掌的空间运动轨迹即抓握的动作轨迹。
在提取得到特征参数以后,提取的特征参数与预存的用于激活体感交互系统的特征参数进行匹配。
比如通过一个手掌的动作来激活体感交互系统,预存的手掌的动作的特征参数包括A、B、C,当前采集到一个手掌的三维立体图像,提取得到的特征参数包括A'、B'、C',将A'、B'、C'与A、B、C进行匹配,并判断匹配度是否达到预定阈值。
当提取的特征参数与预存的用于激活体感交互系统的特征参数的匹配度达到预定阈值时,获取与预存的特征参数对应的指令,以作为激活指令。预定阈值是预先设置的匹配程度的值,比如可以设置预定阈值为80%,当匹配度达到80%或以上,即获取与预存的特征参数对应的指令,以作为激活指令。
作为一种优选的实现方案,在提取的特征参数与预存的用于激活体感交互系统的特征参数的匹配度达到预定阈值时,系统可以进一步判断所述三维立体图像中包含用于激活体感交互系统的人体部件的三维立体图像所持续的时间是否达到预定时间。在持续的时间达到预定时间时,才获取与预存的特征参数对应的指令,以作为激活指令。通过这样的方式,可以有效防止误触发激活体感交互系统。
比如预设的用于激活体感交行系统的人体部件为手掌。当3D体感器前面用户当前正与另一个人在聊天,过程中可能不经意的会做出一个手掌的动作,系统在采集并识别出这个手掌动作之后,进一步判断立体图像中包含该手掌是否持续预定时间,如果没有持续达到预定时间,可以判断到这只是一个误操作,则不会激活体感交互系统。其中,这里的预定时间可以根据需要预先设定,比如设定为10秒、30秒等等。
作为一种更进一步的优选方案,当系统采集并识别到人体部位且持续未达到预定时间之前,可以在屏幕上显示一个进度条,以提示体感交互系统激活状态。进度条可以实时的,以图片形式显示系统激活的速度,完成度,剩余未完成任务量的大小,和可能需要处理时间。作为一种可能的实现,进度条可以长方形条状显示。当进度条满时,即表示达到激活体感交互系统条件,激活体感交互系统。通过这样的方式,能够让用户对体感交互系统激活状态的有直观的了解,也能够让误操作的用户及时停止手势动作以避免误触发激活体感交互系统。
激活单元144用于根据激活指令激活体感交互系统。
根据获取的激活指令,激活体感交互系统,以进入体感交互状态。
采集模块11在体感交互系统激活状态下,通过3D传感器采集预定空间范围内人体部位的三维立体图像。3D传感器能够采集空间位置上物件的三维立体图像,所采集的三维立体图像包括物件所处的空间位置坐标以及空间运动轨迹。
本发明实施例所述的空间运动轨迹,包括人体部件的姿势以及人体部件的具体动作。比如用户在3D体感器前面做一个握拳的姿势并在空间范围内滑动,那么3D体感器采集该用户手部的三维立体图像,对该手部的三维立体图像进行特征提取,即获取到该手部距离3D传感器的三维坐标,以及该手部的握拳姿势和滑动的动作。其他三维立体图像的处理与此类似,本实施例不一一举例进行说明。
其中,本发明实施例所提到的人体部位可以是人手。当然也可以是其他的用于操作的人体部位比如人脸、人脚等等。
转化模块12用于对人体部位的三维立体图像进行处理,转化为操作指令。
其中,请进一步参阅图6,图6是本发明实施例提供的转化模块12的结构示意图,如图所示,转化模块12还包括特征提取单元121、匹配单元122以及获取单元123,其中:
特征提取单元121用于将人体部位的三维立体图像进行特征提取获取特征参数,特征参数包括人体部位的三维坐标以及人体部位的空间运动轨迹。
特征提取单元121将所采集的人体部位的三维立体图像进行特征提取,以获取该采集的三维立体空间的特征参数,其中,这些特征参数包括该人体部位所处的空间三维坐标以及人体部位的空间运动轨迹。通过特征提取,能够识别人体部位的动作。比如人手做的一个抓握的动作,通过采集该抓握的立体图像,并通过特征提取,就能根据该特征提取的参数识别该动作为一个抓握的动作。
作为一种可能的实现方式,本发明对动作的识别之前,包括一个学习训练以建立一个训练数据库的过程。比如为识别一个人手抓握的动作,系统会采集各种不同的抓握动作的三维立体图像,对这些不同的抓握动作进行学习,以获取用于识别这个具体动作的具体特征参数。针对每个不同的动作,系统都会做这么一个学习训练过程,各种不同具体动作对应的具体特征参数,构成训练数据库。当系统获取到一个三维立体图像时,就会将该立体图像进行特征提取,到训练数据库中找到与之匹配的具体动作,以作为识别结果。
匹配单元122用于将特征参数与预存的特征参数进行匹配;
系统数据库预先存储特征参数与执行对应操作的操作指令。其中,预存特征参数与执行对应操作的操作指令包括:
采集用于执行某一操作的三维立体图像,从该三维立体图像中提取特征参数,比如需要设置握拳的动作对应执行屏幕滚动的操作,则预先采集一个握拳的三维立体图像,提取该握拳动作的特征参数,特征参数包括该拳头距离传感器的空间三维坐标以及握拳这个动作的各个手指间空间相对运动轨迹,将这些参数与屏幕滚动的操作指令进行绑定存储。在体感交互系统激活状态下采集到三维立体图像并提取得到特征参数后,将提取的特征参数与预存的特征参数进行匹配。
获取单元123用于,当匹配度达到预定阈值时,获取与预存的特征参数对应的指令,以作为操作指令。
当提取的特征参数与预存的特征参数匹配度达到预定阈值时,获取单元124即获取与预存的特征参数对应的指令,以作为操作指令。比如采集的三维立体图像为一个握拳的动作,则获取与之对应的滚动屏幕指令。
处理模块13用于根据操作指令执行对应的操作。
根据获取的操作指令,处理模块13控制执行与操作指令对应的操作。比如获取的是滚动屏幕指令,则控制进行屏幕滚动。
当然,还可以通过预设不同的感应区域,并设置不同感应区不同的动作对应不同的操作。比如设置屏幕左侧预定区域感应区手掌向上运动控制音量调大,手掌向下运动控制音量调小,屏幕右侧预定感应区握拳动作向上移动屏幕亮度调大,握拳动作向下移动控制屏幕亮度调小。当在屏幕左侧预定感应区检测到手掌并且手掌向上运动时将音量调大,手掌向下运动控制音量调小。当在屏幕右侧的预定感应区检测到握拳动作向上移动控制屏幕变亮,当检测到握拳动作向下移动控制屏幕变暗。当然,在这样的设置条件下,如果在屏幕左侧预定区域检测到握拳的动作,或在屏幕右侧预定区域检测到手掌时,不进行响应。
在不设置预定区域对应不同的操作时,可以只设置不同的动作对应不同的操作,3D体感器所能采集到立体图像的区域范围都是感应区,只要感应到动作,并且与预设的动作匹配达到预定阈值,即执行与动作对应的操作。
这里所提到预定阈值,是用于衡量匹配程度的阈值。可以根据需要自行设置阈值。比如当对匹配程度要求不高时,可以设定阈值为50%,也就是说,只要采集的三维立体图像所提取的特征参数与预存的特征参数匹配度达到50%或以上,即执行预存特征参数对应的操作。如果要求匹配程度较高才能执行动作,则可以相应将阈值调高,比如设置阈值为90%,则只有匹配度达到90%或以上,才会执行对应的操作。
当停止交互达到预定时间时,体感交互系统锁定,只有通过再次激活才能进行体感交互,以此可以防止无意识动作对交互系统进行误操作。这里的感应区域可以是预设的预定空间区域范围,也可以是3D传感器所能采集到信号的整个区域范围。比如可以预设屏幕左侧对应的预定空间范围为感应区域,只有在该感应区域内的动作才进行识别并响应,在该感应区域外的动作不做响应。在不设置感应区域的情况下,默认3D传感器所能采集到信号的整个区域范围为感应区域。
请继续参阅图4,本实施例的体感交互系统还包括显示模块15,显示模块15用于在体感交互系统被激活后,在屏幕上显示与人体部位同步移动的图标。
其中,与人体部位同步移动的图标可以是跟人体部位相似的图标,比如人体部位是人手,该图标可以是一个手形状的图标。当然,也可以是其他形式的图标,比如三角图标,圆点图标等。该屏幕上的图标跟随人体部位的移动而在屏幕上对应移动。比如人手在空间上向右移动,图标也跟随在屏幕上向右移动。
请继续参阅图4,本实施例的体感交互系统还包括提示模块16,提示模块16用于提示用户体感交互系统被激活。
其中,在激活体感交互系统之后,可以给用户相应的体感交互系统已被激活的提示。其中,可以通过将屏幕预定区域以高亮状态进行显示作为提示。这里的预定区域可以是预设的体感感应区域,比如屏幕左侧的一定面积的区域,或屏幕右侧一定面积的区域等。当然,也可以是整个屏幕。
当然,也可以通过其他的方式给用户提示,比如通过弹出体感交互系统已激活的提示,或者通过语音提示等等,本发明对此不作限定。
本发明实施例所提供的体感交互方法,能够用于控制屏幕滚动、音量调节、亮度调节以及屏幕滚动速度调节等,当然,也并不局限于此。比如也可以通过体感控制而实现打开应用程序、文档缩放等。即通过采集空间上的手点击动作或手抓握伸缩动作来打开应用或进行文档缩放等等。本发明不一一进行举例说明。
请参阅图7,图7是本发明实施例提供的另一种体感交互系统的结构示意图,本实施例的体感交互系统用于执行上述图3所示实施例的体感交互系统的激活方法,如图所示,本实施例的体感交互系统200包括采集模块21、判断模块22、转化模块23以及激活模块24,其中:
采集模块21用于采集三维立体图像;
采集模块21通过3D传感器采集预定空间范围内的三维立体图像。所采集的三维立体图像包括3D传感器镜头监控范围的所有物体。比如3D传感器镜头前包括桌子、椅子以及人,那么所采集的三维立体图像包括所有的这些物件。
判断模块22用于对三维立体图像进行处理,判断三维立体图像是否包含用于激活体感交互系统的人体部位立体图像,并将判断结果输出给转化模块23;
判断模块22对3D体感器采集的三维立体图形进行处理,判断该三维立体图像中是否包含用于激活体感交互系统的人体部位的三维立体图像。比如预设的用于激活体感交互系统的人体部位为人手,则从采集的三维立体图像中识别是否包括人手。
转化模块23用于,当三维立体图像包含用于激活体感交互系统的人体部位立体图像时,对人体部位的三维立体图像进行处理,转化为激活指令;
其中,转化模块23对人体部位的三维立体图像进行处理,转化为激活指令具体包括:将人体部位的三维立体图像进行特征提取获取特征参数,特征参数包括人体部位的三维坐标以及人体部位的空间运动轨迹,将特征参数与预存的激活体感交互系统的特征参数进行匹配,当匹配度达到预定阈值时,获取与预存的特征参数对应的指令,以作为激活指令。
将采集的人体部位的三维立体图像进行特征提取,获取得到特征参数,特征参数包括位置参数以及动作轨迹参数。位置参数即人体部位所处的空间位置,用三维坐标表示,运动轨迹即人体部位在空间上的运动轨迹。比如一个手掌抓握的动作,其参数提取即包括手掌当前所处的实际三维坐标X、Y、Z的具体数值,以确定手掌与3D传感器的具体位置关系。还包括手掌的空间运动轨迹即抓握的动作轨迹。
在提取得到特征参数以后,将提取的特征参数与预存的用于激活体感交互系统的特征参数进行匹配。
比如通过一个手掌的动作来激活体感交互系统,预存的手掌的动作的特征参数包括A、B、C,当前采集到一个手掌的三维立体图像,提取得到的特征参数包括A'、B'、C',将A'、B'、C'与A、B、C进行匹配,并判断匹配度是否达到预定阈值。
当提取的特征参数与预存的用于激活体感交互系统的特征参数的匹配度达到预定阈值时,获取与预存的特征参数对应的指令,以作为激活指令。预定阈值是预先设置的匹配程度的值,比如可以设置预定阈值为80%,当匹配度达到80%或以上,即获取与预存的特征参数对应的指令,以作为激活指令。
作为一种优选的实现方案,在提取的特征参数与预存的用于激活体感交互系统的特征参数的匹配度达到预定阈值时,系统可以进一步判断所述三维立体图像中包含用于激活体感交互系统的人体部件的三维立体图像所持续的时间是否达到预定时间。在持续的时间达到预定时间时,才获取与预存的特征参数对应的指令,以作为激活指令。通过这样的方式,可以有效防止误触发激活体感交互系统。
比如预设的用于激活体感交行系统的人体部件为手掌。当3D体感器前面用户当前正与另一个人在聊天,过程中可能不经意的会做出一个手掌的动作,系统在采集并识别出这个手掌动作之后,进一步判断立体图像中包含该手掌是否持续预定时间,如果没有持续达到预定时间,可以判断到这只是一个误操作,则不会激活体感交互系统。其中,这里的预定时间可以根据需要预先设定,比如设定为10秒、30秒等等。
作为一种更进一步的优选方案,当系统采集并识别到人体部位且持续未达到预定时间之前,可以在屏幕上显示一个进度条,以提示体感交互系统激活状态。进度条可以实时的,以图片形式显示系统激活的速度,完成度,剩余未完成任务量的大小,和可能需要处理时间。作为一种可能的实现,进度条可以长方形条状显示。当进度条满时,即表示达到激活体感交互系统条件,激活体感交互系统。通过这样的方式,能够让用户对体感交互系统激活状态的有直观的了解,也能够让误操作的用户及时停止手势动作以避免误触发激活体感交互系统。
激活模块24根据激活指令激活体感交互系统。
激活模块24根据获取的激活指令,激活体感交互系统,以进入体感交互状态。
其中,请继续参阅图7,本实施例的体感交互系统可以进一步包括提示模块25,提示模块25用于在体感交互系统被激活之后,提示用户所述体感交互系统被激活。
其中,在激活体感交互系统之后,可以给用户相应的体感交互系统已被激活的提示。其中,可以通过将屏幕预定区域以高亮状态进行显示作为提示。这里的预定区域可以是预设的体感感应区域对应到屏幕上的平面区域,比如屏幕左侧的一定面积的区域,或屏幕右侧一定面积的区域等。当然,也可以是整个屏幕。
当然,也可以通过其他的方式给用户提示,比如通过弹出体感交互系统已激活的提示,或者通过语音提示等等,本发明对此不作限定。
请继续参阅图7,本实施例的体感交互系统还可以进一步包括显示模块26,显示模块26用于在体感交互系统被激活后,在屏幕上显示与人体部位同步移动的图标。
其中,与人体部位同步移动的图标可以是跟所述人体部位相似的图标,比如人体部位是人手,该图标可以是一个手形状的图标。当然,也可以是其他形式的图标,比如三角图标,圆点图标等。该屏幕上的图标跟随人体部位的移动而在屏幕上对应移动。比如人手在空间上向右移动,图标也跟随在屏幕上向右移动。
本实施例提供的体感交互系统,能够通过采集预定空间范围内的三维立体图像,并识别出用于激活体感交互系统的人体部位,以获取激活指令而激活体感交互系统。激活方式灵活方便,给用户很好的激活体验。并且,在激活过程中,能够通过进度条以及持续时间的判断,有效防止误操作,也能让用户对激活进度有直观的了解。
在本发明实施例所提供的体感交互系统的基础上,本发明实施例进一步提供一种电子设备,该电子设备包括上述实施例所述的体感交互系统。其中,电子设备可以但不限于是智能电视、智能手机、平板电脑、笔记本电脑等。
上述本发明实施例提供的体感交互的方法及系统,通过采集空间范围内的三维立体图像,并进行图像特征提取和匹配从而响应预定的操作。使得人机交互不再依赖于特定的输入设备,不需要用户与输入输出设备接触也能实现对智能设备的控制,从而使人机交互变得更加真实、更加方便,给用户更好的使用体验。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述仅为本申请的实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (14)

  1. A somatosensory interaction method, characterized in that the method comprises:
    collecting a three-dimensional stereoscopic image of a human body part while the somatosensory interaction system is in an activated state;
    processing the three-dimensional stereoscopic image of the human body part and converting it into an operation instruction;
    performing a corresponding operation according to the operation instruction.
  2. The method according to claim 1, characterized in that processing the three-dimensional stereoscopic image of the human body part and converting it into an operation instruction comprises:
    performing feature extraction on the three-dimensional stereoscopic image of the human body part to obtain feature parameters, the feature parameters comprising the three-dimensional coordinates of the human body part and the spatial motion trajectory of the human body part;
    matching the feature parameters with pre-stored feature parameters;
    when the matching degree reaches a predetermined threshold, acquiring an instruction corresponding to the pre-stored feature parameters as the operation instruction.
  3. The method according to claim 1, characterized in that the method further comprises:
    activating the somatosensory interaction system.
  4. The method according to claim 3, characterized in that activating the somatosensory interaction system comprises:
    collecting a three-dimensional stereoscopic image;
    processing the three-dimensional stereoscopic image and determining whether the three-dimensional stereoscopic image contains a three-dimensional stereoscopic image of a human body part used to activate the somatosensory interaction system;
    when the three-dimensional stereoscopic image contains a stereoscopic image of a human body part used to activate the somatosensory interaction system, processing the three-dimensional stereoscopic image of the human body part and converting it into an activation instruction;
    activating the somatosensory interaction system according to the activation instruction.
  5. The method according to claim 4, characterized in that processing the three-dimensional stereoscopic image of the human body part and converting it into an activation instruction comprises:
    performing feature extraction on the three-dimensional stereoscopic image of the human body part to obtain feature parameters, the feature parameters comprising the three-dimensional coordinates of the human body part and the spatial motion trajectory of the human body part;
    matching the feature parameters with pre-stored feature parameters for activating the somatosensory interaction system;
    when the matching degree reaches a predetermined threshold, acquiring an instruction corresponding to the pre-stored feature parameters as the activation instruction.
  6. The method according to claim 3, characterized in that after activating the somatosensory interaction system, the method further comprises:
    displaying, on a screen, an icon that moves synchronously with the human body part.
  7. The method according to claim 3, characterized in that after activating the somatosensory interaction system, the method further comprises:
    prompting the user that the somatosensory interaction system has been activated.
  8. A somatosensory interaction system activation method, characterized in that the method comprises:
    collecting a three-dimensional stereoscopic image;
    processing the three-dimensional stereoscopic image and determining whether the three-dimensional stereoscopic image contains a stereoscopic image of a human body part used to activate the somatosensory interaction system;
    when the three-dimensional stereoscopic image contains a stereoscopic image of a human body part used to activate the somatosensory interaction system, processing the three-dimensional stereoscopic image of the human body part and converting it into an activation instruction;
    activating the somatosensory interaction system according to the activation instruction.
  9. A somatosensory interaction system, characterized in that the system comprises a collection module, a conversion module and a processing module, wherein:
    the collection module is configured to collect a three-dimensional stereoscopic image of a human body part while the somatosensory interaction system is in an activated state;
    the conversion module is configured to process the three-dimensional stereoscopic image of the human body part and convert it into an operation instruction;
    the processing module is configured to perform a corresponding operation according to the operation instruction.
  10. The system according to claim 9, characterized in that the conversion module comprises a feature extraction unit, a matching unit and an acquisition unit, wherein:
    the feature extraction unit is configured to perform feature extraction on the three-dimensional stereoscopic image of the human body part to obtain feature parameters, the feature parameters comprising the three-dimensional coordinates of the human body part and the spatial motion trajectory of the human body part;
    the matching unit is configured to match the feature parameters with pre-stored feature parameters;
    the acquisition unit is configured to, when the matching degree reaches a predetermined threshold, acquire an instruction corresponding to the pre-stored feature parameters as the operation instruction.
  11. The system according to claim 9, characterized in that the system further comprises an activation module, the activation module being configured to activate the somatosensory interaction system.
  12. The system according to claim 11, characterized in that the activation module comprises a collection unit, a judgment unit, a conversion unit and an activation unit, wherein:
    the collection unit is configured to collect a three-dimensional stereoscopic image;
    the judgment unit is configured to process the three-dimensional stereoscopic image and determine whether the three-dimensional stereoscopic image contains a stereoscopic image of a human body part used to activate the somatosensory interaction system;
    the conversion unit is configured to, when the three-dimensional stereoscopic image contains a stereoscopic image of a human body part used to activate the somatosensory interaction system, process the three-dimensional stereoscopic image of the human body part and convert it into an activation instruction;
    the activation unit is configured to activate the somatosensory interaction system according to the activation instruction.
  13. The system according to claim 11, characterized in that the system further comprises a display module, the display module being configured to display, on a screen, an icon that moves synchronously with the human body part.
  14. The system according to claim 11, characterized in that the system further comprises a prompting module, the prompting module being configured to prompt the user that the somatosensory interaction system has been activated.
PCT/CN2016/076765 2015-06-05 2016-03-18 一种体感交互系统激活方法、体感交互方法及系统 WO2016192438A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510307196.3A CN104881122B (zh) 2015-05-29 2015-06-05 一种体感交互系统激活方法、体感交互方法及系统
CN201510307196.3 2015-06-05

Publications (1)

Publication Number Publication Date
WO2016192438A1 true WO2016192438A1 (zh) 2016-12-08

Family

ID=60022898

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/076765 WO2016192438A1 (zh) 2015-06-05 2016-03-18 一种体感交互系统激活方法、体感交互方法及系统

Country Status (2)

Country Link
CN (1) CN104881122B (zh)
WO (1) WO2016192438A1 (zh)

Cited By (1)

Publication number Priority date Publication date Assignee Title
CN111282261A (zh) * 2020-01-22 2020-06-16 京东方科技集团股份有限公司 人机交互方法及装置、体感游戏设备

Families Citing this family (9)

Publication number Priority date Publication date Assignee Title
CN104881122B (zh) * 2015-05-29 2018-10-09 深圳奥比中光科技有限公司 一种体感交互系统激活方法、体感交互方法及系统
WO2016192440A1 (zh) * 2015-06-05 2016-12-08 深圳奥比中光科技有限公司 一种体感控制参数调整的方法、体感交互系统及电子设备
CN106933342A (zh) * 2015-12-31 2017-07-07 北京数码视讯科技股份有限公司 体感系统、体感控制设备以及智能电子设备
CN107450717B (zh) * 2016-05-31 2021-05-18 联想(北京)有限公司 一种信息处理方法及穿戴式设备
CN106933352A (zh) * 2017-02-14 2017-07-07 深圳奥比中光科技有限公司 三维人体测量方法和其设备及其计算机可读存储介质
CN107920203A (zh) * 2017-11-23 2018-04-17 乐蜜有限公司 图像采集方法、装置和电子设备
CN108153421B (zh) * 2017-12-25 2021-10-01 深圳Tcl新技术有限公司 体感交互方法、装置及计算机可读存储介质
CN110505405A (zh) * 2019-08-22 2019-11-26 上海乂学教育科技有限公司 基于体感技术的视频拍摄系统及方法
CN113849065A (zh) * 2021-09-17 2021-12-28 支付宝(杭州)信息技术有限公司 一种利用健身动作触发客户端操作指令的方法及装置

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102043320A (zh) * 2009-10-21 2011-05-04 陕西金合泰克信息科技发展有限公司 空中红外翻页影像书及其红外翻页方法
CN103246351A (zh) * 2013-05-23 2013-08-14 刘广松 一种用户交互系统和方法
CN104881122A (zh) * 2015-05-29 2015-09-02 深圳奥比中光科技有限公司 一种体感交互系统激活方法、体感交互方法及系统
CN104915004A (zh) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 一种体感控制屏幕滚动方法、体感交互系统及电子设备
CN104915003A (zh) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 一种体感控制参数调整的方法、体感交互系统及电子设备

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN103777748A (zh) * 2012-10-26 2014-05-07 华为技术有限公司 一种体感输入方法及装置
CN203950270U (zh) * 2014-01-22 2014-11-19 南京信息工程大学 体感识别装置及通过其控制鼠标键盘操作的人机交互系统
CN104182132B (zh) * 2014-08-07 2017-11-14 天津三星电子有限公司 一种智能终端手势控制方法及智能终端

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN102043320A (zh) * 2009-10-21 2011-05-04 陕西金合泰克信息科技发展有限公司 空中红外翻页影像书及其红外翻页方法
CN103246351A (zh) * 2013-05-23 2013-08-14 刘广松 一种用户交互系统和方法
CN104881122A (zh) * 2015-05-29 2015-09-02 深圳奥比中光科技有限公司 一种体感交互系统激活方法、体感交互方法及系统
CN104915004A (zh) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 一种体感控制屏幕滚动方法、体感交互系统及电子设备
CN104915003A (zh) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 一种体感控制参数调整的方法、体感交互系统及电子设备

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN111282261A (zh) * 2020-01-22 2020-06-16 京东方科技集团股份有限公司 人机交互方法及装置、体感游戏设备
CN111282261B (zh) * 2020-01-22 2023-08-08 京东方科技集团股份有限公司 人机交互方法及装置、体感游戏设备

Also Published As

Publication number Publication date
CN104881122A (zh) 2015-09-02
CN104881122B (zh) 2018-10-09

Similar Documents

Publication Publication Date Title
WO2016192438A1 (zh) 一种体感交互系统激活方法、体感交互方法及系统
WO2017039308A1 (en) Virtual reality display apparatus and display method thereof
WO2017119664A1 (en) Display apparatus and control methods thereof
WO2018009029A1 (en) Electronic device and operating method thereof
EP3281058A1 (en) Virtual reality display apparatus and display method thereof
WO2014029170A1 (zh) 电容和电磁双模触摸屏的触控方法及手持式电子设备
WO2014112777A1 (en) Method for providing haptic effect in portable terminal, machine-readable storage medium, and portable terminal
WO2011025239A2 (ko) 모션을 이용한 ui 제공방법 및 이를 적용한 디바이스
WO2012070682A1 (ja) 入力装置及び入力装置の制御方法
WO2016182181A1 (ko) 웨어러블 디바이스 및 웨어러블 디바이스의 피드백 제공 방법
WO2017119745A1 (en) Electronic device and control method thereof
WO2020159302A1 (ko) 증강 현실 환경에서 다양한 기능을 수행하는 전자 장치 및 그 동작 방법
WO2014137176A1 (en) Input apparatus, display apparatus, and control methods thereof
WO2017126741A1 (ko) Hmd 디바이스 및 그 제어 방법
WO2017067291A1 (zh) 一种指纹识别的方法、装置及终端
WO2017099314A1 (ko) 사용자 정보를 제공하는 전자 장치 및 방법
WO2016060461A1 (en) Wearable device
WO2020076055A1 (en) Electronic device including pen input device and method of operating the same
WO2018143509A1 (ko) 이동 로봇 및 그 제어방법
WO2018124823A1 (en) Display apparatus and controlling method thereof
WO2020138602A1 (ko) 진정 사용자의 손을 식별하는 방법 및 이를 위한 웨어러블 기기
WO2017032061A1 (zh) 一种应用程序启动方法、智能手表及存储介质
WO2015100911A1 (zh) 医疗设备及其数字图像的分辨率调整方法、装置
EP3087752A1 (en) User terminal apparatus, electronic apparatus, system, and control method thereof
WO2017173841A1 (zh) 一种基于触摸屏的电子书自动滚屏控制方法及移动终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16802361

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16802361

Country of ref document: EP

Kind code of ref document: A1