WO2016192439A1 - Somatosensory screen-scrolling control method, somatosensory interaction system, and electronic device - Google Patents

Somatosensory screen-scrolling control method, somatosensory interaction system, and electronic device

Info

Publication number
WO2016192439A1
Authority
WO
WIPO (PCT)
Prior art keywords
body part
human body
screen
sensing area
predetermined
Prior art date
Application number
PCT/CN2016/076776
Other languages
English (en)
French (fr)
Inventor
黄源浩
肖振中
钟亮洪
许宏淮
林靖雄
Original Assignee
深圳奥比中光科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201510307214.8A external-priority patent/CN104915004A/zh
Application filed by 深圳奥比中光科技有限公司
Publication of WO2016192439A1 publication Critical patent/WO2016192439A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the invention relates to the field of human-computer interaction, and particularly relates to a somatosensory control screen scrolling method, a somatosensory interaction system and an electronic device.
  • Human-computer interaction technology refers to the technology of realizing human-machine dialogue in an efficient manner through input and output devices.
  • in existing human-computer interaction, the user usually interacts with the machine through external devices such as a mouse, a keyboard, a touch screen, or a handheld controller, and the machine system responds accordingly.
  • for example, screen scrolling is realized by dragging with a mouse or by sliding a body part such as a finger across the touch screen.
  • existing ways of controlling screen scrolling all require direct contact with an input device such as a mouse or touch screen, so the user's scrolling operations depend on an external device; this constrains the user's behavior and makes the interaction feel unnatural and unrealistic.
  • the technical problem to be solved by the present invention is to provide a somatosensory control screen scrolling method, a somatosensory interaction system, and an electronic device, which can control the screen to scroll by sensing the action of the human body part without relying on an external input device.
  • a technical solution adopted by the present invention is a somatosensory method for controlling screen scrolling, the method comprising: collecting a three-dimensional image of a human body part while the somatosensory interaction system is activated; performing feature extraction on the three-dimensional image to acquire feature parameters, the feature parameters including the three-dimensional coordinates of the body part and the motion trajectory of the body part; determining, from the three-dimensional coordinates, whether the body part has entered a predetermined sensing area; and, when the body part enters the predetermined sensing area, controlling the screen to scroll according to a predetermined rule based on the motion trajectory of the body part.
  • the predetermined sensing area is divided into a left-side, right-side, upper-side, and lower-side sensing area of the screen, each at a predetermined distance greater than 0 from the center point of the screen;
  • controlling the screen to scroll according to the motion trajectory of the body part under the predetermined rule includes: controlling the screen to scroll, continuously or discretely, toward the predetermined sensing area according to the motion trajectory of the body part.
  • the method further comprises: when the human body part enters the predetermined sensing area, displaying an icon moving synchronously with the human body part on the screen.
  • the method further comprises: when the human body part enters the predetermined sensing area, the screen displays a corresponding prompt.
  • the human body part is a human hand.
  • another technical solution adopted by the present invention is a somatosensory interaction system that includes an acquisition module, a feature extraction module, a judgment module, and a control module, wherein: the acquisition module is configured to collect a three-dimensional image of a human body part while the somatosensory interaction system is activated; the feature extraction module is configured to perform feature extraction on the three-dimensional image to acquire feature parameters, the feature parameters including the three-dimensional coordinates and the motion trajectory of the body part; the judgment module is configured to determine, from the three-dimensional coordinates, whether the body part has entered a predetermined sensing area; and the control module is configured to control the screen, when the body part enters the predetermined sensing area, to scroll according to a predetermined rule based on the motion trajectory of the body part.
  • the predetermined sensing area is divided into a left-side, right-side, upper-side, and lower-side sensing area of the screen, each at a predetermined distance greater than 0 from the center point of the screen;
  • the control module is configured to control the screen to scroll, continuously or discretely, toward the predetermined sensing area according to the motion trajectory of the body part.
  • the system further includes a display module, wherein the display module is configured to display a corresponding prompt on the screen when the human body part enters the predetermined sensing area.
  • the display module is further configured to highlight the planar region of the screen corresponding to the predetermined sensing area when the body part enters the predetermined sensing area.
  • still another technical solution provided by the present invention is an electronic device that includes a somatosensory interaction system, the somatosensory interaction system including an acquisition module, a feature extraction module, a judgment module, and a control module.
  • the acquisition module is configured to collect a three-dimensional image of a human body part while the somatosensory interaction system is activated; the feature extraction module is configured to perform feature extraction on the three-dimensional image to acquire feature parameters, the feature parameters including the three-dimensional coordinates and the motion trajectory of the body part;
  • the judgment module is configured to determine, from the three-dimensional coordinates, whether the body part has entered a predetermined sensing area; and the control module is configured to control the screen, when the body part enters the predetermined sensing area, to scroll according to a predetermined rule based on the motion trajectory of the body part.
  • the predetermined sensing area is divided into a left-side, right-side, upper-side, and lower-side sensing area of the screen, each at a predetermined distance greater than 0 from the center point of the screen;
  • the control module is configured to control the screen to scroll, continuously or discretely, toward the predetermined sensing area according to the motion trajectory of the body part.
  • the system further includes a display module, and the display module is configured to display an icon moving synchronously with the human body part on the screen when the human body part enters the predetermined sensing area.
  • the display module is further configured to display a corresponding prompt on the screen when the human body part enters the predetermined sensing area.
  • the invention has the following beneficial effects: unlike the prior art, a three-dimensional image of a human body part is collected while the somatosensory interaction system is activated, and feature extraction is performed on that image to obtain feature parameters comprising the three-dimensional coordinates and the motion trajectory of the body part. From the three-dimensional coordinates it is determined whether the body part has entered a predetermined sensing area; when it has, the screen is controlled to scroll according to a predetermined rule based on the motion trajectory. In this way, screen scrolling can be controlled by sensing the spatial motion of a body part, without relying on an external input device, giving the user a better experience.
  • FIG. 1 is a flowchart of a somatosensory screen-scrolling control method according to an embodiment of the present invention;
  • FIG. 2 is a flowchart of activating the somatosensory interaction system according to an embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of a somatosensory interaction system according to an embodiment of the present invention;
  • FIG. 4 is a schematic structural diagram of another somatosensory interaction system according to an embodiment of the present invention;
  • FIG. 5 is a schematic structural diagram of an activation module of a somatosensory interaction system according to an embodiment of the present invention;
  • FIG. 6 is a schematic diagram of an operation of controlling screen scrolling through somatosensory interaction according to an embodiment of the present invention;
  • FIG. 7 is a schematic diagram of another operation of controlling screen scrolling through somatosensory interaction according to an embodiment of the present invention.
  • please refer to FIG. 1, which shows a somatosensory screen-scrolling control method according to an embodiment of the present invention. As shown in the figure, the method of this embodiment includes:
  • S101: with the somatosensory interaction system activated, collect a three-dimensional image of a human body part;
  • in this embodiment of the invention, the somatosensory interaction system must be activated before somatosensory interaction can take place.
  • please refer further to FIG. 2, a flowchart of activating the somatosensory interaction system according to an embodiment of the present invention. As shown in the figure, activating the somatosensory interaction system includes the following steps:
  • S201: a three-dimensional image within a predetermined spatial range is acquired by a 3D sensor.
  • the acquired image includes every object within the 3D sensor's field of view.
  • for example, if a table, a chair, and a person are in front of the 3D sensor, the acquired image includes all of these objects.
  • S202: the three-dimensional image is processed to determine whether it contains a three-dimensional image of the human body part used to activate the somatosensory interaction system;
  • the image acquired by the 3D sensor is processed accordingly; for example, if the preset activation body part is a human hand, the system checks whether a human hand appears in the acquired image. If the image does contain a three-dimensional image of the activation body part, step S203 is performed.
  • S203: the three-dimensional image of the body part is processed and converted into an activation instruction;
  • converting the three-dimensional image of the body part into an activation instruction specifically includes: performing feature extraction on the image to obtain feature parameters, the feature parameters including the three-dimensional coordinates and the spatial motion trajectory of the body part; matching the feature parameters against the pre-stored feature parameters for activating the somatosensory interaction system; and, when the matching degree reaches a predetermined threshold, retrieving the instruction corresponding to the pre-stored parameters as the activation instruction.
  • feature extraction is performed on the collected three-dimensional image of the body part to obtain feature parameters comprising a position parameter and a motion-trajectory parameter.
  • the position parameter is the spatial position of the body part, expressed in three-dimensional coordinates; the motion trajectory is the path of the body part through space.
  • for a palm-grip action, for example, the extracted parameters include the palm's actual three-dimensional coordinates X, Y, Z, which fix the positional relationship between the palm and the 3D sensor, as well as the palm's spatial motion trajectory, i.e., the trajectory of the gripping action.
  • after extraction, the feature parameters are matched against the pre-stored feature parameters for activating the somatosensory interaction system.
  • suppose, for example, that the system is activated by a palm gesture whose pre-stored feature parameters are A, B, and C. When a three-dimensional image of a palm is collected and the extracted feature parameters are A', B', and C', then A', B', C' are matched against A, B, C, and it is judged whether the matching degree reaches a predetermined threshold.
  • when the matching degree reaches the predetermined threshold, the instruction corresponding to the pre-stored feature parameters is retrieved as the activation instruction. The predetermined threshold is a preset matching-degree value; it may be set to 80%, for instance, so that activation occurs once the matching degree reaches 80% or more.
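  • to make this matching step concrete, the following is a minimal sketch, assuming the feature parameters are encoded as fixed-length numeric vectors and using cosine similarity as the matching measure; the patent specifies neither the encoding nor the measure, and all names below are illustrative.

```python
import numpy as np

# Pre-stored feature parameters (A, B, C) of the activation gesture.
# The numeric encoding is an assumption made for illustration only.
STORED_FEATURES = np.array([0.8, 0.1, 0.5])
MATCH_THRESHOLD = 0.80  # the 80% matching-degree threshold from the text

def matching_degree(extracted: np.ndarray, stored: np.ndarray) -> float:
    """Cosine similarity, standing in for the unspecified matching measure."""
    denom = float(np.linalg.norm(extracted) * np.linalg.norm(stored))
    return float(extracted @ stored) / denom if denom else 0.0

def is_activation_gesture(extracted: np.ndarray) -> bool:
    """True when the extracted parameters (A', B', C') match A, B, C."""
    return matching_degree(extracted, STORED_FEATURES) >= MATCH_THRESHOLD

# Features extracted from a captured palm image (illustrative values).
print(is_activation_gesture(np.array([0.75, 0.15, 0.52])))  # True
```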
  • as a preferred implementation, when the matching degree reaches the predetermined threshold, the system may further determine whether the three-dimensional image of the activation body part persists in the captured images for a predetermined time, and retrieve the activation instruction only when it does. This effectively prevents false triggering of the somatosensory interaction system.
  • suppose, for example, that the preset body part for activating the somatosensory interaction system is the palm. A user chatting with another person in front of the 3D sensor may inadvertently make a palm gesture. After collecting and recognizing the gesture, the system further checks whether the palm remains in the image for the predetermined time; if it does not, the system concludes that this was merely a misoperation and does not activate.
  • the predetermined time can be preset as needed, for example to 10 seconds or 30 seconds.
  • as a further refinement, while the system has recognized the body part but the predetermined time has not yet elapsed, a progress bar may be displayed on the screen to indicate the activation state of the somatosensory interaction system.
  • the progress bar can display, in real time, the activation speed, the degree of completion, the remaining work, and the estimated processing time.
  • the progress bar can be rendered as a rectangular strip; when it is full, the activation condition is met and the somatosensory interaction system is activated. This gives the user an intuitive view of the activation state and also lets a user who triggered activation by mistake stop the gesture in time, avoiding false triggering.
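  • a small sketch of this persistence check, assuming per-frame gesture detections and wall-clock timestamps; the class and method names are invented for illustration.

```python
class ActivationGate:
    """Requires the activation gesture to persist for a predetermined time
    before activating, and exposes the fill level for a progress bar."""

    def __init__(self, required_seconds: float = 10.0):  # e.g. 10 s, as in the text
        self.required = required_seconds
        self._start = None  # timestamp when the gesture was first seen

    def update(self, gesture_present: bool, now: float) -> bool:
        """Feed one frame's detection result; returns True once activated."""
        if not gesture_present:
            self._start = None  # gesture dropped early: treated as a misoperation
            return False
        if self._start is None:
            self._start = now
        return (now - self._start) >= self.required

    def progress(self, now: float) -> float:
        """0.0-1.0 fill level for the on-screen rectangular progress bar."""
        if self._start is None:
            return 0.0
        return min(1.0, (now - self._start) / self.required)
```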
  • S204: the somatosensory interaction system is activated according to the activation instruction, entering the somatosensory interaction state.
  • after activation, the user may be given a prompt that the somatosensory interaction system has been activated.
  • for example, a predetermined area of the screen may be displayed in a highlighted state as the prompt.
  • the predetermined area here may be the planar region of the screen corresponding to the preset sensing area, such as an area of a certain size on the left or right side of the screen; it may, of course, also be the entire screen.
  • the user may also be prompted by other means, for example a pop-up message that the somatosensory interaction system has been activated, or a voice prompt; the present invention does not limit this.
  • in addition, after the somatosensory interaction system is activated, an icon that moves in synchronization with the human body part is displayed on the screen.
  • the icon that moves synchronously with the human body part may be an icon similar to the human body part, for example, the human body part is a human hand, and the icon may be a hand shape icon. Of course, it can be other forms of icons, such as triangle icons, dot icons, and so on.
  • the icon on the screen follows the movement of the body part and moves correspondingly on the screen. For example, the human hand moves to the right in space, and the icon also moves to the right along the screen.
  • a three-dimensional stereoscopic image of the human body part within a predetermined spatial range is acquired by the 3D infrared sensor.
  • the 3D infrared sensor can collect three-dimensional images of objects in space; the acquired image includes the spatial position coordinates and the spatial motion trajectory of each object.
  • the spatial motion trajectory described in the embodiments of the present invention includes both the posture of the body part and its specific action. For example, if the user makes a fist in front of the 3D sensor and slides it through space, the sensor captures a three-dimensional image of the hand, and feature extraction yields the hand's three-dimensional coordinates relative to the sensor together with the fist posture and the sliding action. Other three-dimensional images are processed similarly and are not described one by one in this embodiment.
  • the human body part mentioned in the embodiment of the present invention may be a human hand. Of course, it can also be other human body parts for operation such as a human face, a human foot, and the like.
  • S102 Perform feature extraction on a three-dimensional image of a human body part to obtain a feature parameter, where the feature parameter includes a three-dimensional coordinate of the human body part and a spatial motion track of the human body part;
  • feature extraction is performed on the collected three-dimensional image of the body part to obtain the feature parameters of the captured scene, where the feature parameters include the spatial three-dimensional coordinates of the body part and its spatial motion trajectory.
  • through feature extraction, it is possible to recognize the specific spatial position of the body part relative to the 3D sensor and the action the body part performs. For a hand grip, for example, collecting the three-dimensional image of the grip and extracting its features makes it possible to determine the hand's specific spatial position and to recognize the motion as a gripping action.
  • as one possible implementation, before recognizing actions the present invention includes a learning and training process to build a training database. For example, to recognize a hand-grip action, the system collects three-dimensional images of many different grips and learns from them the specific feature parameters that identify this action. The system performs such a training process for every distinct action, and the feature parameters corresponding to the various actions make up the training database. When the system later acquires a three-dimensional image, it extracts the image's features and looks up the matching action in the training database as the recognition result.
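  • as an illustration of such a lookup, here is a minimal sketch in which the training database maps action names to learned feature vectors and recognition returns the closest match above a similarity floor; the vector representation and the 0.8 floor are assumptions, not taken from the patent.

```python
import numpy as np

# Training database: learned feature parameters per specific action
# (illustrative vectors standing in for the learned parameters).
TRAINING_DB = {
    "grip":        np.array([0.9, 0.2, 0.1]),
    "push_up":     np.array([0.1, 0.9, 0.3]),
    "slide_right": np.array([0.2, 0.1, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b) / float(np.linalg.norm(a) * np.linalg.norm(b))

def recognize(features: np.ndarray, floor: float = 0.8):
    """Return the best-matching learned action, or None if nothing matches."""
    best = max(TRAINING_DB, key=lambda name: cosine(features, TRAINING_DB[name]))
    return best if cosine(features, TRAINING_DB[best]) >= floor else None

print(recognize(np.array([0.85, 0.25, 0.15])))  # "grip"
```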
  • S103 determining, according to the three-dimensional coordinates of the human body part, whether the human body part enters the predetermined sensing area;
  • the predetermined sensing area here may be a preset spatial range for sensing the screen scrolling action and responding.
  • when the body part enters the predetermined sensing area, the screen displays a corresponding prompt.
  • as one possible implementation, the predetermined sensing area may be displayed in a highlighted state to prompt the user to perform control actions within it.
  • the predetermined sensing area can also prompt the user in other ways, such as accentuated display, flashing, or stripes in the display area.
  • if the three-dimensional coordinates of the body part among the feature parameters fall within the predetermined sensing area, step S104 is performed; otherwise, no response is made.
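  • a minimal sketch of this hit test, assuming each sensing area is an axis-aligned 3D box in the sensor's coordinate frame; the patent leaves the shape and extent of the areas open, so the boxes and coordinates below are illustrative only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Region:
    """Axis-aligned 3D box (an assumed shape for a sensing area)."""
    xmin: float
    xmax: float
    ymin: float
    ymax: float
    zmin: float
    zmax: float

    def contains(self, x: float, y: float, z: float) -> bool:
        return (self.xmin <= x <= self.xmax
                and self.ymin <= y <= self.ymax
                and self.zmin <= z <= self.zmax)

# Left and right sensing areas, each offset from the screen centre line
# by a distance greater than 0 (coordinates in metres, illustrative).
SENSING_AREAS = {
    "left":  Region(-0.8, -0.3, -0.3, 0.3, 0.5, 1.5),
    "right": Region( 0.3,  0.8, -0.3, 0.3, 0.5, 1.5),
}

def sensing_area_of(x: float, y: float, z: float) -> Optional[str]:
    """Name of the sensing area the body part is in, or None (no response)."""
    for name, region in SENSING_AREAS.items():
        if region.contains(x, y, z):
            return name
    return None
```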
  • S104: the screen is controlled to scroll according to a predetermined rule based on the motion trajectory of the body part;
  • when the body part enters the predetermined sensing area, the screen is controlled to scroll according to the predetermined rule based on the spatial motion trajectory among the extracted feature parameters.
  • controlling the screen to scroll according to the spatial motion trajectory under the predetermined rule specifically includes: matching the spatial motion trajectory of the body part against pre-stored trajectories for controlling screen scrolling, and, when the match reaches a predetermined matching threshold, controlling the screen to scroll according to the predetermined rule.
  • for example, an upward palm push may be preset to correspond to upward scrolling and a downward palm push to downward scrolling; when an upward push is captured, the screen is controlled to scroll up, and when a downward push is captured, the screen is controlled to scroll down.
  • when the body part has been out of the predetermined sensing area for a predetermined time, the somatosensory interaction system enters a locked state; scrolling interaction can resume only after the system is activated again.
  • to further control the scrolling direction, the predetermined sensing area is divided into a left-side, right-side, upper-side, or lower-side sensing area of the screen, each at a predetermined distance greater than 0 from the center point of the screen.
  • for example, the spatial regions corresponding to a predetermined area on the left of the screen and a predetermined area on the right may be set as the sensing areas; as long as the body part enters the left-side sensing area and performs the preset scrolling action, the screen is controlled to scroll to the left, and if the body part enters the right-side sensing area, the screen is controlled to scroll to the right.
  • configurations using other regions as sensing areas are implemented similarly and are not described one by one in this embodiment.
  • controlling the screen to scroll according to the motion trajectory of the body part under the predetermined rule includes: controlling the screen to scroll, continuously or discretely, toward the predetermined sensing area according to the motion trajectory of the body part.
  • the continuous mode means the screen content scrolls smoothly and can stop at any position, while the discrete mode means the content scrolls in steps, one full screen at a time, so the content always comes to rest at a fixed position.
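  • the difference between the two modes can be sketched as a per-frame scroll update; the function below is illustrative, assuming a scroll offset in pixels, and omits end-of-content clamping and the dwell pause between discrete pages.

```python
def scroll_step(mode: str, offset: float, page_height: float,
                in_area: bool, speed: float, dt: float) -> float:
    """Advance the scroll offset by one frame.

    mode: "continuous" scrolls smoothly and can stop anywhere;
          "discrete" jumps to the next full-screen boundary.
    """
    if not in_area:
        return offset                  # body part left the area: stop scrolling
    if mode == "continuous":
        return offset + speed * dt     # smooth motion toward the sensing area
    # Discrete: snap to the next whole page (one full screen of content).
    return (offset // page_height + 1) * page_height
```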
  • as long as the body part remains in the predetermined sensing area, the screen keeps scrolling toward that area according to the predetermined rule until there is no more content to display; if the body part leaves the sensing area midway, scrolling stops and the content currently on screen remains displayed.
  • take a picture-browsing interface as an example. When an action matching the preset scrolling action is captured: if continuous scrolling is preset, the pictures scroll smoothly toward the predetermined sensing area, stopping when the body part leaves the area and displaying the picture reached at that moment; if the body part does not leave, the screen scrolls until no pictures remain. If discrete scrolling is preset, the screen scrolls one full screen of pictures toward the sensing area, pauses for a predetermined time, then scrolls the next full screen, and so on; if the body part leaves the sensing area, the currently displayed interface remains, and otherwise scrolling continues one full screen at a time until no pictures remain.
  • alternatively, instead of defining a single predetermined rule, different preset actions may map to different scrolling modes. For example, with the predetermined spatial regions on the left and right of the screen preset as sensing areas and a palm slide preset as the scrolling action, each single slide of the palm may trigger one discrete screen scroll, while a continuous palm slide triggers continuous scrolling.
  • please refer to FIG. 6, a schematic diagram of an operation of controlling screen scrolling through somatosensory interaction according to an embodiment of the present invention.
  • as shown in the figure, with the somatosensory interaction system activated, when the hand enters the right-side sensing area and stays there, the screen keeps scrolling to the right, continuously or discretely, until no content remains to display; if the hand leaves the right-side sensing area in the meantime, scrolling stops and the content currently reached stays on display.
  • please refer to FIG. 7, a schematic diagram of another operation of controlling screen scrolling through somatosensory interaction according to an embodiment of the present invention.
  • as shown in the figure, with the somatosensory interaction system activated, when the hand enters the left-side sensing area and stays there, the screen keeps scrolling to the left, continuously or discretely, until no content remains to display; if the hand leaves the left-side sensing area in the meantime, scrolling stops and the content currently reached stays on display.
  • the two figures are only illustrative. While the hand stays in a predetermined sensing area, the screen may scroll toward that area or, according to the preset, in the opposite direction; the scrolling direction can be configured to the user's needs. For example, when the hand enters and stays in the right-side sensing area of FIG. 6, the screen may instead scroll to the left continuously or discretely, and when the hand enters and stays in the left-side sensing area of FIG. 7, the screen may instead scroll to the right.
  • for configurations with upper and lower sensing areas, the somatosensory scrolling operation is similar; the present invention does not describe every case one by one.
  • in the somatosensory screen-scrolling control method provided by the above embodiment of the present invention, a three-dimensional image of a human body part is collected while the somatosensory interaction system is activated, and feature parameters, comprising the three-dimensional coordinates and the motion trajectory of the body part, are extracted from that image. From the three-dimensional coordinates it is determined whether the body part has entered the predetermined sensing area; when it has, the screen is controlled to scroll according to a predetermined rule based on the motion trajectory. In this way, screen scrolling can be controlled by sensing the spatial motion of a body part, without relying on an external input device, giving the user a better experience.
  • FIG. 3 is a schematic structural diagram of a somatosensory interaction system according to an embodiment of the present invention.
  • the somatosensory interaction system of this embodiment is used to execute the somatosensory screen-scrolling control method of the embodiment shown in FIG. 1.
  • as shown in the figure, the somatosensory interaction system 100 of the present embodiment includes an acquisition module 11, a feature extraction module 12, a judgment module 13, and a control module 14, wherein:
  • the acquisition module 11 is configured to collect, with the somatosensory interaction system activated, a three-dimensional image of a human body part;
  • in this embodiment of the invention, the somatosensory interaction system must first be activated.
  • FIG. 4 is a schematic structural diagram of another somatosensory interaction system according to an embodiment of the present invention.
  • in addition to the same functional modules as the somatosensory interaction system of the embodiment shown in FIG. 3, the system provided in this embodiment further includes an activation module 15, which is used to control activation of the somatosensory interaction system.
  • the functional modules shared with FIG. 3 have the same specific implementations; for details, please refer to the related descriptions below.
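  • the wiring of these modules might be sketched as follows; the class and method names are invented here for illustration and do not appear in the patent.

```python
class SomatosensoryInteractionSystem:
    """Sketch of the module composition of FIGS. 3-5 (system 100)."""

    def __init__(self, acquisition, extractor, judge, controller, display=None):
        self.acquisition = acquisition  # module 11: 3D image capture
        self.extractor = extractor      # module 12: feature extraction
        self.judge = judge              # module 13: sensing-area judgment
        self.controller = controller    # module 14: scroll control
        self.display = display          # module 16: prompts and synced icon

    def tick(self):
        """Process one captured frame while the system is activated."""
        image = self.acquisition.capture()
        coords, trajectory = self.extractor.extract(image)
        if self.judge.in_sensing_area(coords):
            if self.display is not None:
                self.display.show_icon(coords)   # icon follows the body part
            self.controller.scroll(trajectory)   # scroll per predetermined rule
```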
  • FIG. 5 is a schematic structural diagram of an activation module according to an embodiment of the present invention.
  • the activation module 15 includes an acquisition unit 151, a determination unit 152, a conversion unit 153, and an activation unit 154, where:
  • the acquisition unit 151 is configured to collect a three-dimensional image;
  • the acquisition unit 151 acquires a three-dimensional image within a predetermined spatial range through a 3D sensor.
  • the acquired image includes every object within the 3D sensor's field of view.
  • for example, if a table, a chair, and a person are in front of the 3D sensor, the acquired image includes all of these objects.
  • the determining unit 152 is configured to process the three-dimensional image, determine whether it contains a three-dimensional image of the human body part used to activate the somatosensory interaction system, and output the result to the conversion unit 153.
  • the determining unit 152 processes the three-dimensional image acquired by the 3D sensor and determines whether it contains a three-dimensional image of the body part used to activate the system; for example, if the preset activation body part is a human hand, the unit checks whether a human hand appears in the acquired image.
  • the conversion unit 153 is configured to process the three-dimensional stereoscopic image of the human body part and convert it into an activation command.
  • converting the three-dimensional image of the body part into an activation instruction specifically includes: performing feature extraction on the image to obtain feature parameters, the feature parameters including the three-dimensional coordinates and the spatial motion trajectory of the body part; matching the feature parameters against the pre-stored feature parameters for activating the somatosensory interaction system; and, when the matching degree reaches a predetermined threshold, retrieving the instruction corresponding to the pre-stored parameters as the activation instruction.
  • feature extraction is performed on the collected three-dimensional image of the body part to obtain feature parameters comprising a position parameter and a motion-trajectory parameter.
  • the position parameter is the spatial position of the body part, expressed in three-dimensional coordinates; the motion trajectory is the path of the body part through space.
  • for a palm-grip action, for example, the extracted parameters include the palm's actual three-dimensional coordinates X, Y, Z, which fix the positional relationship between the palm and the 3D sensor, as well as the palm's spatial motion trajectory, i.e., the trajectory of the gripping action.
  • after extraction, the feature parameters are matched against the pre-stored feature parameters for activating the somatosensory interaction system.
  • suppose, for example, that the system is activated by a palm gesture whose pre-stored feature parameters are A, B, and C. When a three-dimensional image of a palm is collected and the extracted feature parameters are A', B', and C', then A', B', C' are matched against A, B, C, and it is judged whether the matching degree reaches a predetermined threshold.
  • when the matching degree reaches the predetermined threshold, the instruction corresponding to the pre-stored feature parameters is retrieved as the activation instruction. The predetermined threshold is a preset matching-degree value; it may be set to 80%, for instance, so that activation occurs once the matching degree reaches 80% or more.
  • as a preferred implementation, when the matching degree reaches the predetermined threshold, the system may further determine whether the three-dimensional image of the activation body part persists in the captured images for a predetermined time, and retrieve the activation instruction only when it does. This effectively prevents false triggering of the somatosensory interaction system.
  • suppose, for example, that the preset body part for activating the somatosensory interaction system is the palm. A user chatting with another person in front of the 3D sensor may inadvertently make a palm gesture. After collecting and recognizing the gesture, the system further checks whether the palm remains in the image for the predetermined time; if it does not, the system concludes that this was merely a misoperation and does not activate.
  • the predetermined time can be preset as needed, for example to 10 seconds or 30 seconds.
  • as a further refinement, while the system has recognized the body part but the predetermined time has not yet elapsed, a progress bar may be displayed on the screen to indicate the activation state of the somatosensory interaction system.
  • the progress bar can display, in real time, the activation speed, the degree of completion, the remaining work, and the estimated processing time. It can be rendered as a rectangular strip; when it is full, the activation condition is met and the somatosensory interaction system is activated. This gives the user an intuitive view of the activation state and also lets a user who triggered activation by mistake stop the gesture in time, avoiding false triggering.
  • the activation unit 154 is configured to activate the somatosensory interaction system in accordance with the activation instruction.
  • the somatosensory interaction system is activated to enter the somatosensory interaction state.
  • the acquisition module 11 collects a three-dimensional stereoscopic image of a human body part entering the sensing area under the activation state of the somatosensory interaction system.
  • a three-dimensional stereoscopic image of the human body part within a predetermined spatial range is acquired by the 3D infrared sensor.
  • the 3D infrared sensor can collect three-dimensional images of objects in space; the acquired image includes the spatial position coordinates and the spatial motion trajectory of each object.
  • the human body part mentioned in the embodiment of the present invention may be a human hand. Of course, it can also be other human body parts for operation such as a human face, a human foot, and the like.
  • the feature extraction module 12 is configured to perform feature extraction on the three-dimensional image of the human body part to acquire feature parameters, where the feature parameters include three-dimensional coordinates of the human body part and spatial motion trajectories of the human body part;
  • the feature extraction module 12 performs feature extraction on the collected three-dimensional image of the body part to obtain the feature parameters of the captured scene, where the feature parameters include the spatial three-dimensional coordinates of the body part and its spatial motion trajectory.
  • through feature extraction, it is possible to recognize the specific spatial position of the body part relative to the 3D sensor and the action the body part performs. For a hand grip, for example, collecting the three-dimensional image of the grip and extracting its features makes it possible to determine the hand's specific spatial position and to recognize the motion as a gripping action.
  • the judgment module 13 is configured to determine, from the three-dimensional coordinates of the body part, whether the body part has entered the predetermined sensing area;
  • the predetermined sensing area here may be a preset spatial range for sensing the screen scrolling action and responding.
  • the control module 14 is configured to: when the body part enters the predetermined sensing area, the control screen scrolls according to a predetermined rule according to the motion track of the body part.
  • when the body part enters the predetermined sensing area, the control module 14 controls the screen to scroll according to the predetermined rule based on the spatial motion trajectory among the extracted feature parameters.
  • controlling the screen to scroll according to the spatial motion trajectory under the predetermined rule specifically includes: matching the spatial motion trajectory of the body part against pre-stored trajectories for controlling screen scrolling, and, when the match reaches a predetermined matching threshold, controlling the screen to scroll according to the predetermined rule.
  • for example, an upward palm push may be preset to correspond to upward scrolling and a downward palm push to downward scrolling; when an upward push is captured, the screen is controlled to scroll up, and when a downward push is captured, the screen is controlled to scroll down.
  • when the body part has been out of the predetermined sensing area for a predetermined time, the somatosensory interaction system enters a locked state; scrolling interaction can resume only after the system is activated again.
  • to further control the scrolling direction, the predetermined sensing area is divided into a left-side, right-side, upper-side, or lower-side sensing area of the screen, each at a predetermined distance greater than 0 from the center point of the screen.
  • for example, the spatial regions corresponding to a predetermined area on the left of the screen and a predetermined area on the right may be set as the sensing areas; as long as the body part enters the left-side sensing area and performs the preset scrolling action, the screen is controlled to scroll to the left, and if the body part enters the right-side sensing area, the screen is controlled to scroll to the right.
  • configurations using other regions as sensing areas are implemented similarly and are not described one by one in this embodiment.
  • controlling the screen to scroll according to the motion trajectory of the body part under the predetermined rule includes: controlling the screen to scroll, continuously or discretely, toward the predetermined sensing area according to the motion trajectory of the body part.
  • the continuous mode means the screen content scrolls smoothly and can stop at any position, while the discrete mode means the content scrolls in steps, one full screen at a time, so the content always comes to rest at a fixed position.
  • as long as the body part remains in the predetermined sensing area, the screen keeps scrolling toward that area according to the predetermined rule until there is no more content to display; if the body part leaves the sensing area midway, scrolling stops and the content currently on screen remains displayed.
  • take a picture-browsing interface as an example. When an action matching the preset scrolling action is captured: if continuous scrolling is preset, the pictures scroll smoothly toward the predetermined sensing area, stopping when the body part leaves the area and displaying the picture reached at that moment; if the body part does not leave, the screen scrolls until no pictures remain. If discrete scrolling is preset, the screen scrolls one full screen of pictures toward the sensing area, pauses for a predetermined time, then scrolls the next full screen, and so on; if the body part leaves the sensing area, the currently displayed interface remains, and otherwise scrolling continues one full screen at a time until no pictures remain.
  • alternatively, instead of defining a single predetermined rule, different preset actions may map to different scrolling modes. For example, with the predetermined spatial regions on the left and right of the screen preset as sensing areas and a palm slide preset as the scrolling action, each single slide of the palm may trigger one discrete screen scroll, while a continuous palm slide triggers continuous scrolling.
  • the somatosensory interaction system of the present embodiment may further include a display module 16 for displaying an icon moving synchronously with the human body part on the screen when the human body part enters the predetermined sensing area.
  • the icon that moves synchronously with the human body part may be an icon similar to the human body part, for example, the human body part is a human hand, and the icon may be a hand shape icon. Of course, it can be other forms of icons, such as triangle icons, dot icons, and so on.
  • the icon on the screen follows the movement of the body part and moves correspondingly on the screen. For example, the human hand moves to the right in space, and the icon also moves to the right along the screen.
  • the display module 16 is further configured to display a corresponding prompt on the screen when the body part enters the predetermined sensing area.
  • the display module 16 may display the predetermined sensing area in a highlighted state to prompt the user to perform a control action in the predetermined sensing area.
  • the predetermined sensing area can also prompt the user in other ways, such as accentuated display, flashing, or stripes in the display area.
  • the embodiment of the present invention further provides an electronic device, where the electronic device includes the somatosensory interaction system described in the foregoing embodiment.
  • the electronic device can be, but is not limited to, a smart TV, a smart phone, a tablet computer, a notebook computer, and the like.
  • with the somatosensory interaction system of the above embodiments, a three-dimensional image of a human body part is collected while the system is activated, and feature parameters, comprising the three-dimensional coordinates and the motion trajectory of the body part, are extracted from that image. From the three-dimensional coordinates it is determined whether the body part has entered the predetermined sensing area; when it has, the screen is controlled to scroll according to a predetermined rule based on the motion trajectory. In this way, screen scrolling can be controlled by sensing the spatial motion of a body part, without relying on an external input device, giving the user a better experience.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division into modules or units is merely a logical functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the technical solution of the present invention, in essence or in the part that contributes over the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause a computer device (which may be a personal computer, server, or network device, etc.) or a processor to perform all or part of the steps of the methods described in various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Abstract

A somatosensory screen-scrolling control method, a somatosensory interaction system, and an electronic device. The method includes: with the somatosensory interaction system activated, collecting a three-dimensional image of a human body part (S101); performing feature extraction on the three-dimensional image to acquire feature parameters, the feature parameters including the three-dimensional coordinates of the body part and the motion trajectory of the body part (S102); determining, from the three-dimensional coordinates, whether the body part has entered a predetermined sensing area (S103); and, when the body part enters the predetermined sensing area, controlling the screen to scroll according to a predetermined rule based on the motion trajectory of the body part (S104). With this method, screen scrolling can be controlled by sensing the spatial motion of a human body part, without relying on an external input device, giving the user a better experience.

Description

一种体感控制屏幕滚动方法、体感交互系统及电子设备
【技术领域】
本发明人机交互领域,具体涉及一种体感控制屏幕滚动方法、体感交互系统及电子设备。
【背景技术】
人机交互技术是指通过输入输出设备,以有效的方式实现人与机器对话的技术。现有的人机交互的交互方式通常是通过鼠标、键盘、触摸屏或者手柄等外部设备与机器系统进行交互,机器系统再做出相应的响应。比如当需要控制屏幕滚动时,要么通过鼠标拖动实现屏幕滚动,要么通过人体部位如手指等在触摸屏上进行滑动实现屏幕滚动。
现有的控制屏幕滚动的方式,都必须与输入设备比如鼠标、触摸屏进行直接接触才能完成屏幕滚动操作,使得用户控制屏幕滚动的操作必须依赖于外部设备,束缚用户的行为方式,具体实现显得不自然,不真实。
【发明内容】
本发明主要解决的技术问题是提供一种体感控制屏幕滚动方法、体感交互系统及电子设备,能够不需要依赖外部输入设备,通过感应人体部位的动作,就能控制屏幕进行滚动。
为解决上述技术问题,本发明采用的一个技术方案是:提供一种体感控制屏幕滚动方法,所述方法包括:在体感交互系统激活状态下,采集人体部位的三维立体图像;将所述人体部位的三维立体图像进行特征提取获取特征参数,所述特征参数包括所述人体部位的三维坐标以及所述人体部位的运动轨迹;根据所述人体部位的三维坐标,判断所述人体部位是否进入预定感应区域;当所述人体部位进入预定感应区域时,控制屏幕根据所述人体部位的运动轨迹按照预定规则滚动。
其中,所述预定感应区域分为距离屏幕中心点预定距离的屏幕左侧感应区域、屏幕右侧感应区域、屏幕上侧感应区域以及屏幕下侧感应区域,所述预定距离大于0;所述控制屏幕根据所述人体部位的运动轨迹按照预定规则滚动包括:控制屏幕根据所述人体部位的运动轨迹以连续或离散的方式向所述预定感应区域方向滚动。
其中,所述方法还包括:当所述人体部位进入预定感应区域时,在屏幕上显示与所述人体部位同步移动的图标。
其中,所述方法还包括:当所述人体部位进入预定感应区域时,屏幕显示相应的提示。
其中,所述人体部位为人手。
为解决上述技术问题,本发明采用的另一个技术方案是:提供一种体感交互系统,所述系统包括采集模块、特征提取模块、判断模块以及控制模块,其中:所述采集模块用于在体感交互系统激活状态下,采集人体部位的三维立体图像;所述特征提取模块用于将所述人体部位的三维立体图像进行特征提取获取特征参数,所述特征参数包括所述人体部位的三维坐标以及所述人体部位的运动轨迹;所述判断模块用于根据所述人体部位的三维坐标,判断所述人体部位是否进入预定感应区域;所述控制模块用于,当所述人体部位进入预定感应区域时,控制屏幕根据所述人体部位的运动轨迹按照预定规则滚动。
其中,所述预定感应区域分为距离屏幕中心点预定距离的屏幕左侧感应区域、屏幕右侧感应区域、屏幕上侧感应区域以及屏幕下侧感应区域,所述预定距离大于0;所述控制模块用于控制屏幕根据所述人体部位的运动轨迹以连续或离散的方式向所述预定感应区域方向滚动。
其中,所述系统还包括显示模块,所述显示模块用于,当所述人体部位进入预定感应区域时,在屏幕上显示相应的提示。
其中,所述显示模块还用于当所述人体部位进入预定感应区域时,将所述预定感应区域对应到屏幕上的平面区域以高亮的方式进行显示。
为解决上述技术问题,本发明提供的还有一种技术方案是:提供一种电子设备,所述电子设备包括体感交互系统,所述体感交互系统包括采集模块、特征提取模块、判断模块以及控制模块,其中:所述采集模块用于在体感交互系统激活状态下,采集人体部位的三维立体图像;所述特征提取模块用于将所述人体部位的三维立体图像进行特征提取获取特征参数,所述特征参数包括所述人体部位的三维坐标以及所述人体部位的运动轨迹;所述判断模块用于根据所述人体部位的三维坐标,判断所述人体部位是否进入预定感应区域;所述控制模块用于,当所述人体部位进入预定感应区域时,控制屏幕根据所述人体部位的运动轨迹按照预定规则滚动。
其中,所述预定感应区域分为距离屏幕中心点预定距离的屏幕左侧感应区域、屏幕右侧感应区域、屏幕上侧感应区域以及屏幕下侧感应区域,所述预定距离大于0;所述控制模块用于控制屏幕根据所述人体部位的运动轨迹以连续或离散的方式向所述预定感应区域方向滚动。
其中,所述系统还包括显示模块,所述显示模块用于当所述人体部位进入预定感应区域时,在屏幕上显示与所述人体部位同步移动的图标。
其中,所述显示模块还用于,当所述人体部位进入预定感应区域时,在屏幕上显示相应的提示。
本发明的有益效果是:区别于现有技术的情况,本发明在激活体感交互系统的状态下,采集人体部位的三维立体图像,根据人体部位的三维立体图像进行特征提取获取特征参数,其中特征参数包括人体部位的三维坐标以及人体部位的运动轨迹,根据人体部位的三维坐标,判断人体部位是否进入预定感应区域,当人体部位进入预定感应区域时,控制屏幕根据人体部位的运动轨迹按照预定规则滚动。通过这样的方式,不需要依赖于外部输入设备,通过感应人体部位的空间动作,就能控制屏幕滚动,给用户更好的使用体验。
【附图说明】
图1是本发明实施例提供的一种体感控制屏幕滚动方法的流程图;
图2是本发明实施例提供的激活体感交互系统的流程图;
图3是本发明实施例提供的一种体感交互系统的结构示意图;
图4是本发明实施例提供的另一种体感交互系统的结构示意图;
图5是本发明实施例提供的体感交互系统的激活模块的结构示意图;
图6是本发明实施例提供的一种通过体感控制屏幕滚动的操作示意图;
图7是本发明实施例提供的另一种通过体感控制屏幕滚动的操作示意图。
【具体实施方式】
请参阅图1,图1是本发明实施例提供的一种体感控制屏幕滚动方法,如图所示,本实施例的体感控制屏幕滚动的方法包括:
S101:在体感交互系统激活状态下,采集人体部位的三维立体图像;
在本发明实施例中,需要进行体感交互之前,需要先激活体感交互系统。
其中,请进一步参阅图2,图2是本发明实施例提供的激活体感交互系统的流程图,如图所示,激活体感交互系统包括以下步骤:
S201:采集三维立体图像;
通过3D传感器采集预定空间范围内的三维立体图像。所采集的三维立体图像包括3D传感器镜头监控范围的所有物体。比如3D传感器镜头前包括桌子、椅子以及人,那么所采集的三维立体图像包括所有的这些物件。
S202:对三维立体图像进行处理,判断三维立体图像是否包含用于激活体感交互系统的人体部位的三维立体图像;
对3D体感器采集的三维立体图形进行处理,判断该三维立体图像中是否包含用于激活体感交互系统的人体部位的三维立体图像。比如预设的用于激活体感交互系统的人体部位为人手,则从采集的三维立体图像中识别是否包括人手。如果三维立体图像中包括用于激活体感交互系统的人体部位的三维立体图像,则执行步骤S203。
S203:对人体部位的三维立体图像进行处理,转化为激活指令;
其中,对人体部位的三维立体图像进行处理,转化为激活指令具体包括:将人体部位的三维立体图像进行特征提取获取特征参数,特征参数包括人体部位的三维坐标以及人体部位的空间运动轨迹,将特征参数与预存的激活体感交互系统的特征参数进行匹配,当匹配度达到预定阈值时,获取与预存的特征参数对应的指令,以作为激活指令。
将采集的人体部位的三维立体图像进行特征提取,获取得到特征参数,特征参数包括位置参数以及动作轨迹参数。位置参数即人体部位所处的空间位置,用三维坐标表示,运动轨迹即人体部位在空间上的运动轨迹。比如一个手掌抓握的动作,其参数提取即包括手掌当前所处的实际三维坐标X、Y、Z的具体数值,以确定手掌与3D传感器的具体位置关系。还包括手掌的空间运动轨迹即抓握的动作轨迹。
在提取得到特征参数以后,将提取的特征参数与预存的用于激活体感交互系统的特征参数进行匹配。
比如通过一个手掌的动作来激活体感交互系统,预存的手掌的动作的特征参数包括A、B、C,当前采集到一个手掌的三维立体图像,提取得到的特征参数包括A'、B'、C',将A'、B'、C'与A、B、C进行匹配,并判断匹配度是否达到预定阈值。
当提取的特征参数与预存的用于激活体感交互系统的特征参数的匹配度达到预定阈值时,获取与预存的特征参数对应的指令,以作为激活指令。预定阈值是预先设置的匹配程度的值,比如可以设置预定阈值为80%,当匹配度达到80%或以上,即获取与预存的特征参数对应的指令,以作为激活指令。
作为一种优选的实现方案,在提取的特征参数与预存的用于激活体感交互系统的特征参数的匹配度达到预定阈值时,系统可以进一步判断所述三维立体图像中包含用于激活体感交互系统的人体部件的三维立体图像所持续的时间是否达到预定时间。在持续的时间达到预定时间时,才获取与预存的特征参数对应的指令,以作为激活指令。通过这样的方式,可以有效防止误触发激活体感交互系统。
比如预设的用于激活体感交行系统的人体部件为手掌。当3D体感器前面用户当前正与另一个人在聊天,过程中可能不经意的会做出一个手掌的动作,系统在采集并识别出这个手掌动作之后,进一步判断立体图像中包含该手掌是否持续预定时间,如果没有持续达到预定时间,可以判断到这只是一个误操作,则不会激活体感交互系统。其中,这里的预定时间可以根据需要预先设定,比如设定为10秒、30秒等等。
作为一种更进一步的优选方案,当系统采集并识别到人体部位且持续未达到预定时间之前,可以在屏幕上显示一个进度条,以提示体感交互系统激活状态。进度条可以实时的,以图片形式显示系统激活的速度,完成度,剩余未完成任务量的大小,和可能需要处理时间。作为一种可能的实现,进度条可以长方形条状显示。当进度条满时,即表示达到激活体感交互系统条件,激活体感交互系统。通过这样的方式,能够让用户对体感交互系统激活状态的有直观的了解,也能够让误操作的用户及时停止手势动作以避免误触发激活体感交互系统。
S204:根据激活指令激活体感交互系统。
根据获取的激活指令,激活体感交互系统,以进入体感交互状态。
其中,在激活体感交互系统之后,可以给用户相应的体感交互系统已被激活的提示。其中,可以通过将屏幕预定区域以高亮状态进行显示作为提示。这里的预定区域可以是预设的体感感应区域对应到屏幕上的平面区域,比如屏幕左侧的一定面积的区域,或屏幕右侧一定面积的区域等。当然,也可以是整个屏幕。
当然,也可以通过其他的方式给用户提示,比如通过弹出体感交互系统已激活的提示,或者通过语音提示等等,本发明对此不作限定。
另外,在体感交互系统被激活后,在屏幕上显示与人体部位同步移动的图标。其中,与人体部位同步移动的图标可以是跟所述人体部位相似的图标,比如人体部位是人手,该图标可以是一个手形状的图标。当然,也可以是其他形式的图标,比如三角图标,圆点图标等。该屏幕上的图标跟随人体部位的移动而在屏幕上对应移动。比如人手在空间上向右移动,图标也跟随在屏幕上向右移动。
在体感交互系统激活状态下,通过3D红外传感器采集预定空间范围内人体部位的三维立体图像。3D红外传感器能够采集空间位置上物件的三维立体图像,所采集的三维立体图像包括物件所处的空间位置坐标以及空间运动轨迹。
本发明实施例所述的空间运动轨迹,包括人体部件的姿势以及人体部件的具体动作。比如用户在3D体感器前面做一个握拳的姿势并在空间范围内滑动,那么3D体感器采集该用户手部的三维立体图像,对该手部的三维立体图像进行特征提取,即获取到该手部距离3D传感器的三维坐标,以及该手部的握拳姿势和滑动的动作。其他三维立体图像的处理与此类似,本实施例不一一举例进行说明。
其中,本发明实施例所提到的人体部位可以是人手。当然也可以是其他的用于操作的人体部位比如人脸、人脚等等。
S102:将人体部位的三维立体图像进行特征提取获取特征参数,特征参数包括人体部位的三维坐标以及人体部位的空间运动轨迹;
将所采集的人体部位的三维立体图像进行特征提取,以获取该采集的三维立体空间的特征参数,其中,这些特征参数包括该人体部位所处的空间三维坐标以及人体部位的空间运动轨迹。通过特征提取,能够识别人体部位距离3D传感器的具体空间位置以及人体部位的动作。比如人手做的一个抓握的动作,通过采集该抓握的立体图像,并通过特征提取,就能根据该特征提取的参数确定这个人手的具体空间位置并识别出该动作为一个抓握的动作。
作为一种可能的实现方式,本发明对动作的识别之前,包括一个学习训练以建立一个训练数据库的过程。比如为识别一个人手抓握的动作,系统会采集各种不同的抓握动作的三维立体图像,对这些不同的抓握动作进行学习,以获取用于识别这个具体动作的具体特征参数。针对每个不同的动作,系统都会做这么一个学习训练过程,各种不同具体动作对应的具体特征参数,构成训练数据库。当系统获取到一个三维立体图像时,就会将该立体图像进行特征提取,到训练数据库中找到与之匹配的具体动作,以作为识别结果。
S103:根据人体部位的三维坐标,判断人体部位是否进入预定感应区域;
从提取的特征参数中的人体部位的三维坐标,判断人体部位是否进入预定感应区域。其中,这里的预定感应区域可以是预先设置的用于感应屏幕滚动动作并响应的一定空间范围。
其中,当人体部位进入预定感应区域时,屏幕显示相应的提示。作为一种可能的实现方式,可以将预定感应区域以高亮状态进行显示以提示用户可以在该预定感应区域内执行控制动作。当然,预定感应区域也可以以别的方式提示用户。比如突出显示,闪动显示、显示区域条纹等等。
当特征参数中的人体部位的三维坐标落入预定感应区域范围,则执行步骤S104。否则,不进行响应。
S104:控制屏幕根据人体部位的运动轨迹按照预定规则滚动;
当人体部位进入预定感应区域时,根据特征提取参数中人体部位的空间运动轨迹控制屏幕按照预定规则滚动。
其中,控制屏幕根据人体部位的空间运动轨迹按照预定规则滚动具体包括:将人体部位的空间运动轨迹与预存的控制屏幕滚动的空间运动轨迹进行匹配,当匹配达到预定匹配阈值时,按照预定规则控制屏幕滚动。
比如预先设置手掌向上推动的动作对应屏幕向上滚动,手掌向下推动的动作对应屏幕向下滚动,当采集到手掌向上推动的动作时即控制屏幕向上滚动,采集到手掌向下推动的动作即控制屏幕向下滚动。
当人体部位离开预定感应区域达到预定时间时,体感交互系统进入锁定状态,只有再次激活体感交互系统后才能进入体感控制控制屏幕滚动的交互。
为了进一步控制屏幕滚动的方向,将预定感应区域分为距离屏幕中心点预定距离的屏幕左侧感应区域、屏幕右侧感应区域、屏幕上侧感应区域或屏幕下侧感应区域,所述预定距离大于0。比如可以设置屏幕左侧预定面积以及屏幕右侧预定面积对应的空间区域为预定感应区域,只要人体部位进入屏幕左侧感应区域,并执行与预设的滚动屏幕相同的动作,则控制屏幕向屏幕左侧滚动,如果人体部位进入屏幕右侧感应区,则控制屏幕向屏幕右侧进行滚动。相对于设置其他区域作为感应区域的,具体实现类似,本实施例不一一进行举例说明。
其中,控制屏幕根据人体部位的运动轨迹按照预定规则滚动包括:控制屏幕根据人体部位的运动轨迹以连续或离散的方式向预定感应区域方向滚动。
其中,连续的方式是指屏幕内容以流畅方式滚动,内容停止于屏幕的任意位置,而离散的方式是指屏幕内容以离散方式滚动,每次滚动一整屏的内容,屏幕内容停留在固定位置。
只要人体部位没有离开预定感应区,屏幕会持续按照预定规则向预定感应区进行滚动直至没有可供显示的内容为止。如果中途人体部位离开预定感应区,则将滚动到当前屏幕节目的内容进行显示。
以浏览图片的屏幕界面为例,当采集到符合预定屏幕滚动的动作时,如果预设是连续的方式滚动屏幕,那么图片以流畅的方式向预定感应区域滚动,直至人体部位离开预定感应区时停止,并将滚动到当前屏幕的图片进行显示,如果人体部位没有离开预定感应区,屏幕将一直滚动直到没有图片可供显示为止。而如果预设的是离散的方式滚动屏幕,那么每次想预定感应区域滚动一整屏的图片并停留预定时间,再继续滚动下一整屏的图片,依次类推,如果人体部位离开预定感应区,则停留在当前屏幕上显示的界面,如果人体部位不离开预定感应区,则按照一整屏的图片依次滚动,直至没有可显示的图片为止。
当然,也可以不定义预定规则,通过预设的不同的动作对应不同的屏幕滚动方式。比如预设屏幕左侧预定空间区域范围以及屏幕右侧预定空间区域范围为预定感应区,预设手掌滑动的动作为控制屏幕滚动的动作,可以设定手掌每执行一次滑动动作对应进行一次屏幕离散的滚动,而手掌持续滑动的动作对应进行屏幕连续滚动。
请参阅图6,图6是本发明实施例提供的一种通过体感控制屏幕滚动的操作示意图,如图所示,在体感交互系统激活状态下,当人手进入右侧感应区域,如果人手一直停留在右侧感应区域,那么屏幕会以连续或离散的方式持续向右滚动,直至没有内容可供显示为止。如果期间人手离开右侧感应区域,停止滚动屏幕,屏幕将滚动到当前的内容进行显示。
请参阅图7,图7是本发明实施例提供的另一种通过体感控制屏幕滚动的操作示意图,如图所示在体感交互系统激活状态下,当人手进入左侧感应区域,如果人手一直停留在左侧感应区域,那么屏幕会以连续或离散的方式持续向左滚动,直至没有内容可供显示为止。如果期间人手离开右侧感应区域,停止滚屏,屏幕将滚动到当前的内容进行显示。
上述两个图只是示意性的,并且,在人手停留在预定感应区域,屏幕可以是按照预设往预定感应区域滚动,也可以是按照预设往预定感应区域相反的方向进行滚动,具体往哪个方向滚动可以根据用户需要设定。比如上图6的当人手进入右侧感应区域后,人手如果一直停留在右侧感应区域,那么屏幕可以连续或离散的方式向左滚动,而当图7的人手进入左侧感应区域,人手如果一直停留在左侧感应区域,那么屏幕可以连续或离散的方式向右滚动。针对设置上下感应区域的情况,屏幕滚动的体感操作类似,本发明不一一举例说明。
上述本发明实施例提供的体感控制屏幕滚动的方法,在激活体感交互系统的状态下,采集人体部位的三维立体图像,根据人体部位的三维立体图像进行特征提取获取特征参数,其中特征参数包括人体部位的三维坐标以及人体部位的运动轨迹,根据人体部位的三维坐标,判断人体部位是否进入预定感应区域,当人体部位进入预定感应区域时,控制屏幕根据人体部位的运动轨迹按照预定规则滚动。通过这样的方式,不需要依赖于外部输入设备,通过感应人体部位的空间动作,就能控制屏幕滚动,给用户更好的使用体验。
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of a somatosensory interaction system according to an embodiment of the present invention. The somatosensory interaction system of this embodiment performs the method for controlling screen scrolling by somatosensory control of the embodiment shown in FIG. 1. As shown, the somatosensory interaction system 100 of this embodiment includes an acquisition module 11, a feature extraction module 12, a judgment module 13 and a control module 14, wherein:
the acquisition module 11 is configured to capture three-dimensional stereoscopic images of a human body part while the somatosensory interaction system is activated;
In the embodiments of the present invention, the somatosensory interaction system must first be activated before somatosensory control of screen scrolling can be used.
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of another somatosensory interaction system according to an embodiment of the present invention. Besides the same functional modules as the system of the embodiment shown in FIG. 3, this system further includes an activation module 15 configured to control activation of the somatosensory interaction system. The functional modules shared with FIG. 3 are implemented identically; see the related description below.
Referring further to FIG. 5, FIG. 5 is a schematic structural diagram of the activation module according to an embodiment of the present invention. As shown, the activation module 15 includes an acquisition unit 151, a judgment unit 152, a conversion unit 153 and an activation unit 154, wherein:
the acquisition unit 151 is configured to capture three-dimensional stereoscopic images;
The acquisition unit 151 captures three-dimensional stereoscopic images within a predetermined spatial range through a 3D sensor. The captured images include every object within the monitoring range of the 3D sensor lens; for example, if a table, a chair and a person are in front of the lens, the captured image includes all of these objects.
The judgment unit 152 is configured to process the three-dimensional stereoscopic image, judge whether it contains a three-dimensional stereoscopic image of a human body part used to activate the somatosensory interaction system, and output the result to the conversion unit 153.
The judgment unit 152 processes the three-dimensional stereoscopic image captured by the 3D sensor and judges whether it contains an image of the body part used for activation. For example, if the preset activating body part is a human hand, it identifies whether the captured image includes a hand.
The conversion unit 153 is configured to process the three-dimensional stereoscopic image of the body part and convert it into an activation instruction.
Processing the three-dimensional stereoscopic image of the body part and converting it into an activation instruction specifically includes: performing feature extraction on the image to obtain feature parameters, the feature parameters including the three-dimensional coordinates of the body part and its spatial motion trajectory; matching the feature parameters against pre-stored feature parameters for activating the somatosensory interaction system; and, when the degree of matching reaches a predetermined threshold, obtaining the instruction corresponding to the pre-stored feature parameters as the activation instruction.
Feature extraction on the captured three-dimensional stereoscopic image of the body part yields feature parameters comprising position parameters and movement-trajectory parameters. The position parameter is the spatial position of the body part, expressed as three-dimensional coordinates; the motion trajectory is the body part's trajectory through space. For a palm-grasp movement, for instance, the extracted parameters include the actual three-dimensional coordinates X, Y and Z of the palm, which fix its position relative to the 3D sensor, as well as the palm's spatial trajectory, namely the grasping movement.
After the feature parameters have been extracted, they are matched against the pre-stored feature parameters for activating the somatosensory interaction system.
For example, if the system is activated by a palm movement whose pre-stored feature parameters are A, B and C, and a three-dimensional stereoscopic image of a palm is captured whose extracted feature parameters are A', B' and C', then A', B' and C' are matched against A, B and C and it is judged whether the degree of matching reaches the predetermined threshold.
When the degree of matching between the extracted feature parameters and the pre-stored activation parameters reaches the predetermined threshold, the instruction corresponding to the pre-stored parameters is obtained as the activation instruction. The predetermined threshold is a preset degree of matching; it may for instance be set to 80%, in which case the activation instruction is obtained once the match reaches 80% or above.
As a preferred implementation, when the degree of matching between the extracted feature parameters and the pre-stored activation parameters reaches the predetermined threshold, the system may further judge whether the three-dimensional stereoscopic images have contained the activating body part for a predetermined duration, and only obtain the activation instruction once that duration is reached. This effectively prevents the somatosensory interaction system from being activated by accident.
Suppose, for example, that the preset activating body part is a palm. If the user in front of the 3D sensor is chatting with someone and inadvertently makes a palm gesture, the system, after capturing and recognizing the gesture, further judges whether the palm persists in the stereoscopic images for the predetermined duration; if it does not, the gesture is judged to be an accidental operation and the system is not activated. The predetermined duration can be preset as needed, for example to 10 seconds or 30 seconds.
As a still further preferred implementation, while the body part has been captured and recognized but the predetermined duration has not yet elapsed, a progress bar may be displayed on the screen to indicate the activation status of the somatosensory interaction system. The progress bar can show, graphically and in real time, the activation speed, the degree of completion, the amount remaining and the likely remaining time; as one possible implementation it may be displayed as a rectangular bar. When the bar is full, the activation condition is met and the system is activated. In this way the user gains an intuitive view of the activation status, and a user who gestured by accident can stop in time to avoid triggering activation.
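A non-limiting sketch of this guard follows; it combines the matching threshold with the hold-duration check and exposes a progress value in [0, 1] that could drive the progress bar. The 80% threshold and 10-second hold reflect the examples given above, while the class design itself is an assumption:

import time

class ActivationGuard:
    def __init__(self, threshold=0.8, hold_s=10.0):
        self.threshold = threshold
        self.hold_s = hold_s
        self._since = None      # when the activation gesture was first seen

    def update(self, match_score):
        """Feed the current frame's match score.
        Returns (activated, progress), with progress in [0, 1]."""
        now = time.monotonic()
        if match_score < self.threshold:
            self._since = None              # gesture lost: reset the bar
            return False, 0.0
        if self._since is None:
            self._since = now               # gesture first recognized
        progress = min((now - self._since) / self.hold_s, 1.0)
        return progress >= 1.0, progress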
The activation unit 154 is configured to activate the somatosensory interaction system according to the activation instruction.
The somatosensory interaction system is activated according to the obtained activation instruction, entering the somatosensory interaction state. With the system activated, the acquisition module 11 captures three-dimensional stereoscopic images of the body part that enters the sensing area.
With the somatosensory interaction system activated, a 3D infrared sensor captures three-dimensional stereoscopic images of the human body part within a predetermined spatial range. The 3D infrared sensor can capture three-dimensional stereoscopic images of objects at spatial positions; the captured images contain the spatial position coordinates of the objects as well as their spatial motion trajectories.
The human body part mentioned in the embodiments of the present invention may be a human hand. It may of course also be another body part used for operation, such as the face or a foot.
The feature extraction module 12 is configured to perform feature extraction on the three-dimensional stereoscopic image of the body part to obtain feature parameters, the feature parameters including the three-dimensional coordinates of the body part and its spatial motion trajectory;
The feature extraction module 12 performs feature extraction on the captured three-dimensional stereoscopic image of the body part to obtain the feature parameters of the captured scene, these parameters including the spatial three-dimensional coordinates of the body part and its spatial motion trajectory. Through feature extraction, the specific spatial position of the body part relative to the 3D sensor and the movement of the body part can be identified. For example, for a grasping movement of a hand, capturing the stereoscopic image of the grasp and extracting its features makes it possible to determine the hand's specific spatial position and to recognize the movement as a grasp.
The judgment module 13 is configured to judge, according to the three-dimensional coordinates of the body part, whether the body part has entered a predetermined sensing area;
Whether the body part has entered the predetermined sensing area is judged from the three-dimensional coordinates of the body part among the extracted feature parameters. The predetermined sensing area here may be a preset spatial range used to sense, and respond to, screen-scrolling movements.
The control module 14 is configured to control the screen to scroll according to a predetermined rule based on the motion trajectory of the body part when the body part enters the predetermined sensing area.
When the body part enters the predetermined sensing area, the control module 14 controls the screen to scroll according to the predetermined rule based on the spatial motion trajectory among the extracted feature parameters.
Controlling the screen to scroll according to the predetermined rule based on the spatial motion trajectory of the body part specifically includes: matching the spatial motion trajectory of the body part against pre-stored trajectories for controlling screen scrolling and, when the match reaches a predetermined matching threshold, controlling the screen to scroll according to the predetermined rule.
For example, an upward palm push may be preset to correspond to scrolling the screen up and a downward palm push to scrolling it down; when an upward push is captured the screen is scrolled up, and when a downward push is captured the screen is scrolled down.
When the body part has been out of the predetermined sensing area for a predetermined time, the somatosensory interaction system enters a locked state, and the scrolling interaction can only be resumed after the system is activated again.
To further control the scrolling direction, the predetermined sensing area is divided into a left-side sensing area, a right-side sensing area, an upper-side sensing area or a lower-side sensing area of the screen, each located at a predetermined distance, greater than 0, from the center point of the screen. For example, spatial regions corresponding to predetermined areas on the left and right sides of the screen may be set as the predetermined sensing areas: whenever the body part enters the left-side sensing area and performs the preset scrolling movement, the screen is controlled to scroll to the left; if the body part enters the right-side sensing area, the screen is controlled to scroll to the right. Setting other areas as sensing areas is implemented similarly and is not described case by case in this embodiment.
Controlling the screen to scroll according to the predetermined rule based on the motion trajectory of the body part includes: controlling the screen to scroll toward the predetermined sensing area in a continuous or discrete manner according to the motion trajectory.
In the continuous manner the screen content scrolls smoothly and may stop at any position, whereas in the discrete manner the content scrolls one full screen at a time and comes to rest at fixed positions.
As long as the body part has not left the predetermined sensing area, the screen keeps scrolling toward it according to the predetermined rule until there is no more content to display. If the body part leaves the sensing area midway, the content scrolled to at that moment is displayed.
Taking a picture-browsing interface as an example, when a movement matching the preset scrolling movement is captured and continuous scrolling is preset, the pictures scroll smoothly toward the predetermined sensing area, stopping when the body part leaves the area, at which point the picture currently scrolled to is displayed; if the body part does not leave the area, the screen keeps scrolling until no pictures remain. If discrete scrolling is preset instead, one full screen of pictures scrolls toward the sensing area and pauses for a predetermined time before the next full screen scrolls in, and so on; if the body part leaves the area, the interface currently displayed remains, and if it does not, the pictures keep scrolling one full screen at a time until none remain.
Alternatively, instead of defining a single predetermined rule, different preset movements may correspond to different scrolling manners. For example, with predetermined spatial regions on the left and right of the screen preset as sensing areas and a palm slide preset as the scrolling movement, each individual slide may trigger one discrete scroll while a sustained slide triggers continuous scrolling.
Referring again to FIG. 4, the somatosensory interaction system of this embodiment may further include a display module 16 configured to display, when the body part enters the predetermined sensing area, an icon on the screen that moves synchronously with the body part.
The icon that moves synchronously with the body part may resemble the body part: for instance, if the body part is a human hand, the icon may be hand-shaped. It may also take other forms, such as a triangle or a dot. The icon on the screen moves correspondingly as the body part moves; for example, when the hand moves to the right in space, the icon also moves to the right on the screen.
The display module 16 is further configured to display a corresponding prompt on the screen when the body part enters the predetermined sensing area.
As one possible implementation, when the body part enters the predetermined sensing area, the display module 16 may display the area in a highlighted state to remind the user that control movements can be performed within it. The user may of course be prompted in other ways, such as emphasized display, flashing display or displaying stripes over the area.
On the basis of the somatosensory interaction system provided by the embodiments of the present invention, an embodiment of the present invention further provides an electronic device that includes the somatosensory interaction system described in the above embodiments. The electronic device may be, but is not limited to, a smart TV, a smartphone, a tablet computer or a laptop computer.
In the somatosensory interaction system provided by the above embodiment of the present invention, with the system activated, a three-dimensional stereoscopic image of a human body part is captured; feature extraction on the image yields feature parameters including the three-dimensional coordinates and the motion trajectory of the body part; whether the body part has entered a predetermined sensing area is judged from the three-dimensional coordinates; and when it has, the screen is controlled to scroll according to a predetermined rule based on the motion trajectory. In this way the screen can be scrolled by sensing the spatial movements of a body part, without relying on any external input device, giving the user a better experience.
In the several embodiments provided by the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. Moreover, the couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses or units, and may be electrical, mechanical or of other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may each exist physically on their own, or two or more of them may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present invention, or the part contributing to the prior art, or all or part of the solution, may be embodied in the form of a software product stored in a storage medium and including several instructions that cause a computer device (which may be a personal computer, a server, a network device or the like) or a processor to execute all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above are merely embodiments of the present invention and do not therefore limit its patent scope; any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (13)

  1. A method for controlling screen scrolling by somatosensory control, characterized in that the method comprises:
    capturing a three-dimensional stereoscopic image of a human body part while a somatosensory interaction system is in an activated state;
    performing feature extraction on the three-dimensional stereoscopic image of the human body part to obtain feature parameters, the feature parameters comprising three-dimensional coordinates of the human body part and a motion trajectory of the human body part;
    judging, according to the three-dimensional coordinates of the human body part, whether the human body part has entered a predetermined sensing area;
    when the human body part enters the predetermined sensing area, controlling a screen to scroll according to a predetermined rule based on the motion trajectory of the human body part.
  2. The method according to claim 1, characterized in that the predetermined sensing area is divided into a left-side sensing area, a right-side sensing area, an upper-side sensing area and a lower-side sensing area of the screen, each located at a predetermined distance from the center point of the screen, the predetermined distance being greater than 0;
    the controlling the screen to scroll according to the predetermined rule based on the motion trajectory of the human body part comprises:
    controlling the screen to scroll toward the predetermined sensing area in a continuous or discrete manner according to the motion trajectory of the human body part.
  3. The method according to claim 1, characterized in that the method further comprises:
    displaying, when the human body part enters the predetermined sensing area, an icon on the screen that moves synchronously with the human body part.
  4. The method according to claim 1, characterized in that the method further comprises:
    displaying a corresponding prompt on the screen when the human body part enters the predetermined sensing area.
  5. The method according to claim 1, characterized in that the human body part is a human hand.
  6. A somatosensory interaction system, characterized in that the system comprises an acquisition module, a feature extraction module, a judgment module and a control module, wherein:
    the acquisition module is configured to capture a three-dimensional stereoscopic image of a human body part while the somatosensory interaction system is in an activated state;
    the feature extraction module is configured to perform feature extraction on the three-dimensional stereoscopic image of the human body part to obtain feature parameters, the feature parameters comprising three-dimensional coordinates of the human body part and a motion trajectory of the human body part;
    the judgment module is configured to judge, according to the three-dimensional coordinates of the human body part, whether the human body part has entered a predetermined sensing area;
    the control module is configured to control a screen to scroll according to a predetermined rule based on the motion trajectory of the human body part when the human body part enters the predetermined sensing area.
  7. The system according to claim 6, characterized in that the predetermined sensing area is divided into a left-side sensing area, a right-side sensing area, an upper-side sensing area and a lower-side sensing area of the screen, each located at a predetermined distance from the center point of the screen, the predetermined distance being greater than 0; the control module is configured to control the screen to scroll toward the predetermined sensing area in a continuous or discrete manner according to the motion trajectory of the human body part.
  8. The system according to claim 6, characterized in that the system further comprises a display module configured to display, when the human body part enters the predetermined sensing area, an icon on the screen that moves synchronously with the human body part.
  9. The system according to claim 8, characterized in that the display module is further configured to display a corresponding prompt on the screen when the human body part enters the predetermined sensing area.
  10. An electronic device, characterized in that the electronic device comprises a somatosensory interaction system, the somatosensory interaction system comprising an acquisition module, a feature extraction module, a judgment module and a control module, wherein:
    the acquisition module is configured to capture a three-dimensional stereoscopic image of a human body part while the somatosensory interaction system is in an activated state;
    the feature extraction module is configured to perform feature extraction on the three-dimensional stereoscopic image of the human body part to obtain feature parameters, the feature parameters comprising three-dimensional coordinates of the human body part and a motion trajectory of the human body part;
    the judgment module is configured to judge, according to the three-dimensional coordinates of the human body part, whether the human body part has entered a predetermined sensing area;
    the control module is configured to control a screen to scroll according to a predetermined rule based on the motion trajectory of the human body part when the human body part enters the predetermined sensing area.
  11. The electronic device according to claim 10, characterized in that the predetermined sensing area is divided into a left-side sensing area, a right-side sensing area, an upper-side sensing area and a lower-side sensing area of the screen, each located at a predetermined distance from the center point of the screen, the predetermined distance being greater than 0; the control module is configured to control the screen to scroll toward the predetermined sensing area in a continuous or discrete manner according to the motion trajectory of the human body part.
  12. The electronic device according to claim 10, characterized in that the somatosensory interaction system further comprises a display module configured to display, when the human body part enters the predetermined sensing area, an icon on the screen that moves synchronously with the human body part.
  13. The electronic device according to claim 12, characterized in that the display module is further configured to display a corresponding prompt on the screen when the human body part enters the predetermined sensing area.
PCT/CN2016/076776 2015-06-05 2016-03-18 一种体感控制屏幕滚动方法、体感交互系统及电子设备 WO2016192439A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510307214.8 2015-06-05
CN201510307214.8A CN104915004A (zh) 2015-05-29 2015-06-05 一种体感控制屏幕滚动方法、体感交互系统及电子设备

Publications (1)

Publication Number Publication Date
WO2016192439A1 true WO2016192439A1 (zh) 2016-12-08

Family

ID=57445763

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/076776 WO2016192439A1 (zh) 2015-06-05 2016-03-18 一种体感控制屏幕滚动方法、体感交互系统及电子设备

Country Status (1)

Country Link
WO (1) WO2016192439A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120204133A1 (en) * 2009-01-13 2012-08-09 Primesense Ltd. Gesture-Based User Interface
CN103488296A (zh) * 2013-09-25 2014-01-01 华为软件技术有限公司 体感交互手势控制方法及装置
CN103809846A (zh) * 2012-11-13 2014-05-21 联想(北京)有限公司 一种功能调用方法及电子设备
CN104915004A (zh) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 一种体感控制屏幕滚动方法、体感交互系统及电子设备


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16802362

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16802362

Country of ref document: EP

Kind code of ref document: A1