WO2016192440A1 - Motion sensing control parameter adjustment method, motion sensing interaction system and electronic device - Google Patents


Info

Publication number
WO2016192440A1
Authority
WO
WIPO (PCT)
Prior art keywords
body part
human body
screen
sensing area
module
Application number
PCT/CN2016/076777
Other languages
French (fr)
Chinese (zh)
Inventor
黄源浩
肖振中
钟亮洪
许宏淮
林靖雄
Original Assignee
深圳奥比中光科技有限公司
Priority claimed from CN201510307213.3A external-priority patent/CN104915003A/en
Application filed by 深圳奥比中光科技有限公司 filed Critical 深圳奥比中光科技有限公司
Publication of WO2016192440A1 publication Critical patent/WO2016192440A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • The invention relates to the field of human-computer interaction, and in particular to a somatosensory control parameter adjustment method, a somatosensory interaction system, and an electronic device.
  • Human-computer interaction technology refers to technology that realizes efficient human-machine dialogue through input and output devices.
  • In existing human-computer interaction, the user usually interacts with the machine system through external devices such as a mouse, a keyboard, a touch screen, or a handheld controller, and the machine system responds accordingly.
  • For example, volume adjustment is implemented by dragging with a mouse, or by sliding a body part such as a finger on the touch screen.
  • Existing control parameter adjustment methods thus require direct contact with an input device such as a mouse or touch screen. The adjustment operation depends on the external device, which constrains the user's behavior and makes the interaction feel neither natural nor intuitive.
  • The technical problem to be solved by the present invention is to provide a somatosensory control parameter adjustment method, a somatosensory interaction system, and an electronic device that can control parameter adjustment by sensing the motion of a human body part, without relying on an external input device.
  • One technical solution adopted by the present invention is a somatosensory control parameter adjustment method comprising: collecting a three-dimensional stereoscopic image of a human body part while the somatosensory interaction system is in an activated state; performing feature extraction on the three-dimensional image of the body part to obtain feature parameters, the feature parameters including the three-dimensional coordinates of the body part and the spatial motion trajectory of the body part; determining, according to the three-dimensional coordinates, whether the body part has entered a predetermined sensing area; and, when the body part enters the predetermined sensing area, performing parameter adjustment according to the spatial motion trajectory of the body part.
  • the parameter includes at least one of volume, brightness, and screen scrolling speed.
  • the method further comprises: when the human body part enters the predetermined sensing area, displaying an icon moving synchronously with the human body part on the screen.
  • the method further comprises: when the human body part enters the predetermined sensing area, the screen displays a corresponding prompt.
  • the human body part is a human hand.
  • The system includes an acquisition module, a feature extraction module, a judgment module, and a control module. The acquisition module collects a three-dimensional stereoscopic image of a human body part while the somatosensory interaction system is in an activated state; the feature extraction module performs feature extraction on the three-dimensional image of the body part to obtain feature parameters, including the three-dimensional coordinates and the spatial motion trajectory of the body part; the judgment module determines, according to the three-dimensional coordinates, whether the body part has entered a predetermined sensing area; and the control module performs parameter adjustment according to the spatial motion trajectory of the body part when it enters the predetermined sensing area.
  • the parameter includes at least one of volume, brightness, and screen scrolling speed.
  • the system further includes a display module, and the display module is configured to display an icon moving synchronously with the human body part on the screen when the human body part enters the predetermined sensing area.
  • the display module is further configured to display a corresponding prompt on the screen when the human body part enters the predetermined sensing area.
  • The present invention further provides an electronic device comprising a somatosensory interaction system, the somatosensory interaction system including an acquisition module, a feature extraction module, a judgment module, and a control module.
  • The acquisition module collects a three-dimensional stereoscopic image of a human body part while the somatosensory interaction system is in an activated state; the feature extraction module performs feature extraction on the three-dimensional image to obtain feature parameters, including the three-dimensional coordinates and the spatial motion trajectory of the body part; the judgment module determines, according to the three-dimensional coordinates, whether the body part has entered a predetermined sensing area; and the control module performs parameter adjustment according to the spatial motion trajectory when the body part enters the predetermined sensing area.
  • the parameter includes at least one of volume, brightness, and screen scrolling speed.
  • the system further includes a display module, and the display module is configured to display an icon moving synchronously with the human body part on the screen when the human body part enters the predetermined sensing area.
  • the display module is further configured to display a corresponding prompt on the screen when the human body part enters the predetermined sensing area.
  • The beneficial effects of the invention, in contrast to the prior art, are as follows: a three-dimensional stereoscopic image of a human body part is collected while the somatosensory interaction system is activated, and feature extraction is performed on the three-dimensional image to obtain feature parameters, the feature parameters including the three-dimensional coordinates of the body part and its spatial motion trajectory.
  • According to the three-dimensional coordinates, it is determined whether the body part has entered a predetermined sensing area, and when it does, parameter adjustment is controlled according to the spatial motion trajectory of the body part. In this way, parameter adjustment can be controlled by sensing the spatial motion of a body part without relying on an external input device, giving the user a better experience.
  • FIG. 1 is a flow chart of a method for adjusting a somatosensory control parameter provided by an implementation of the present invention
  • FIG. 2 is a flowchart of an activated somatosensory interaction system according to an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of a somatosensory interaction system according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of another somatosensory interaction system according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an activation module in a somatosensory interaction system according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram of an operation of adjusting the screen scrolling speed by somatosensory control according to an embodiment of the present invention;
  • FIG. 7 is a schematic diagram of an operation of controlling screen brightness by somatosensory control according to an embodiment of the present invention;
  • FIG. 8 is a schematic diagram of an operation of controlling volume adjustment by somatosensory control according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of a somatosensory control parameter adjustment method according to an embodiment of the present invention. As shown in the figure, the somatosensory control parameter adjustment method of this embodiment includes:
  • S101: collecting a three-dimensional stereoscopic image of a human body part while the somatosensory interaction system is in an activated state;
  • Before somatosensory interaction can be performed, the somatosensory interaction system needs to be activated.
  • FIG. 2 is a flowchart of an activated somatosensory interaction system according to an embodiment of the present invention. As shown in the figure, the activation of the somatosensory interaction system includes the following steps:
  • S201: a three-dimensional stereoscopic image within a predetermined spatial range is acquired by a 3D sensor;
  • The acquired three-dimensional stereoscopic image includes all objects within the monitoring range of the 3D sensor lens.
  • For example, if the scene in front of the 3D sensor lens includes a table, a chair, and a person, the acquired three-dimensional image includes all of these objects.
  • S202: processing the three-dimensional stereoscopic image and determining whether it includes a three-dimensional stereoscopic image of a human body part for activating the somatosensory interaction system;
  • the three-dimensional stereoscopic image acquired by the 3D sensor is processed to determine whether the three-dimensional stereoscopic image includes a three-dimensional stereoscopic image for activating a human body part of the somatosensory interaction system. For example, if the preset human body part for activating the somatosensory interaction system is a human hand, it is recognized whether the human hand is included from the acquired three-dimensional stereoscopic image. If a three-dimensional stereoscopic image for activating a human body part of the somatosensory interaction system is included in the three-dimensional stereoscopic image, step S203 is performed.
  • S203: processing the three-dimensional stereoscopic image of the human body part and converting it into an activation instruction;
  • Processing the three-dimensional image of the body part and converting it into an activation instruction specifically includes: extracting the feature parameters of the three-dimensional image of the body part, the feature parameters including the three-dimensional coordinates of the body part and its spatial motion trajectory; matching the extracted feature parameters against the pre-stored feature parameters for activating the somatosensory interaction system; and, when the match succeeds, acquiring the instruction corresponding to the pre-stored feature parameters as the activation instruction.
  • Feature extraction is performed on the collected three-dimensional images of the body part to obtain feature parameters, which include a position parameter and a motion trajectory parameter.
  • The position parameter, i.e., the spatial position of the body part, is represented by three-dimensional coordinates; the motion trajectory parameter describes the path of the body part's movement in space.
  • For example, for a palm grip motion, parameter extraction includes the actual three-dimensional coordinates X, Y, Z of the palm, which determine the specific positional relationship between the palm and the 3D sensor, as well as the spatial motion trajectory of the palm, i.e., the trajectory of the grip.
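The feature parameters described above (three-dimensional coordinates plus a spatial motion trajectory) can be pictured as a simple record. The following Python sketch is purely illustrative; the patent does not specify any data layout, and the field names and sample values are assumptions:

```python
from dataclasses import dataclass

@dataclass
class FeatureParams:
    """Feature parameters extracted from a 3D image of a body part (illustrative)."""
    x: float          # palm position relative to the 3D sensor, e.g. in meters
    y: float
    z: float          # distance from the sensor
    trajectory: list  # sequence of (x, y, z) samples forming the spatial motion track

# Example: a palm 0.80 m in front of the sensor, moving slightly while gripping
palm = FeatureParams(x=0.10, y=-0.05, z=0.80,
                     trajectory=[(0.10, -0.05, 0.80), (0.11, -0.05, 0.79)])
```

A real system would populate such a record per frame from the 3D sensor's point cloud; here the values are hand-written for illustration.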
  • The extracted feature parameters are matched against the pre-stored feature parameters for activating the somatosensory interaction system.
  • For example, suppose the somatosensory interaction system is activated by a palm motion, and the pre-stored feature parameters of that palm motion are A, B, and C. When a three-dimensional image of a palm is collected, the extracted feature parameters A', B', and C' are matched against A, B, and C, and it is judged whether the matching degree reaches a predetermined threshold.
  • The predetermined threshold is a preset matching-degree value; for example, it may be set to 80%.
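The matching step can be sketched as comparing the extracted parameters (A', B', C') against the stored parameters (A, B, C) and testing the matching degree against the 80% threshold. The similarity measure below is an illustrative choice; the patent does not define how the matching degree is computed:

```python
def matching_degree(extracted, stored):
    """Similarity between two feature-parameter vectors, in [0, 1].
    1.0 means identical. Illustrative measure, not specified by the patent."""
    diffs = [abs(a - b) / max(abs(a), abs(b), 1e-9) for a, b in zip(extracted, stored)]
    return 1.0 - sum(diffs) / len(diffs)

THRESHOLD = 0.80  # predetermined matching-degree threshold (80%)

stored = [1.00, 2.00, 3.00]     # pre-stored parameters A, B, C for the palm motion
extracted = [0.95, 2.10, 2.90]  # extracted parameters A', B', C'

if matching_degree(extracted, stored) >= THRESHOLD:
    print("match: activation gesture recognized")
```

With these sample values the matching degree is roughly 0.96, so the threshold is met and the gesture counts as the activation motion.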
  • After the matching degree reaches the predetermined threshold, the system may further determine whether the three-dimensional image of the body part used to activate the somatosensory interaction system persists for a predetermined time, and only acquire the instruction corresponding to the pre-stored feature parameters as the activation instruction once that time is reached. In this way, false triggering of the somatosensory interaction system can be effectively prevented.
  • For example, suppose the preset body part for activating the somatosensory interaction system is the palm.
  • A user may inadvertently make a palm motion. After the system collects and recognizes the palm motion, it further determines whether the motion persists for the predetermined time; if it does not, the system judges that this was merely an accidental operation and does not activate the somatosensory interaction system.
  • The predetermined time can be preset as needed, for example to 10 seconds or 30 seconds.
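The duration check described above amounts to a debounce: the activation gesture must persist for the predetermined time before the activation instruction is issued. A minimal sketch, assuming per-frame gesture detections and a configurable hold time (the class and method names are not from the patent):

```python
import time

class ActivationGate:
    """Requires the activation gesture to persist for `hold_s` seconds
    before activating, filtering out accidental palm motions."""
    def __init__(self, hold_s=10.0):
        self.hold_s = hold_s
        self.since = None  # time the gesture was first seen; None if absent

    def update(self, gesture_present, now=None):
        """Feed one detection result per frame; returns True once activated."""
        now = time.monotonic() if now is None else now
        if not gesture_present:
            self.since = None        # gesture lost: treat as a misoperation
            return False
        if self.since is None:
            self.since = now         # gesture first seen: start timing
        return now - self.since >= self.hold_s

gate = ActivationGate(hold_s=10.0)
gate.update(True, now=0.0)          # palm appears
gate.update(True, now=5.0)          # still held, not yet 10 s -> False
activated = gate.update(True, now=10.0)  # held for 10 s -> True
```

The same `since` timestamp could also drive the progress bar mentioned below, showing how much of the hold time has elapsed.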
  • When the system has collected and recognized the body part but the predetermined time has not yet been reached, a progress bar may be displayed on the screen to indicate the activation state of the somatosensory interaction system.
  • The progress bar can display, in real time, the activation progress: the degree of completion, the amount remaining, and the processing time that may still be required.
  • The progress bar can be displayed as a rectangular strip; when it is full, the activation condition is met and the somatosensory interaction system is activated. In this way, the user gains an intuitive understanding of the activation state, and a user who triggered the gesture accidentally can stop it in time to avoid falsely activating the somatosensory interaction system.
  • the somatosensory interaction system is activated to enter the somatosensory interaction state.
  • After activation, the user may be prompted correspondingly that the somatosensory interaction system has been activated.
  • it can be prompted by displaying the predetermined area of the screen in a highlighted state.
  • the predetermined area here may be a preset body sensing area corresponding to a plane area on the screen, such as an area of a certain area on the left side of the screen, or an area of a certain area on the right side of the screen. Of course, it can also be the entire screen.
  • the user may also be prompted by other means, such as by popping up a prompt that the somatosensory interaction system has been activated, or by a voice prompt, etc., which is not limited by the present invention.
  • an icon that moves in synchronization with the human body part is displayed on the screen.
  • the icon that moves synchronously with the human body part may be an icon similar to the human body part, for example, the human body part is a human hand, and the icon may be a hand shape icon. Of course, it can be other forms of icons, such as triangle icons, dot icons, and so on.
  • the icon on the screen follows the movement of the body part and moves correspondingly on the screen. For example, the human hand moves to the right in space, and the icon also moves to the right along the screen.
  • a three-dimensional stereoscopic image of the human body part within a predetermined spatial range is acquired by the 3D infrared sensor.
  • the 3D infrared sensor can collect a three-dimensional image of the object in the spatial position, and the acquired three-dimensional image includes the spatial position coordinates of the object and the spatial motion track.
  • The spatial motion trajectory described in the embodiments of the present invention includes both the posture of the body part and its specific motion. For example, if the user makes a fist in front of the 3D sensor and slides it within the sensing range, the 3D sensor collects a three-dimensional image of the user's hand, and feature extraction yields the hand's three-dimensional coordinates, i.e., its distance and position relative to the 3D sensor, as well as the gripping and sliding motions of the hand. Other three-dimensional stereoscopic images are processed similarly, and this embodiment does not enumerate further examples.
  • The human body part mentioned in the embodiments of the present invention may be a human hand; of course, it may also be another body part used for operation, such as a human face or a human foot.
  • S102 Perform feature extraction on a three-dimensional image of a human body part to obtain a feature parameter, where the feature parameter includes a three-dimensional coordinate of the human body part and a spatial motion track of the human body part;
  • Feature extraction is performed on the collected three-dimensional image of the body part to obtain feature parameters, which include the spatial three-dimensional coordinates of the body part and its spatial motion trajectory.
  • Through feature extraction, it is possible to recognize the specific spatial position of the body part relative to the 3D sensor and the action performed by the body part. For example, for a gripping action by a human hand, collecting the stereoscopic image of the grip and performing feature extraction makes it possible to determine the specific spatial position of the hand from the extracted parameters and to recognize the motion as a gripping action.
  • Specifically, recognizing an action requires a prior process of learning and training to establish a training database. For example, to recognize the gripping motion of a human hand, the system collects three-dimensional images of various gripping actions and learns from these different examples to obtain the specific feature parameters that identify this action. The system performs such a learning process for each distinct action, and the specific feature parameters corresponding to the various actions constitute the training database. When the system later acquires a three-dimensional stereoscopic image, features are extracted from it and the matching action is looked up in the training database as the recognition result.
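The training-database lookup described above can be sketched as a best-match search over stored per-action templates, accepting a result only when the match reaches a threshold. Everything below (the similarity measure, the template values, the action names) is an illustrative assumption, not taken from the patent:

```python
def recognize(features, training_db, threshold=0.80):
    """Return the action whose stored template best matches `features`,
    or None if no template reaches the threshold (illustrative sketch)."""
    def similarity(a, b):
        # Normalized per-component difference, mapped to [0, 1]
        diffs = [abs(x - y) / max(abs(x), abs(y), 1e-9) for x, y in zip(a, b)]
        return 1.0 - sum(diffs) / len(diffs)

    best_action, best_score = None, 0.0
    for action, template in training_db.items():
        score = similarity(features, template)
        if score > best_score:
            best_action, best_score = action, score
    return best_action if best_score >= threshold else None

# Hypothetical templates learned from training examples of each action
training_db = {"grip": [1.0, 2.0, 3.0], "palm_push": [5.0, 1.0, 0.5]}
print(recognize([0.98, 2.05, 2.95], training_db))  # -> grip
```

In practice each template would be learned from many training samples per action; a single vector per action is used here only to keep the lookup readable.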
  • S103 determining, according to the three-dimensional coordinates of the human body part, whether the human body part enters the predetermined sensing area;
  • the predetermined sensing area here may be a predetermined spatial range.
  • For example, the spatial region corresponding to a predetermined area on the left side of the screen may be set as the predetermined sensing area, and actions performed outside that sensing area receive no response.
  • Different predetermined sensing areas may also be set to correspond to different parameter adjustments, such as a predetermined spatial range on the upper side of the screen sensing and responding to brightness adjustment, and a predetermined spatial range on the lower side of the screen sensing and responding to volume adjustment, and so on.
  • Alternatively, no such setting may be made: the entire spatial range within which the 3D sensor can acquire stereoscopic images serves as the predetermined sensing area, and different operations are performed solely in response to different actions.
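Determining whether the three-dimensional coordinates of a body part fall inside a predetermined sensing area can be sketched as a point-in-box test. The axis-aligned box geometry and the coordinate values below are assumptions for illustration; the patent does not specify how a sensing area is represented:

```python
def in_sensing_area(x, y, z, area):
    """True if the body part's 3D coordinates fall inside a predetermined
    sensing area given as ((xmin, xmax), (ymin, ymax), (zmin, zmax))."""
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = area
    return xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax

# Hypothetical sensing area: the spatial region to the left of the screen,
# between 0.5 m and 1.5 m from the sensor (all values illustrative)
left_area = ((-0.8, -0.2), (-0.4, 0.4), (0.5, 1.5))

print(in_sensing_area(-0.5, 0.0, 1.0, left_area))  # hand inside the area
print(in_sensing_area(0.3, 0.0, 1.0, left_area))   # hand outside: no response
```

Several such boxes could be checked in turn to decide which parameter (brightness, volume, scrolling) a gesture should affect.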
  • When the body part enters the predetermined sensing area, a corresponding prompt is displayed on the screen.
  • For example, the predetermined sensing area may be displayed in a highlighted state to prompt the user to perform control actions within it.
  • The user may also be prompted in other ways, such as a flashing display or displayed stripes.
  • If the body part has entered the predetermined sensing area, step S104 is performed; otherwise, no response is made.
  • S104: parameter adjustment is performed according to the spatial motion trajectory of the body part in the extracted feature parameters.
  • the parameter adjustment herein may be, but is not limited to, at least one of volume adjustment, screen brightness adjustment, and screen scroll speed adjustment.
  • Performing parameter adjustment according to the spatial motion trajectory of the body part specifically includes: matching the spatial motion trajectory of the body part against pre-stored spatial motion trajectories, and, when the matching degree reaches the predetermined matching threshold, performing the parameter adjustment corresponding to the matched pre-stored trajectory.
  • For example, the action of pushing the palm upward corresponds to increasing the volume, and the action of pushing the palm downward corresponds to decreasing the volume.
  • When the body part leaves the predetermined sensing area, the somatosensory interaction system enters a locked state, and somatosensory control parameter adjustment can be resumed only after the somatosensory interaction system is activated again.
  • Different sensing areas may be preset to correspond to different parameters, with different actions in each sensing area corresponding to different adjustments. For example, the area on the left side of the screen may be set as a sensing area in which an upward palm motion turns the volume up and a downward palm motion turns it down, while the area on the right side of the screen is a sensing area in which an upward fist motion increases the screen brightness and a downward fist motion decreases it.
  • In this case, when a palm is detected in the predetermined sensing area on the left side of the screen, an upward palm motion turns the volume up and a downward palm motion turns it down.
  • When a fist is detected in the predetermined sensing area on the right side of the screen, an upward motion brightens the screen and a downward motion darkens it.
  • If a fist motion is detected in the predetermined area on the left side of the screen, or a palm is detected in the predetermined area on the right side, no response is made.
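The area-and-gesture bindings in this example can be sketched as a lookup table mapping (sensing area, gesture, direction) to a parameter adjustment, with unbound combinations ignored. The area names, step sizes, and clamping range are illustrative assumptions:

```python
# Hypothetical bindings from (sensing area, gesture, direction) to an adjustment;
# a fist in the left area or a palm in the right area has no binding -> no response.
BINDINGS = {
    ("left",  "palm", "up"):   ("volume",     +10),
    ("left",  "palm", "down"): ("volume",     -10),
    ("right", "fist", "up"):   ("brightness", +10),
    ("right", "fist", "down"): ("brightness", -10),
}

def adjust(state, area, gesture, direction):
    """Apply the adjustment bound to this area/gesture/direction, if any."""
    binding = BINDINGS.get((area, gesture, direction))
    if binding is None:
        return state                    # unbound combination: no response
    param, step = binding
    new = dict(state)
    new[param] = max(0, min(100, state[param] + step))  # clamp to 0..100
    return new

state = {"volume": 50, "brightness": 50}
state = adjust(state, "left", "palm", "up")   # volume up
state = adjust(state, "left", "fist", "up")   # fist in left area: no response
print(state)  # {'volume': 60, 'brightness': 50}
```

Keeping the bindings in a table makes it easy to reconfigure which area and gesture control which parameter, matching the "preset" behavior the text describes.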
  • Alternatively, the entire range within which the 3D sensor can acquire stereoscopic images may serve as the sensing area: as long as a body part enters the sensing area, its action is sensed, and the match with a preset action reaches the predetermined threshold, the operation corresponding to that action is performed.
  • FIG. 6 is a schematic diagram of the operation of controlling the screen scrolling speed by somatosensory control according to the embodiment of the present invention.
  • As shown in the figure, when the human hand slides to the left, the screen scrolls faster, and when the human hand slides to the right, the screen scrolling is slowed down.
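The scroll-speed behavior can be sketched as a mapping from the hand's horizontal velocity to a scroll speed, with leftward motion speeding scrolling up and rightward motion slowing it down. The gain, base speed, and limits below are illustrative assumptions, not values from the patent:

```python
def scroll_speed(dx_per_s, base=100.0, gain=400.0, max_speed=1000.0):
    """Map horizontal hand velocity (m/s; negative = leftward) to a scroll
    speed in pixels/s. Leftward motion (dx < 0) increases the speed,
    rightward motion decreases it; constants are illustrative."""
    speed = base - gain * dx_per_s
    return max(0.0, min(max_speed, speed))

print(scroll_speed(-0.5))  # fast leftward slide -> 300.0 px/s
print(scroll_speed(0.25))  # rightward slide -> 0.0 px/s (scrolling stops)
```

A real implementation would derive `dx_per_s` from successive trajectory samples of the hand; the clamp keeps the scroll speed within a usable range.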
  • FIG. 7 is a schematic diagram of the operation of adjusting the screen brightness by somatosensory control according to the embodiment of the present invention.
  • The first schematic diagram illustrates that a downward movement of the right fist controls the screen brightness to decrease.
  • The second schematic diagram shows that moving the right fist to the middle controls the screen to display at normal brightness.
  • The third schematic diagram shows that an upward movement of the right fist controls the screen brightness to increase.
  • FIG. 8 is a schematic diagram of the operation of controlling volume adjustment by somatosensory control according to the embodiment of the present invention.
  • The first schematic diagram illustrates that a downward movement of the left fist controls the volume to decrease.
  • The second schematic diagram shows that moving the left fist to the middle controls the volume at the normal level.
  • The third schematic diagram shows that an upward movement of the left fist controls the volume to increase.
  • The invention collects a three-dimensional stereoscopic image of a human body part while the somatosensory interaction system is activated, and extracts feature parameters from the three-dimensional image, the feature parameters including the three-dimensional coordinates of the body part and its spatial motion trajectory. According to the three-dimensional coordinates, it is determined whether the body part has entered the predetermined sensing area, and when it does, parameter adjustment is controlled according to the spatial motion trajectory of the body part. In this way, parameter adjustment can be controlled by sensing the spatial motion of a body part without relying on an external input device, giving the user a better experience.
  • FIG. 3 is a schematic structural diagram of a somatosensory interaction system according to an embodiment of the present invention.
  • The somatosensory interaction system of this embodiment is used to perform the somatosensory control parameter adjustment method of the embodiment shown in FIG. 1.
  • the somatosensory interaction system 100 of the present embodiment includes an acquisition module 11, a feature extraction module 12, a determination module 13, and a control module 14, wherein:
  • the collecting module 11 is configured to collect a three-dimensional stereoscopic image of a human body part in an activated state of the somatosensory interaction system
  • the somatosensory interaction system needs to be activated first.
  • FIG. 4 is a schematic structural diagram of another somatosensory interaction system according to an embodiment of the present invention.
  • The somatosensory interaction system provided in this embodiment includes the same functional modules as the somatosensory interaction system provided in the embodiment shown in FIG. 3, and further includes an activation module 15 for controlling the activation of the somatosensory interaction system.
  • Modules with the same functions as those shown in FIG. 3 have the same specific implementations; for details, refer to the related description below.
  • FIG. 5 is a schematic structural diagram of an activation module according to an embodiment of the present invention.
  • the activation module 15 includes an acquisition unit 151, a determination unit 152, a conversion unit 153, and an activation unit 154, where:
  • the collecting unit 151 is configured to collect a three-dimensional stereoscopic image
  • the acquisition unit 151 collects a three-dimensional stereoscopic image within a predetermined spatial range through a 3D sensor.
  • The acquired three-dimensional stereoscopic image includes all objects within the monitoring range of the 3D sensor lens.
  • For example, if the scene in front of the 3D sensor lens includes a table, a chair, and a person, the acquired three-dimensional image includes all of these objects.
  • The determining unit 152 is configured to process the three-dimensional stereoscopic image, determine whether it includes a three-dimensional stereoscopic image of a human body part for activating the somatosensory interaction system, and output the determination result to the conversion unit 153.
  • The determining unit 152 processes the three-dimensional stereoscopic image acquired by the 3D sensor and determines whether it includes a three-dimensional stereoscopic image of a human body part for activating the somatosensory interaction system. For example, if the preset body part for activating the somatosensory interaction system is a human hand, it recognizes whether a human hand is present in the acquired three-dimensional stereoscopic image.
  • the conversion unit 153 is configured to process the three-dimensional stereoscopic image of the human body part and convert it into an activation command.
  • Processing the three-dimensional image of the body part and converting it into an activation instruction specifically includes: extracting the feature parameters of the three-dimensional image of the body part, the feature parameters including the three-dimensional coordinates of the body part and its spatial motion trajectory; matching the extracted feature parameters against the pre-stored feature parameters for activating the somatosensory interaction system; and, when the match succeeds, acquiring the instruction corresponding to the pre-stored feature parameters as the activation instruction.
  • the three-dimensional images of the collected human body parts are extracted, and the feature parameters are obtained, and the feature parameters include position parameters and motion track parameters.
  • the positional parameter that is, the spatial position of the human body part
  • the positional parameter is represented by three-dimensional coordinates, that is, the motion trajectory of the human body part in space.
  • the parameter extraction includes the actual three-dimensional coordinates X, Y, Z of the palm of the hand to determine the specific positional relationship between the palm and the 3D sensor. It also includes the spatial motion trajectory of the palm, that is, the motion trajectory of the grip.
  • the extracted feature parameters are matched with the pre-stored feature parameters for activating the somatosensory interaction system.
  • the somatosensory interaction system is activated through a palm motion.
  • the characteristic parameters of the pre-stored palm motion include A, B, and C.
  • when a three-dimensional image of a palm is collected, the extracted feature parameters are A', B', and C'; these are matched against A, B, and C, and the system judges whether the matching degree reaches a predetermined threshold.
  • the predetermined threshold is a preset matching-degree value.
  • for example, the predetermined threshold may be set to 80%; when the matching degree reaches 80% or more, the instruction corresponding to the pre-stored feature parameters is acquired as the activation instruction.
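The patent does not specify a concrete similarity metric, so the following sketch is a hypothetical illustration of the matching step: extracted features A', B', C' are compared against pre-stored A, B, C, and the average per-feature similarity is tested against the 80% threshold.

```python
def match_features(extracted, stored, threshold=0.8):
    """Compare extracted feature parameters (A', B', C') against the
    pre-stored activation features (A, B, C) and report whether the
    overall similarity reaches the predetermined threshold.

    The per-feature similarity below (1 minus relative difference) is
    an illustrative choice; the patent leaves the metric unspecified."""
    scores = []
    for e, s in zip(extracted, stored):
        denom = max(abs(e), abs(s), 1e-9)       # avoid division by zero
        scores.append(max(0.0, 1.0 - abs(e - s) / denom))
    similarity = sum(scores) / len(scores)
    return similarity >= threshold, similarity

# A palm pose close to the stored template reaches the threshold and
# would therefore yield the activation instruction.
activated, sim = match_features([0.98, 2.05, 3.1], [1.0, 2.0, 3.0])
```

A gesture far from the template (e.g. features `[5.0, 0.1, 9.0]`) would fall well below 80% and produce no activation.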
  • when the matching degree reaches the predetermined threshold, the system may further judge whether the three-dimensional image of the human body part used to activate the somatosensory interaction system persists for a predetermined time, and only when that time is reached is the corresponding pre-stored instruction acquired as the activation instruction. In this way, false triggering of the somatosensory interaction system can be effectively prevented.
  • for example, suppose the preset body part for activating the somatosensory interaction system is the palm of the hand.
  • while the user in front of the 3D sensor is chatting with another person, a palm motion may be made inadvertently. After collecting and recognizing this motion, the system further judges whether the image containing the palm persists for the predetermined time; if it does not, the system concludes that this was merely a misoperation and does not activate the somatosensory interaction system.
  • the predetermined time here can be preset according to needs, for example, set to 10 seconds, 30 seconds, and the like.
  • while the system has collected and recognized the human body part but the predetermined time has not yet been reached, a progress bar may be displayed on the screen to indicate the activation state of the somatosensory interaction system.
  • the progress bar can show, in real time and in graphic form, the activation speed, the degree of completion, the amount of remaining work, and the processing time that may still be required.
  • as one possible implementation, the progress bar may be displayed as a rectangular strip; when the bar is full, the activation condition is met and the somatosensory interaction system is activated. In this way, the user gains an intuitive view of the activation state, and a user who triggered activation by mistake can stop the gesture in time to avoid false activation.
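The dwell-time check with a progress indicator could be sketched as follows; `gesture_present` and `draw_progress` are hypothetical hooks standing in for the gesture recognizer and the on-screen rectangular strip, and the timing values are illustrative.

```python
import time

def wait_for_activation(gesture_present, hold_seconds=10.0, poll=0.05,
                        draw_progress=lambda fraction: None):
    """Activate only if the activation gesture persists for the
    predetermined time, reporting progress in [0, 1] so a progress
    bar can be drawn on screen.

    gesture_present: hypothetical callback returning True while the
    activation gesture is still recognized in the current frame."""
    start = time.monotonic()
    while True:
        if not gesture_present():
            return False, 0.0                 # gesture dropped: misoperation, no activation
        fraction = (time.monotonic() - start) / hold_seconds
        draw_progress(min(fraction, 1.0))     # e.g. fill a rectangular strip
        if fraction >= 1.0:
            return True, 1.0                  # bar full: activation condition met
        time.sleep(poll)
```

A gesture that stops before `hold_seconds` elapses returns `(False, 0.0)`, matching the misoperation case described above.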
  • the activation unit 154 is configured to activate the somatosensory interaction system in accordance with the activation instruction.
  • the somatosensory interaction system is activated to enter the somatosensory interaction state.
  • the acquisition module 11 collects a three-dimensional stereoscopic image of a human body part entering the sensing area under the activation state of the somatosensory interaction system.
  • a three-dimensional stereoscopic image of the human body part within a predetermined spatial range is acquired by the 3D infrared sensor.
  • the 3D infrared sensor can collect a three-dimensional image of the object in the spatial position, and the acquired three-dimensional image includes the spatial position coordinates of the object and the spatial motion track.
  • the human body part mentioned in the embodiment of the present invention may be a human hand; it can also be another body part used for operation, such as the face or a foot.
  • the feature extraction module 12 is configured to perform feature extraction on the three-dimensional image of the human body part to acquire feature parameters, where the feature parameters include three-dimensional coordinates of the human body part and spatial motion trajectories of the human body part;
  • the feature extraction module 12 performs feature extraction on the collected three-dimensional image of the human body part to obtain feature parameters, the feature parameters including the spatial three-dimensional coordinates of the body part and its spatial movement track.
  • through feature extraction, the specific spatial position of the body part relative to the 3D sensor and the action it performs can be recognized. For example, for a gripping action of a human hand, by collecting the stereoscopic image of the grip and performing feature extraction, the specific spatial position of the hand can be determined from the extracted parameters and the motion can be recognized as a gripping action.
  • the determining module 13 is configured to determine whether the human body part enters the predetermined sensing area according to the three-dimensional coordinates of the human body part;
  • the judging module 13 judges, from the three-dimensional coordinates among the extracted feature parameters, whether the human body part has entered the predetermined sensing area.
  • the predetermined sensing area here may be a predetermined spatial range.
  • for example, the spatial region corresponding to a predetermined area on the left side of the screen may be set as the predetermined sensing area, and actions occurring in the spatial range outside it produce no response.
  • different predetermined sensing regions may also be set to correspond to different parameter adjustments; for example, the predetermined spatial range on the upper side of the screen may sense and respond to brightness adjustment, while the predetermined spatial range on the lower side of the screen senses and responds to volume adjustment, and so on.
  • alternatively, no such setting is made: the entire spatial range in which the 3D sensor can acquire stereoscopic images serves as the predetermined sensing area, and different operations are performed purely in response to different motions.
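A predetermined sensing area tied to 3D coordinates might be modeled as an axis-aligned box, as in this sketch; the coordinate ranges, units, and area names are illustrative assumptions, not values from the patent.

```python
# Axis-aligned boxes as predetermined sensing areas (coordinates in
# meters relative to the 3D sensor; all values are illustrative).
SENSING_AREAS = {
    "volume":     {"x": (-0.6, -0.2), "y": (-0.3, 0.3), "z": (0.5, 1.5)},  # left of screen
    "brightness": {"x": ( 0.2,  0.6), "y": (-0.3, 0.3), "z": (0.5, 1.5)},  # right of screen
}

def locate_sensing_area(coord):
    """Return which predetermined sensing area the body part's 3D
    coordinate falls in, or None (no response) if outside all areas."""
    x, y, z = coord
    for name, box in SENSING_AREAS.items():
        if (box["x"][0] <= x <= box["x"][1] and
                box["y"][0] <= y <= box["y"][1] and
                box["z"][0] <= z <= box["z"][1]):
            return name
    return None
```

A hand at `(-0.4, 0.0, 1.0)` would fall in the left-side "volume" area, while a hand between the two boxes would get no response.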
  • the control module 14 is configured to control parameter adjustment according to a spatial motion trajectory of the human body part when the human body part enters the predetermined sensing area.
  • when the human body part enters the predetermined sensing area, the control module 14 performs parameter adjustment according to the spatial motion trajectory among the extracted feature parameters.
  • the parameter adjustment herein may be, but is not limited to, at least one of volume adjustment, screen brightness adjustment, and screen scroll speed adjustment.
  • parameter adjustment according to the spatial motion trajectory of the human body part specifically comprises: matching the spatial motion trajectory against the pre-stored spatial motion trajectories, and, when the matching degree reaches the predetermined threshold, performing the parameter adjustment corresponding to the matched pre-stored trajectory.
  • for example, pushing the palm upward corresponds to turning the volume up, and pushing the palm downward corresponds to turning the volume down.
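One hypothetical way to turn the palm-up/palm-down gestures into a volume adjustment is to use the net vertical displacement of the sampled 3D coordinates; the displacement threshold and volume step below are illustrative.

```python
def adjust_volume(trajectory, volume, step=5):
    """Map a recognized palm trajectory to a volume change: an upward
    push raises the volume, a downward push lowers it (clamped 0..100).

    trajectory: sequence of (x, y, z) samples of the palm position;
    the net vertical displacement decides the direction.  The 0.1
    threshold is an illustrative dead zone, not a patent value."""
    dy = trajectory[-1][1] - trajectory[0][1]
    if dy > 0.1:                      # palm pushed upward
        return min(100, volume + step)
    if dy < -0.1:                     # palm pushed downward
        return max(0, volume - step)
    return volume                     # movement too small: no adjustment
```

The same skeleton would serve brightness or scroll speed by swapping the adjusted quantity.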
  • when the body part leaves the predetermined sensing area, the somatosensory interaction system enters a locked state, and somatosensory parameter adjustment can only resume after the system is activated again.
  • the somatosensory interaction system of the present embodiment may further include a display module 16 for displaying an icon moving synchronously with the human body part on the screen when the human body part enters the predetermined sensing area.
  • the icon that moves synchronously with the human body part may resemble that body part; for example, if the body part is a human hand, the icon may be hand-shaped. It can also take other forms, such as a triangle or a dot.
  • the icon on the screen follows the movement of the body part; for example, when the human hand moves to the right in space, the icon also moves to the right on the screen.
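Keeping the icon in step with the hand amounts to mapping the hand's spatial coordinates into screen pixels. The sketch below assumes an illustrative interaction volume and a 1920x1080 screen; none of these numbers come from the patent.

```python
def hand_to_screen(coord, screen_w=1920, screen_h=1080,
                   x_range=(-0.5, 0.5), y_range=(-0.3, 0.3)):
    """Map the hand's spatial (x, y) position into screen pixels so the
    on-screen icon moves synchronously with the hand.

    x_range/y_range: assumed interaction volume in meters; positions
    outside it are clamped to the screen edge."""
    x, y, _z = coord
    u = (x - x_range[0]) / (x_range[1] - x_range[0])
    v = 1.0 - (y - y_range[0]) / (y_range[1] - y_range[0])  # screen y grows downward
    px = round(min(max(u, 0.0), 1.0) * (screen_w - 1))
    py = round(min(max(v, 0.0), 1.0) * (screen_h - 1))
    return px, py
```

Moving the hand to the right in space increases `px`, so the icon moves right on the screen, as described above.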
  • the display module 16 is further configured to display a corresponding prompt on the screen when the body part enters the predetermined sensing area.
  • for example, the display module 16 may show the predetermined sensing area in a highlighted state to prompt the user to perform control actions within it; the area can also be indicated in other ways, such as flashing or displaying stripes.
  • different sensing regions may also be preset for different parameters, so that different actions in different regions correspond to different parameter adjustments. For example, in a sensing area on the left side of the screen, an upward palm motion turns the volume up and a downward palm motion turns it down; in a sensing area on the right side of the screen, moving a clenched fist upward brightens the screen and moving it downward dims it.
  • thus, when a palm is detected in the predetermined sensing area on the left side of the screen, an upward movement turns the volume up and a downward movement turns it down.
  • when a fist is detected in the predetermined area on the right side of the screen, an upward movement brightens the screen and a downward movement darkens it.
  • if a clenching action is detected in the predetermined area on the left side of the screen, or a palm is detected in the predetermined area on the right side, no response is made.
  • when no specific region is set, the entire range in which the 3D sensor can acquire stereoscopic images serves as the sensing area; as long as a human body part enters the sensing area, performs an action, and the match with a preset action reaches the predetermined threshold, the operation corresponding to that action is executed.
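The region-plus-gesture rules in the example above can be captured as a small lookup table; the keys and action names below are illustrative, and any combination absent from the table (such as a fist in the left-side area) gets no response.

```python
# Illustrative mapping: (sensing area, hand shape, direction) -> action.
GESTURE_MAP = {
    ("left",  "palm", "up"):   "volume_up",
    ("left",  "palm", "down"): "volume_down",
    ("right", "fist", "up"):   "brightness_up",
    ("right", "fist", "down"): "brightness_down",
}

def dispatch(area, hand_shape, direction):
    """Return the control action for a recognized gesture, or None when
    the gesture does not match its sensing area (no response)."""
    return GESTURE_MAP.get((area, hand_shape, direction))
```

Extending the system to new parameters (e.g. scroll speed) would just add entries to the table rather than new control logic.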
  • the embodiment of the present invention further provides an electronic device, where the electronic device includes the somatosensory interaction system described in the foregoing embodiment.
  • the electronic device can be, but is not limited to, a smart TV, a smart phone, a tablet computer, a notebook computer, and the like.
  • distinct from the prior art, the somatosensory interaction system collects a three-dimensional stereoscopic image of a human body part while the system is activated and extracts feature parameters from it, the feature parameters including the three-dimensional coordinates of the body part and its spatial motion trajectory. It judges, from the three-dimensional coordinates, whether the body part has entered the predetermined sensing area, and when it has, controls parameter adjustment according to the body part's spatial motion trajectory. In this way, parameter adjustment can be controlled by sensing the spatial actions of a human body part, without relying on an external input device, giving the user a better experience.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of modules or units is only a logical functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause a computer device (which may be a personal computer, server, or network device, etc.) or a processor to perform all or part of the steps of the methods described in various embodiments of the present invention.
  • the foregoing storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium that can store program code.

Abstract

Disclosed are a motion sensing screen scrolling control method, a motion sensing interaction system and an electronic device. The motion sensing screen scrolling control method comprises: collecting a three-dimensional stereo image of a human body part in a motion sensing interaction system activation state; extracting a feature of the three-dimensional stereo image of the human body part to acquire a feature parameter, wherein the feature parameter comprises a three-dimensional coordinate of the human body part and a motion trajectory of the human body part; judging whether the human body part enters a predetermined sensing region according to the three-dimensional coordinate of the human body part; and when the human body part enters the predetermined sensing region, controlling the scrolling of a screen on the basis of the motion trajectory of the human body part and according to a predetermined rule. By means of the above-mentioned method, the present invention can control screen scrolling by sensing a spatial action of a human body part without the need for relying on an external input device, providing a user with a better use experience.

Description

Motion sensing control parameter adjustment method, motion sensing interaction system and electronic device
[Technical Field]
The present invention relates to the field of human-computer interaction, and in particular to a method for somatosensory control of screen scrolling, a somatosensory interaction system and an electronic device.
[Background Art]
Human-computer interaction technology refers to technology that realizes dialogue between humans and machines in an efficient manner through input and output devices. Existing human-computer interaction is usually carried out through external devices such as a mouse, keyboard, touch screen or handle, with the machine system then responding accordingly. For example, when volume adjustment is required, it is achieved either by dragging with the mouse or by sliding a body part such as a finger across the touch screen.
Existing methods of control parameter adjustment all require direct contact with an input device such as a mouse or touch screen to complete the adjustment operation, so the user's adjustment operations must depend on external devices. This constrains the user's behavior and makes the interaction feel unnatural and unrealistic.
【发明内容】  [Summary of the Invention]
本发明主要解决的技术问题是提供一种体感控制参数调整的方法、体感交互系统及电子设备,能够不需要依赖外部输入设备,通过感应人体部位的动作,就能控制进行参数调整。The technical problem to be solved by the present invention is to provide a method for adjusting body shape control parameters, a somatosensory interaction system and an electronic device, which can control parameter adjustment by sensing the action of a human body part without relying on an external input device.
为解决上述技术问题,本发明采用的一个技术方案是:提供一种体感控制参数调整的方法,所述方法包括:在体感交互系统激活状态下,采集人体部位的三维立体图像;将所述人体部位的三维立体图像进行特征提取获取特征参数,所述特征参数包括所述人体部位的三维坐标以及所述人体部位的空间运动轨迹;根据所述人体部位的三维坐标,判断所述人体部位是否进入预定感应区域;当所述人体部位进入预定感应区域时,控制根据所述人体部位的空间运动轨迹进行参数调整。In order to solve the above technical problem, a technical solution adopted by the present invention is to provide a method for adjusting a body feeling control parameter, the method comprising: collecting a three-dimensional stereoscopic image of a human body part in an activated state of the somatosensory interaction system; Feature extraction of the three-dimensional image of the part to obtain feature parameters, the feature parameter includes three-dimensional coordinates of the human body part and a spatial motion track of the human body part; and determining whether the human body part enters according to the three-dimensional coordinates of the human body part The sensing area is predetermined; when the body part enters the predetermined sensing area, the parameter adjustment is performed according to the spatial motion trajectory of the body part.
其中,所述参数包括音量、亮度、屏幕滚动速度的至少一种。The parameter includes at least one of volume, brightness, and screen scrolling speed.
其中,所述方法还包括:当所述人体部位进入预定感应区域时,在屏幕上显示与所述人体部位同步移动的图标。Wherein, the method further comprises: when the human body part enters the predetermined sensing area, displaying an icon moving synchronously with the human body part on the screen.
其中,所述方法还包括:当所述人体部位进入预定感应区域时,屏幕显示相应的提示。Wherein, the method further comprises: when the human body part enters the predetermined sensing area, the screen displays a corresponding prompt.
其中,所述人体部位为人手。Wherein, the human body part is a human hand.
为解决上述技术问题,本发明采用的另一个技术方案是:提供一种体感交互系统,所述系统包括采集模块、特征提取模块、判断模块以及控制模块,其中:所述采集模块用于在体感交互系统激活状态下,采集人体部位的三维立体图像;所述特征提取模块用于将所述人体部位的三维立体图像进行特征提取获取特征参数,所述特征参数包括所述人体部位的三维坐标以及所述人体部位的空间运动轨迹;所述判断模块用于根据所述人体部位的三维坐标,判断所述人体部位是否进入预定感应区域;所述控制模块用于,当所述人体部位进入预定感应区域时,控制根据所述人体部位的空间运动轨迹进行参数调整。In order to solve the above technical problem, another technical solution adopted by the present invention is to provide a somatosensory interaction system, the system includes an acquisition module, a feature extraction module, a judgment module, and a control module, wherein: the acquisition module is used for the sense of body Acquiring a three-dimensional image of a human body part in an active state; the feature extraction module is configured to perform feature extraction on the three-dimensional image of the human body part to obtain a feature parameter, where the feature parameter includes a three-dimensional coordinate of the human body part and a spatial motion trajectory of the human body part; the determining module is configured to determine whether the human body part enters a predetermined sensing area according to the three-dimensional coordinates of the human body part; and the control module is configured to: when the human body part enters a predetermined sensing In the region, the control performs parameter adjustment according to the spatial motion trajectory of the human body part.
The parameters include at least one of volume, brightness and screen scrolling speed.
The system further comprises a display module configured to display, on the screen, an icon that moves synchronously with the body part when the body part enters the predetermined sensing area.
The display module is further configured to display a corresponding prompt on the screen when the body part enters the predetermined sensing area.
To solve the above technical problem, yet another technical solution provided by the present invention is an electronic device comprising a somatosensory interaction system, the somatosensory interaction system comprising an acquisition module, a feature extraction module, a judging module and a control module, wherein: the acquisition module is configured to collect a three-dimensional stereoscopic image of a human body part while the somatosensory interaction system is in an activated state; the feature extraction module is configured to perform feature extraction on the three-dimensional image of the body part to obtain feature parameters, the feature parameters including the three-dimensional coordinates of the body part and its spatial motion trajectory; the judging module is configured to judge, according to the three-dimensional coordinates of the body part, whether the body part has entered a predetermined sensing area; and the control module is configured to perform parameter adjustment according to the spatial motion trajectory of the body part when the body part enters the predetermined sensing area.
The parameters include at least one of volume, brightness and screen scrolling speed.
The system further comprises a display module configured to display, on the screen, an icon that moves synchronously with the body part when the body part enters the predetermined sensing area.
The display module is further configured to display a corresponding prompt on the screen when the body part enters the predetermined sensing area.
The beneficial effects of the present invention are as follows. Distinct from the prior art, the present invention collects a three-dimensional stereoscopic image of a human body part while the somatosensory interaction system is activated and performs feature extraction on it to obtain feature parameters, the feature parameters including the three-dimensional coordinates of the body part and its spatial motion trajectory. It judges, from the three-dimensional coordinates, whether the body part has entered a predetermined sensing area and, when it has, controls parameter adjustment according to the body part's spatial motion trajectory. In this way, parameter adjustment can be controlled by sensing the spatial actions of a human body part, without relying on an external input device, giving the user a better experience.
[Brief Description of the Drawings]
Fig. 1 is a flowchart of the method for adjusting somatosensory control parameters provided by an embodiment of the present invention;
Fig. 2 is a flowchart of activating the somatosensory interaction system provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a somatosensory interaction system provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of another somatosensory interaction system provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the activation module in the somatosensory interaction system provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of adjusting screen scrolling speed through somatosensory control provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of adjusting screen brightness through somatosensory control provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of adjusting volume through somatosensory control provided by an embodiment of the present invention.
[Detailed Description]
Please refer to Fig. 1, which shows a method for adjusting somatosensory control parameters provided by an embodiment of the present invention. As shown in the figure, the method of this embodiment comprises:
S101: collecting a three-dimensional stereoscopic image of a human body part while the somatosensory interaction system is in an activated state;
In the embodiment of the present invention, the somatosensory interaction system needs to be activated before somatosensory interaction can take place.
Referring further to Fig. 2, which is a flowchart of activating the somatosensory interaction system provided by an embodiment of the present invention, activating the system comprises the following steps:
S201: collecting a three-dimensional stereoscopic image;
A three-dimensional stereoscopic image within a predetermined spatial range is acquired by a 3D sensor. The acquired image includes all objects within the monitoring range of the 3D sensor lens; for example, if a table, a chair and a person are in front of the lens, the acquired three-dimensional image includes all of these objects.
S202: processing the three-dimensional stereoscopic image to judge whether it contains a three-dimensional image of a human body part used to activate the somatosensory interaction system;
The three-dimensional image acquired by the 3D sensor is processed to judge whether it contains a three-dimensional image of a human body part used to activate the somatosensory interaction system. For example, if the preset body part for activation is a human hand, the system checks whether the acquired image includes a human hand. If it does, step S203 is executed.
S203: processing the three-dimensional image of the human body part and converting it into an activation instruction;
Processing the three-dimensional image of the body part and converting it into an activation instruction specifically comprises: performing feature extraction on the image to obtain feature parameters, the feature parameters including the three-dimensional coordinates of the body part and its spatial motion trajectory; matching the feature parameters against the pre-stored feature parameters for activating the somatosensory interaction system; and, when the matching degree reaches a predetermined threshold, acquiring the instruction corresponding to the pre-stored feature parameters as the activation instruction.
将采集的人体部位的三维立体图像进行特征提取,获取得到特征参数,特征参数包括位置参数以及动作轨迹参数。位置参数即人体部位所处的空间位置,用三维坐标表示,运动轨迹即人体部位在空间上的运动轨迹。比如一个手掌抓握的动作,其参数提取即包括手掌当前所处的实际三维坐标X、Y、Z的具体数值,以确定手掌与3D传感器的具体位置关系。还包括手掌的空间运动轨迹即抓握的动作轨迹。The three-dimensional images of the collected human body parts are extracted, and the feature parameters are obtained, and the feature parameters include position parameters and motion track parameters. The positional parameter, that is, the spatial position of the human body part, is represented by three-dimensional coordinates, that is, the motion trajectory of the human body part in space. For example, a palm gripping action, the parameter extraction includes the actual three-dimensional coordinates X, Y, Z of the palm of the hand to determine the specific positional relationship between the palm and the 3D sensor. It also includes the spatial motion trajectory of the palm, that is, the motion trajectory of the grip.
在提取得到特征参数以后,将提取的特征参数与预存的用于激活体感交互系统的特征参数进行匹配。After the feature parameters are extracted, the extracted feature parameters are matched with the pre-stored feature parameters for activating the somatosensory interaction system.
比如通过一个手掌的动作来激活体感交互系统,预存的手掌的动作的特征参数包括A、B、C,当前采集到一个手掌的三维立体图像,提取得到的特征参数包括A'、B'、C',将A'、B'、C'与A、B、C进行匹配,并判断匹配度是否达到预定阈值。For example, through a palm motion, the somatosensory interaction system is activated. The characteristic parameters of the pre-stored palm motion include A, B, and C. Currently, a three-dimensional image of a palm is collected, and the extracted feature parameters include A', B', and C. ', A', B', C' are matched with A, B, C, and it is judged whether the matching degree reaches a predetermined threshold.
当提取的特征参数与预存的用于激活体感交互系统的特征参数的匹配度达到预定阈值时,获取与预存的特征参数对应的指令,以作为激活指令。预定阈值是预先设置的匹配程度的值,比如可以设置预定阈值为80%,当匹配度达到80%或以上,即获取与预存的特征参数对应的指令,以作为激活指令。When the extracted feature parameter matches the pre-stored feature parameter for activating the somatosensory interaction system to reach a predetermined threshold, an instruction corresponding to the pre-stored feature parameter is acquired as an activation command. The predetermined threshold is a value of a preset matching degree. For example, the predetermined threshold may be set to 80%. When the matching degree reaches 80% or more, an instruction corresponding to the pre-stored characteristic parameter is acquired as an activation instruction.
作为一种优选的实现方案,在提取的特征参数与预存的用于激活体感交互系统的特征参数的匹配度达到预定阈值时,系统可以进一步判断所述三维立体图像中包含用于激活体感交互系统的人体部位的三维立体图像所持续的时间是否达到预定时间。在持续的时间达到预定时间时,才获取与预存的特征参数对应的指令,以作为激活指令。通过这样的方式,可以有效防止误触发激活体感交互系统。As a preferred implementation, when the matching degree of the extracted feature parameter and the pre-stored feature parameter for activating the somatosensory interaction system reaches a predetermined threshold, the system may further determine that the three-dimensional stereoscopic image includes a system for activating the somatosensory interaction system. Whether the three-dimensional image of the human body part lasts for a predetermined time. When the predetermined time reaches the predetermined time, the instruction corresponding to the pre-stored feature parameter is acquired as the activation command. In this way, it is possible to effectively prevent the false trigger from activating the somatosensory interaction system.
比如预设的用于激活体感交互系统的人体部位为手掌。当3D体感器前面用户当前正与另一个人在聊天,过程中可能不经意的会做出一个手掌的动作,系统在采集并识别出这个手掌动作之后,进一步判断立体图像中包含该手掌是否持续预定时间,如果没有持续达到预定时间,可以判断到这只是一个误操作,则不会激活体感交互系统。其中,这里的预定时间可以根据需要预先设定,比如设定为10秒、30秒等等。For example, the preset human body part for activating the somatosensory interaction system is the palm. If the user in front of the 3D sensor is currently chatting with another person, he may inadvertently make a palm gesture during the conversation. After collecting and recognizing this palm gesture, the system further judges whether the stereoscopic image contains the palm for the predetermined time; if the predetermined time is not reached, it can be judged that this was merely a misoperation, and the somatosensory interaction system is not activated. The predetermined time here can be preset as needed, for example, 10 seconds, 30 seconds, and so on.
作为一种更进一步的优选方案,当系统采集并识别到人体部位且持续未达到预定时间之前,可以在屏幕上显示一个进度条,以提示体感交互系统激活状态。进度条可以实时的,以图片形式显示系统激活的速度,完成度,剩余未完成任务量的大小,和可能需要的处理时间。作为一种可能的实现,进度条可以长方形条状显示。当进度条满时,即表示达到激活体感交互系统条件,激活体感交互系统。通过这样的方式,能够让用户对体感交互系统激活状态有直观的了解,也能够让误操作的用户及时停止手势动作以避免误触发激活体感交互系统。As a still further preferred solution, when the system has collected and recognized the human body part but the predetermined time has not yet elapsed, a progress bar may be displayed on the screen to indicate the activation state of the somatosensory interaction system. The progress bar can graphically display, in real time, the activation speed, the degree of completion, the amount of work remaining, and the processing time that may be required. As one possible implementation, the progress bar may be displayed as a rectangular strip. When the progress bar is full, the activation condition is met and the somatosensory interaction system is activated. In this way, the user gains an intuitive understanding of the activation state of the somatosensory interaction system, and a user who has made a misoperation can stop the gesture in time to avoid falsely triggering activation.
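The hold-for-a-predetermined-time check and the progress-bar feedback described above can be sketched as a small helper. This is an illustrative sketch; the 3-second hold time and the per-frame update interface are assumptions, not values fixed by the patent.

```python
import time

class ActivationTimer:
    """Tracks how long the activating body part has been continuously
    recognized, and reports progress for an on-screen progress bar."""

    def __init__(self, hold_seconds=3.0):
        self.hold = hold_seconds
        self.start = None

    def update(self, recognized, now=None):
        """Feed one recognition result per frame; returns True once the
        gesture has been held continuously for the predetermined time."""
        now = time.monotonic() if now is None else now
        if not recognized:
            self.start = None          # gesture lost: treat as a misoperation
            return False
        if self.start is None:
            self.start = now           # gesture first seen: start timing
        return now - self.start >= self.hold

    def progress(self, now=None):
        """Fraction in [0, 1] for driving the rectangular progress bar."""
        if self.start is None:
            return 0.0
        now = time.monotonic() if now is None else now
        return min(1.0, (now - self.start) / self.hold)
```

A rendering loop would call `update()` once per captured frame and draw the bar from `progress()`; releasing the gesture before the bar fills resets the timer, which is exactly the misoperation-rejection behavior described above.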
S204:根据激活指令激活体感交互系统。S204: Activate the somatosensory interaction system according to the activation instruction.
根据获取的激活指令,激活体感交互系统,以进入体感交互状态。According to the acquired activation instruction, the somatosensory interaction system is activated to enter the somatosensory interaction state.
其中,在激活体感交互系统之后,可以给用户相应的体感交互系统已被激活的提示。其中,可以通过将屏幕预定区域以高亮状态进行显示作为提示。这里的预定区域可以是预设的体感感应区域对应到屏幕上的平面区域,比如屏幕左侧的一定面积的区域,或屏幕右侧一定面积的区域等。当然,也可以是整个屏幕。After the somatosensory interaction system is activated, the user may be given a prompt that the somatosensory interaction system has been activated. As one option, the prompt may be given by displaying a predetermined area of the screen in a highlighted state. The predetermined area here may be the plane area on the screen corresponding to the preset body sensing area, such as an area of a certain size on the left side of the screen, or an area of a certain size on the right side of the screen. Of course, it may also be the entire screen.
当然,也可以通过其他的方式给用户提示,比如通过弹出体感交互系统已激活的提示,或者通过语音提示等等,本发明对此不作限定。Of course, the user may also be prompted in other ways, such as by popping up a message that the somatosensory interaction system has been activated, or by a voice prompt, which is not limited by the present invention.
另外,在体感交互系统被激活后,在屏幕上显示与人体部位同步移动的图标。其中,与人体部位同步移动的图标可以是跟所述人体部位相似的图标,比如人体部位是人手,该图标可以是一个手形状的图标。当然,也可以是其他形式的图标,比如三角图标,圆点图标等。该屏幕上的图标跟随人体部位的移动而在屏幕上对应移动。比如人手在空间上向右移动,图标也跟随在屏幕上向右移动。In addition, after the somatosensory interaction system is activated, an icon that moves in synchronization with the human body part is displayed on the screen. The icon may resemble the human body part: for example, if the human body part is a human hand, the icon may be a hand-shaped icon. Of course, it may also take other forms, such as a triangle icon or a dot icon. The icon follows the movement of the human body part and moves correspondingly on the screen; for example, when the hand moves to the right in space, the icon also moves to the right on the screen.
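Keeping the icon synchronized with the hand amounts to mapping the sensor's 3D coordinates onto screen pixels. The sketch below illustrates one plausible mapping; the sensor ranges, the 1920x1080 resolution, and the edge-clamping behavior are all assumptions for demonstration.

```python
def hand_to_screen(x, y, sensor_range=((-0.5, 0.5), (-0.3, 0.3)),
                   screen=(1920, 1080)):
    """Map a hand position (metres, sensor coordinates) to a screen pixel
    so the on-screen icon follows the hand."""
    (x0, x1), (y0, y1) = sensor_range
    u = (x - x0) / (x1 - x0)          # normalise to [0, 1]
    v = (y - y0) / (y1 - y0)
    u = min(1.0, max(0.0, u))         # clamp: icon stops at the screen edge
    v = min(1.0, max(0.0, v))
    # screen y grows downward, sensor y grows upward, hence the (1 - v)
    return round(u * (screen[0] - 1)), round((1 - v) * (screen[1] - 1))

print(hand_to_screen(0.0, 0.0))  # hand centred -> icon near screen centre
```

Moving the hand to the right increases `x`, which increases the returned pixel column, matching the "hand moves right, icon moves right" behavior in the text.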
在体感交互系统激活状态下,通过3D红外传感器采集预定空间范围内人体部位的三维立体图像。3D红外传感器能够采集空间位置上物件的三维立体图像,所采集的三维立体图像包括物件所处的空间位置坐标以及空间运动轨迹。In the activated state of the somatosensory interaction system, a three-dimensional stereoscopic image of the human body part within a predetermined spatial range is acquired by the 3D infrared sensor. The 3D infrared sensor can collect a three-dimensional image of the object in the spatial position, and the acquired three-dimensional image includes the spatial position coordinates of the object and the spatial motion track.
本发明实施例所述的空间运动轨迹,包括人体部件的姿势以及人体部件的具体动作。比如用户在3D体感器前面做一个握拳的姿势并在空间范围内滑动,那么3D体感器采集该用户手部的三维立体图像,对该手部的三维立体图像进行特征提取,即获取到该手部距离3D传感器的三维坐标,以及该手部的握拳姿势和滑动的动作。其他三维立体图像的处理与此类似,本实施例不一一举例进行说明。The spatial motion trajectory described in the embodiments of the present invention includes the posture of the human body part and its specific action. For example, if the user makes a fist in front of the 3D sensor and slides it within the spatial range, the 3D sensor collects a three-dimensional image of the user's hand and performs feature extraction on it, obtaining the three-dimensional coordinates of the hand relative to the 3D sensor as well as the fist posture and the sliding action of the hand. The processing of other three-dimensional images is similar and is not described one by one in this embodiment.
其中,本发明实施例所提到的人体部位可以是人手。当然也可以是其他的用于操作的人体部位比如人脸、人脚等等。The human body part mentioned in the embodiment of the present invention may be a human hand. Of course, it can also be other human body parts for operation such as a human face, a human foot, and the like.
S102:将人体部位的三维立体图像进行特征提取获取特征参数,特征参数包括人体部位的三维坐标以及人体部位的空间运动轨迹;S102: Perform feature extraction on a three-dimensional image of a human body part to obtain a feature parameter, where the feature parameter includes a three-dimensional coordinate of the human body part and a spatial motion track of the human body part;
将所采集的人体部位的三维立体图像进行特征提取,以获取该采集的三维立体空间的特征参数,其中,这些特征参数包括该人体部位所处的空间三维坐标以及人体部位的空间运动轨迹。通过特征提取,能够识别人体部位距离3D传感器的具体空间位置以及人体部位的动作。比如人手做的一个抓握的动作,通过采集该抓握的立体图像,并通过特征提取,就能根据该特征提取的参数确定这个人手的具体空间位置并识别出该动作为一个抓握的动作。Feature extraction is performed on the collected three-dimensional image of the human body part to obtain the characteristic parameters of the acquired three-dimensional space, where these parameters include the spatial three-dimensional coordinates of the human body part and its spatial motion trajectory. Through feature extraction, the specific spatial position of the human body part relative to the 3D sensor and the action of the human body part can be recognized. For example, for a grasping action made by a human hand, by collecting a stereoscopic image of the grasp and performing feature extraction, the specific spatial position of the hand can be determined from the extracted parameters and the action can be recognized as a grasp.
作为一种可能的实现方式,本发明对动作的识别之前,包括一个学习训练以建立一个训练数据库的过程。比如为识别一个人手抓握的动作,系统会采集各种不同的抓握动作的三维立体图像,对这些不同的抓握动作进行学习,以获取用于识别这个具体动作的具体特征参数。针对每个不同的动作,系统都会做这么一个学习训练过程,各种不同具体动作对应的具体特征参数,构成训练数据库。当系统获取到一个三维立体图像时,就会将该立体图像进行特征提取,到训练数据库中找到与之匹配的具体动作,以作为识别结果。As a possible implementation, before recognizing actions, the present invention includes a learning and training process to establish a training database. For example, to recognize a grasping action of a human hand, the system collects three-dimensional images of various grasping actions and learns from them to obtain the specific characteristic parameters used to identify this action. The system performs such a learning process for each distinct action, and the characteristic parameters corresponding to the various actions constitute the training database. When the system acquires a three-dimensional image, it performs feature extraction on it and finds the matching action in the training database as the recognition result.
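The look-up into the training database can be sketched as a nearest-template search. This is a deliberately simplified illustration: the inverse-distance similarity, the acceptance threshold, and the example feature values are assumptions standing in for whatever the trained model actually stores.

```python
def recognize(extracted, training_db, threshold=0.8):
    """Find the best-matching action in the training database.
    training_db maps action names to lists of template feature vectors."""
    best_name, best_score = None, 0.0
    for name, templates in training_db.items():
        for tpl in templates:
            # inverse-distance similarity; a placeholder for the real measure
            d = sum((a - b) ** 2 for a, b in zip(extracted, tpl)) ** 0.5
            score = 1.0 / (1.0 + d)
            if score > best_score:
                best_name, best_score = name, score
    return best_name if best_score >= threshold else None

db = {"grasp": [[0.9, 0.1, 0.2]], "swipe_left": [[0.1, 0.8, 0.7]]}
print(recognize([0.88, 0.12, 0.21], db))  # close to a grasp template -> grasp
```

Returning `None` when no template scores above the threshold corresponds to the system simply not responding to an unrecognized action.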
S103:根据人体部位的三维坐标,判断人体部位是否进入预定感应区域;S103: determining, according to the three-dimensional coordinates of the human body part, whether the human body part enters the predetermined sensing area;
从提取的特征参数中的人体部位的三维坐标,判断人体部位是否进入预定感应区域。其中,这里的预定感应区域可以是预先设置的一定空间范围。比如设置屏幕左侧预定面积对应的空间区域为预定感应区域,在该预定感应区域之外的空间范围的动作不做响应。或者当需要通过不同的体感动作执行不同的参数调整时,可以分别设置不同的预定感应区域对应不同的参数调整,比如设置屏幕上侧预定空间范围对应感应并响应进行亮度调节,屏幕下侧预定空间范围对应感应并响应音量调节等等。当然,也可以不做设定,这种情况下,3D体感器所能采集到立体图像的空间范围都为预定感应区域,只根据不同的动作响应进行不同的操作。From the three-dimensional coordinates of the human body part in the extracted characteristic parameters, it is determined whether the human body part enters the predetermined sensing area. The predetermined sensing area here may be a preset spatial range. For example, the spatial region corresponding to a predetermined area on the left side of the screen may be set as the predetermined sensing area, and actions outside it receive no response. Alternatively, when different parameter adjustments are to be performed through different somatosensory actions, different predetermined sensing areas may be set to correspond to different adjustments; for example, the spatial range corresponding to the upper side of the screen may sense and respond with brightness adjustment, and the spatial range corresponding to the lower side of the screen with volume adjustment. Of course, no such setting need be made; in that case, the entire spatial range within which the 3D sensor can collect stereoscopic images is the predetermined sensing area, and different operations are performed purely according to different actions.
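The region test above is an axis-aligned bounds check on the extracted 3D coordinate. The sketch below illustrates it; the region names and the metre-valued bounds are illustrative assumptions, since the patent leaves the concrete geometry to configuration.

```python
# Illustrative sensing regions (metres, sensor coordinates).
REGIONS = {
    "brightness": {"x": (-0.6, 0.0), "y": (0.0, 0.4), "z": (0.5, 1.5)},
    "volume":     {"x": (0.0, 0.6),  "y": (-0.4, 0.0), "z": (0.5, 1.5)},
}

def region_of(point):
    """Return the name of the predetermined sensing region containing the
    3D coordinate, or None if the body part is outside every region."""
    x, y, z = point
    for name, b in REGIONS.items():
        if (b["x"][0] <= x <= b["x"][1]
                and b["y"][0] <= y <= b["y"][1]
                and b["z"][0] <= z <= b["z"][1]):
            return name
    return None

print(region_of((-0.3, 0.2, 1.0)))  # falls inside the "brightness" region
```

A `None` result corresponds to the no-response case in the text; the single-region or whole-field-of-view configurations are just special cases of this table.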
其中,当人体部位进入预定感应区域时,屏幕上显示相应的提示。其中,作为一种可能的提示方式,可以将预定感应区域以高亮状态进行显示以提示用户可以在该预定感应区域内执行控制动作。当然,预定感应区域也可以以别的方式提示用户,比如突出显示,闪动显示或者显示条纹等等。When the human body part enters the predetermined sensing area, a corresponding prompt is displayed on the screen. As one possible prompting manner, the predetermined sensing area may be displayed in a highlighted state to indicate that the user can perform control actions within it. Of course, the predetermined sensing area may also prompt the user in other ways, such as by being emphasized, flashing, or displaying stripes.
当特征参数中的人体部位的三维坐标落入预定感应区域范围,则执行步骤S104。否则,不进行响应。When the three-dimensional coordinates of the human body part in the feature parameter fall within the predetermined sensing area range, step S104 is performed. Otherwise, no response is made.
S104:控制根据人体部位的空间运动轨迹进行参数调整;S104: Control parameter adjustment according to a spatial motion trajectory of the human body part;
当人体部位进入预定感应区域时,根据特征提取参数中人体部位的空间运动轨迹进行参数调整。这里的参数调整可以但不限于是音量调节、屏幕亮度调节、屏幕滚动速度调节中的至少一种。When the human body part enters the predetermined sensing area, the parameter adjustment is performed according to the spatial motion trajectory of the human body part in the feature extraction parameter. The parameter adjustment herein may be, but is not limited to, at least one of volume adjustment, screen brightness adjustment, and screen scroll speed adjustment.
其中,根据人体部位的空间运动轨迹进行参数调整具体包括:将人体部位的空间运动轨迹与预存的空间运动轨迹进行匹配,当匹配达到预定匹配阈值时,控制执行与预存的空间运动轨迹对应的参数调整。The parameter adjustment according to the spatial motion trajectory of the human body part specifically includes: matching the spatial motion trajectory of the human body part with pre-stored spatial motion trajectories, and when the matching reaches a predetermined matching threshold, performing the parameter adjustment corresponding to the matched pre-stored trajectory.
比如预先设置手掌向上推动的动作对应调高音量,手掌向下推动的动作对应调低音量,当采集到手掌向上推动的动作时即控制将音量调高,采集到手掌向下推动的动作即控制将音量调低。For example, it may be preset that pushing the palm upward corresponds to turning the volume up and pushing the palm downward corresponds to turning the volume down; when an upward palm push is collected, the volume is turned up, and when a downward palm push is collected, the volume is turned down.
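The gesture-to-adjustment binding above is essentially a dispatch table from recognized trajectories to parameter deltas. The sketch below shows one way to express it; the gesture names, the +/-5 step size, and the 0-100 range are assumptions for demonstration, since the patent leaves these bindings configurable.

```python
# Illustrative mapping of recognized trajectories to parameter adjustments.
GESTURE_MAP = {
    "palm_up":   ("volume", +5),
    "palm_down": ("volume", -5),
}

def adjust(state, gesture):
    """Apply the parameter adjustment bound to a recognized gesture,
    clamping the result to the valid 0-100 range."""
    if gesture not in GESTURE_MAP:
        return state                      # unbound gesture: no response
    param, delta = GESTURE_MAP[gesture]
    state[param] = max(0, min(100, state[param] + delta))
    return state

s = {"volume": 50}
adjust(s, "palm_up")
print(s)  # {'volume': 55}
```

Region-specific behavior, such as fist gestures adjusting brightness only on the right side of the screen, can be modelled by keeping one such table per sensing region.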
当人体部位离开预定感应区域时,体感交互系统进入锁定状态,只有再次激活体感交互系统后才能进入体感控制参数调整的交互。When the body part leaves the predetermined sensing area, the somatosensory interaction system enters the locked state, and the interaction of the somatosensory control parameter adjustment can only be entered after the somatosensory interaction system is activated again.
本发明实施例中,还可以通过预设不同的感应区域,并设置不同感应区不同的动作对应不同的参数调整。比如设置屏幕左侧预定区域感应区手掌向上运动控制音量调大,手掌向下运动控制音量调小,屏幕右侧预定感应区握拳动作向上移动屏幕亮度调大,握拳动作向下移动控制屏幕亮度调小。当在屏幕左侧预定感应区检测到手掌并且手掌向上运动时将音量调大,手掌向下运动控制音量调小。当在屏幕右侧的预定感应区检测到握拳动作向上移动控制屏幕变亮,当检测到握拳动作向下移动控制屏幕变暗。当然,在这样的设置条件下,如果在屏幕左侧预定区域检测到握拳的动作,或在屏幕右侧预定区域检测到手掌时,不进行响应。In the embodiment of the present invention, different sensing areas may also be preset, with different actions in different areas corresponding to different parameter adjustments. For example, in the predetermined sensing area on the left side of the screen, upward palm movement turns the volume up and downward palm movement turns it down; in the predetermined sensing area on the right side of the screen, moving a fist upward increases the screen brightness and moving a fist downward decreases it. When a palm is detected in the left-side sensing area and moves upward, the volume is turned up; when the palm moves downward, the volume is turned down. When a fist moving upward is detected in the right-side sensing area, the screen is brightened; when a fist moving downward is detected, the screen is dimmed. Of course, under such settings, if a fist is detected in the predetermined area on the left side of the screen, or a palm in the predetermined area on the right side, no response is made.
在不设置预定区域对应不同的操作时,可以只设置不同的动作对应不同的操作,3D体感器所能采集到立体图像的区域范围都是感应区,只要人体部位进入感应区并感应到动作,并且与预设的动作匹配达到预定阈值,即执行与动作对应的操作。When no predetermined areas are set to correspond to different operations, different actions alone may be set to correspond to different operations. The entire range within which the 3D sensor can collect stereoscopic images is then the sensing area, and as long as the human body part enters the sensing area, an action is sensed, and its match with a preset action reaches the predetermined threshold, the operation corresponding to the action is performed.
请参阅图6,图6是本发明实施例提供的通过体感控制屏幕滚动速度的操作示意图,如图所示,当人手往左边滑动时控制屏幕滚动加快,当人手往右边滑动时控制屏幕滚动速度放慢。当然,这只是示例。也可以设置人手往右边滑动时控制屏幕滚动加快,人手往左边滑动时控制屏幕滚动速度放慢。Please refer to FIG. 6, which is a schematic diagram of controlling the screen scrolling speed through somatosensory control according to an embodiment of the present invention. As shown, when the hand slides to the left, screen scrolling speeds up; when the hand slides to the right, screen scrolling slows down. Of course, this is only an example; it may instead be set so that sliding to the right speeds up scrolling and sliding to the left slows it down.
请参阅图7,图7是本发明实施例提供的通过体感控制屏幕亮度调节的操作示意图,如图所示,第一个示意图,示意出通过右手握拳向下移动的动作控制屏幕亮度降低,中间的示意图示意出通过右手握拳移动到中间的动作控制屏幕以正常亮度显示,第三个示意图,示意出通过右手握拳向上移动的动作控制屏幕亮度增加。Please refer to FIG. 7, which is a schematic diagram of adjusting the screen brightness through somatosensory control according to an embodiment of the present invention. As shown, the first diagram illustrates that moving the right-hand fist downward controls the screen brightness to decrease, the middle diagram illustrates that moving the right-hand fist to the middle displays the screen at normal brightness, and the third diagram illustrates that moving the right-hand fist upward controls the screen brightness to increase.
请参阅图8,图8是本发明实施例提供的通过体感控制音量调节的操作示意图,如图所示,第一个示意图,示意出通过左手握拳向下的动作控制音量降低,中间的示意图示意出通过左手握拳移动到中间的动作控制为正常音量,第三个示意图,示意出通过左手握拳向上的动作控制音量增加。Please refer to FIG. 8, which is a schematic diagram of controlling the volume through somatosensory control according to an embodiment of the present invention. As shown, the first diagram illustrates that moving the left-hand fist downward controls the volume to decrease, the middle diagram illustrates that moving the left-hand fist to the middle corresponds to normal volume, and the third diagram illustrates that moving the left-hand fist upward controls the volume to increase.
当然,以上的示意图只是示意性的,具体什么动作对应控制执行什么样的操作,可以根据需要设置。Of course, the above diagrams are only schematic; which action controls which operation may be set as needed.
本发明在激活体感交互系统的状态下,采集人体部位的三维立体图像,根据人体部位的三维立体图像进行特征提取获取特征参数,其中特征参数包括人体部位的三维坐标以及人体部位的空间运动轨迹,根据人体部位的三维坐标,判断人体部位是否进入预定感应区域,当人体部位进入预定感应区域时,根据人体部位的空间运动轨迹控制进行参数调整。通过这样的方式,不需要依赖于外部输入设备,通过感应人体部位的空间动作,就能控制参数调整,给用户更好的使用体验。In a state in which the somatosensory interaction system is activated, the present invention collects a three-dimensional image of a human body part and performs feature extraction on it to obtain characteristic parameters, where the characteristic parameters include the three-dimensional coordinates of the human body part and its spatial motion trajectory. According to the three-dimensional coordinates, it is determined whether the human body part enters the predetermined sensing area, and when it does, parameter adjustment is performed according to the spatial motion trajectory of the human body part. In this way, parameter adjustment can be controlled by sensing the spatial actions of a human body part without relying on an external input device, giving the user a better experience.
请参阅图3,图3是本发明实施例提供的一种体感交互系统的结构示意图,本实施例的体感交互系统用于执行上述图1所示实施例的体感控制参数调整的方法,如图所示,本实施例的体感交互系统100包括采集模块11、特征提取模块12、判断模块13以及控制模块14,其中:Referring to FIG. 3, FIG. 3 is a schematic structural diagram of a somatosensory interaction system according to an embodiment of the present invention. The somatosensory interaction system of this embodiment is used to perform the method for adjusting somatosensory control parameters of the embodiment shown in FIG. 1 above. As shown in the figure, the somatosensory interaction system 100 of this embodiment includes an acquisition module 11, a feature extraction module 12, a determination module 13, and a control module 14, wherein:
采集模块11用于,在体感交互系统激活状态下,采集人体部位的三维立体图像;The collecting module 11 is configured to collect a three-dimensional stereoscopic image of a human body part in an activated state of the somatosensory interaction system;
在本发明实施例中,需要通过体感控制参数调整时,需要先激活体感交互系统。In the embodiment of the present invention, when the somatosensory control parameter adjustment is required, the somatosensory interaction system needs to be activated first.
其中,请参阅图4,图4是本发明实施例提供的另一种体感交互系统的结构示意图,本实施例提供的体感交互系统包括与图3所示实施例提供的体感交互系统相同的功能模块之外,还进一步包括激活模块15,激活模块15用于控制激活体感交互系统。与图3所示相同的功能模块其具体功能实现也相同,具体请参见下述相关描述。Referring to FIG. 4, FIG. 4 is a schematic structural diagram of another somatosensory interaction system according to an embodiment of the present invention. The somatosensory interaction system provided in this embodiment includes the same functional modules as the somatosensory interaction system provided in the embodiment shown in FIG. 3, and further includes an activation module 15 for controlling activation of the somatosensory interaction system. The functional modules that are the same as those shown in FIG. 3 have the same specific implementations; see the related descriptions below for details.
其中,请进一步参阅图5,图5是本发明实施例提供的激活模块的结构示意图,如图所示,激活模块15包括采集单元151、判断单元152、转化单元153以及激活单元154,其中:For example, please refer to FIG. 5. FIG. 5 is a schematic structural diagram of an activation module according to an embodiment of the present invention. As shown, the activation module 15 includes an acquisition unit 151, a determination unit 152, a conversion unit 153, and an activation unit 154, where:
采集单元151用于采集三维立体图像;The collecting unit 151 is configured to collect a three-dimensional stereoscopic image;
采集单元151通过3D传感器采集预定空间范围内的三维立体图像。所采集的三维立体图像包括3D传感器镜头监控范围的所有物体。比如3D传感器镜头前包括桌子、椅子以及人,那么所采集的三维立体图像包括所有的这些物件。The acquisition unit 151 collects a three-dimensional stereoscopic image within a predetermined spatial range through a 3D sensor. The acquired image includes all objects within the monitoring range of the 3D sensor lens. For example, if there are a table, a chair, and a person in front of the 3D sensor lens, the acquired three-dimensional image includes all of these objects.
判断单元152用于对三维立体图像进行处理,判断三维立体图像是否包含用于激活体感交互系统的人体部位的三维立体图像,并将判断结果输出给转化单元153。The determining unit 152 is configured to process the three-dimensional stereoscopic image, determine whether the three-dimensional stereoscopic image includes a three-dimensional stereoscopic image for activating a human body part of the somatosensory interactive system, and output the determination result to the conversion unit 153.
判断单元152对3D体感器采集的三维立体图形进行处理,判断该三维立体图像中是否包含用于激活体感交互系统的人体部位的三维立体图像。比如预设的用于激活体感交互系统的人体部位为人手,则从采集的三维立体图像中识别是否包括人手。The determining unit 152 processes the three-dimensional stereoscopic image acquired by the 3D sensor and judges whether it contains a three-dimensional image of the human body part used to activate the somatosensory interaction system. For example, if the preset human body part for activation is a human hand, it is recognized whether the acquired three-dimensional image includes a human hand.
转化单元153用于对人体部位的三维立体图像进行处理,转化为激活指令。The conversion unit 153 is configured to process the three-dimensional stereoscopic image of the human body part and convert it into an activation command.
其中,对人体部位的三维立体图像进行处理,转化为激活指令具体包括:将人体部位的三维立体图像进行特征提取获取特征参数,特征参数包括人体部位的三维坐标以及人体部位的空间运动轨迹,将特征参数与预存的激活体感交互系统的特征参数进行匹配,当匹配度达到预定阈值时,获取与预存的特征参数对应的指令,以作为激活指令。Processing the three-dimensional image of the human body part and converting it into an activation instruction specifically includes: performing feature extraction on the three-dimensional image of the human body part to obtain characteristic parameters, which include the three-dimensional coordinates of the human body part and its spatial motion trajectory; matching the characteristic parameters with the pre-stored characteristic parameters for activating the somatosensory interaction system; and, when the matching degree reaches a predetermined threshold, acquiring the instruction corresponding to the pre-stored characteristic parameters as the activation instruction.
将采集的人体部位的三维立体图像进行特征提取,获取得到特征参数,特征参数包括位置参数以及动作轨迹参数。位置参数即人体部位所处的空间位置,用三维坐标表示,运动轨迹即人体部位在空间上的运动轨迹。比如一个手掌抓握的动作,其参数提取即包括手掌当前所处的实际三维坐标X、Y、Z的具体数值,以确定手掌与3D传感器的具体位置关系。还包括手掌的空间运动轨迹即抓握的动作轨迹。Feature extraction is performed on the collected three-dimensional image of the human body part to obtain characteristic parameters, which include a position parameter and a motion trajectory parameter. The position parameter is the spatial position of the human body part, expressed in three-dimensional coordinates; the motion trajectory is the movement of the human body part in space. For example, for a palm grasping action, parameter extraction includes the actual three-dimensional coordinates X, Y, Z of the palm, to determine the specific positional relationship between the palm and the 3D sensor, and also includes the spatial motion trajectory of the palm, i.e., the grasping action trajectory.
在提取得到特征参数以后,提取的特征参数与预存的用于激活体感交互系统的特征参数进行匹配。After the feature parameters are extracted, the extracted feature parameters are matched with the pre-stored feature parameters for activating the somatosensory interaction system.
比如通过一个手掌的动作来激活体感交互系统,预存的手掌的动作的特征参数包括A、B、C,当前采集到一个手掌的三维立体图像,提取得到的特征参数包括A'、B'、C',将A'、B'、C'与A、B、C进行匹配,并判断匹配度是否达到预定阈值。For example, the somatosensory interaction system is activated by a palm gesture. The pre-stored characteristic parameters of the palm gesture include A, B, and C. A three-dimensional image of a palm is currently collected, and the extracted characteristic parameters include A', B', and C'. A', B', and C' are matched against A, B, and C, and it is judged whether the matching degree reaches a predetermined threshold.
当提取的特征参数与预存的用于激活体感交互系统的特征参数的匹配度达到预定阈值时,获取与预存的特征参数对应的指令,以作为激活指令。预定阈值是预先设置的匹配程度的值,比如可以设置预定阈值为80%,当匹配度达到80%或以上,即获取与预存的特征参数对应的指令,以作为激活指令。When the matching degree between the extracted characteristic parameters and the pre-stored characteristic parameters for activating the somatosensory interaction system reaches a predetermined threshold, the instruction corresponding to the pre-stored characteristic parameters is acquired as the activation instruction. The predetermined threshold is a preset matching-degree value; for example, it may be set to 80%, and when the matching degree reaches 80% or above, the instruction corresponding to the pre-stored characteristic parameters is acquired as the activation instruction.
作为一种优选的实现方案,在提取的特征参数与预存的用于激活体感交互系统的特征参数的匹配度达到预定阈值时,系统可以进一步判断所述三维立体图像中包含用于激活体感交互系统的人体部件的三维立体图像所持续的时间是否达到预定时间。在持续的时间达到预定时间时,才获取与预存的特征参数对应的指令,以作为激活指令。通过这样的方式,可以有效防止误触发激活体感交互系统。As a preferred implementation, when the matching degree between the extracted characteristic parameters and the pre-stored characteristic parameters for activating the somatosensory interaction system reaches the predetermined threshold, the system may further judge whether the duration for which the three-dimensional image contains the three-dimensional image of the human body part used to activate the somatosensory interaction system reaches a predetermined time. Only when the duration reaches the predetermined time is the instruction corresponding to the pre-stored characteristic parameters acquired as the activation instruction. In this way, false triggering of the somatosensory interaction system can be effectively prevented.
比如预设的用于激活体感交互系统的人体部件为手掌。当3D体感器前面用户当前正与另一个人在聊天,过程中可能不经意的会做出一个手掌的动作,系统在采集并识别出这个手掌动作之后,进一步判断立体图像中包含该手掌是否持续预定时间,如果没有持续达到预定时间,可以判断到这只是一个误操作,则不会激活体感交互系统。其中,这里的预定时间可以根据需要预先设定,比如设定为10秒、30秒等等。For example, the preset human body part for activating the somatosensory interaction system is the palm. If the user in front of the 3D sensor is currently chatting with another person, he may inadvertently make a palm gesture during the conversation. After collecting and recognizing this palm gesture, the system further judges whether the stereoscopic image contains the palm for the predetermined time; if the predetermined time is not reached, it can be judged that this was merely a misoperation, and the somatosensory interaction system is not activated. The predetermined time here can be preset as needed, for example, 10 seconds, 30 seconds, and so on.
作为一种更进一步的优选方案,当系统采集并识别到人体部位且持续未达到预定时间之前,可以在屏幕上显示一个进度条,以提示体感交互系统激活状态。进度条可以实时的,以图片形式显示系统激活的速度,完成度,剩余未完成任务量的大小,和可能需要的处理时间。作为一种可能的实现,进度条可以长方形条状显示。当进度条满时,即表示达到激活体感交互系统条件,激活体感交互系统。通过这样的方式,能够让用户对体感交互系统激活状态有直观的了解,也能够让误操作的用户及时停止手势动作以避免误触发激活体感交互系统。As a still further preferred solution, when the system has collected and recognized the human body part but the predetermined time has not yet elapsed, a progress bar may be displayed on the screen to indicate the activation state of the somatosensory interaction system. The progress bar can graphically display, in real time, the activation speed, the degree of completion, the amount of work remaining, and the processing time that may be required. As one possible implementation, the progress bar may be displayed as a rectangular strip. When the progress bar is full, the activation condition is met and the somatosensory interaction system is activated. In this way, the user gains an intuitive understanding of the activation state of the somatosensory interaction system, and a user who has made a misoperation can stop the gesture in time to avoid falsely triggering activation.
激活单元154用于根据激活指令激活体感交互系统。The activation unit 154 is configured to activate the somatosensory interaction system in accordance with the activation instruction.
根据获取的激活指令,激活体感交互系统,以进入体感交互状态。采集模块11在体感交互系统激活状态下,采集进入感应区域的人体部位的三维立体图像。According to the acquired activation instruction, the somatosensory interaction system is activated to enter the somatosensory interaction state. The acquisition module 11 collects a three-dimensional stereoscopic image of a human body part entering the sensing area under the activation state of the somatosensory interaction system.
在体感交互系统激活状态下,通过3D红外传感器采集预定空间范围内人体部位的三维立体图像。3D红外传感器能够采集空间位置上物件的三维立体图像,所采集的三维立体图像包括物件所处的空间位置坐标以及空间运动轨迹。In the activated state of the somatosensory interaction system, a three-dimensional stereoscopic image of the human body part within a predetermined spatial range is acquired by the 3D infrared sensor. The 3D infrared sensor can collect a three-dimensional image of the object in the spatial position, and the acquired three-dimensional image includes the spatial position coordinates of the object and the spatial motion track.
其中,本发明实施例所提到的人体部位可以是人手。当然也可以是其他的用于操作的人体部位比如人脸、人脚等等。The human body part mentioned in the embodiment of the present invention may be a human hand. Of course, it can also be other human body parts for operation such as a human face, a human foot, and the like.
特征提取模块12用于将人体部位的三维立体图像进行特征提取获取特征参数,特征参数包括人体部位的三维坐标以及人体部位的空间运动轨迹;The feature extraction module 12 is configured to perform feature extraction on the three-dimensional image of the human body part to acquire feature parameters, where the feature parameters include three-dimensional coordinates of the human body part and spatial motion trajectories of the human body part;
特征提取模块12将所采集的人体部位的三维立体图像进行特征提取,以获取该采集的三维立体空间的特征参数,其中,这些特征参数包括该人体部位所处的空间三维坐标以及人体部位的空间运动轨迹。通过特征提取,能够识别人体部位距离3D传感器的具体空间位置以及人体部位的动作。比如人手做的一个抓握的动作,通过采集该抓握的立体图像,并通过特征提取,就能根据该特征提取的参数确定这个人手的具体空间位置并识别出该动作为一个抓握的动作。The feature extraction module 12 performs feature extraction on the collected three-dimensional image of the human body part to obtain the characteristic parameters of the acquired three-dimensional space, where these parameters include the spatial three-dimensional coordinates of the human body part and its spatial motion trajectory. Through feature extraction, the specific spatial position of the human body part relative to the 3D sensor and the action of the human body part can be recognized. For example, for a grasping action made by a human hand, by collecting a stereoscopic image of the grasp and performing feature extraction, the specific spatial position of the hand can be determined from the extracted parameters and the action can be recognized as a grasp.
判断模块13用于根据人体部位的三维坐标,判断人体部位是否进入预定感应区域;The determining module 13 is configured to determine whether the human body part enters the predetermined sensing area according to the three-dimensional coordinates of the human body part;
判断模块13从提取的特征参数中的人体部位的三维坐标,判断人体部位是否进入预定感应区域。其中,这里的预定感应区域可以是预先设置的一定空间范围。比如设置屏幕左侧预定面积对应的空间区域为预定感应区域,在该预定感应区域之外的空间范围的动作不做响应。或者当需要通过不同的体感动作执行不同的参数调整时,可以分别设置不同的预定感应区域对应不同的参数调整,比如设置屏幕上侧预定空间范围对应感应并响应进行亮度调节,屏幕下侧预定空间范围对应感应并响应音量调节等等。当然,也可以不做设定,这种情况下,3D体感器所能采集到立体图像的空间范围都为预定感应区域,只根据不同的动作响应进行不同的操作。The judging module 13 determines, from the three-dimensional coordinates of the human body part in the extracted characteristic parameters, whether the human body part enters the predetermined sensing area. The predetermined sensing area here may be a preset spatial range. For example, the spatial region corresponding to a predetermined area on the left side of the screen may be set as the predetermined sensing area, and actions outside it receive no response. Alternatively, when different parameter adjustments are to be performed through different somatosensory actions, different predetermined sensing areas may be set to correspond to different adjustments; for example, the spatial range corresponding to the upper side of the screen may sense and respond with brightness adjustment, and the spatial range corresponding to the lower side of the screen with volume adjustment. Of course, no such setting need be made; in that case, the entire spatial range within which the 3D sensor can collect stereoscopic images is the predetermined sensing area, and different operations are performed purely according to different actions.
The control module 14 is configured to, when the body part enters the predetermined sensing area, perform parameter adjustment according to the spatial motion trajectory of the body part.
When the body part enters the predetermined sensing area, the control module 14 performs parameter adjustment according to the spatial motion trajectory of the body part in the extracted feature parameters. The parameter adjustment here may be, but is not limited to, at least one of volume adjustment, screen brightness adjustment, and screen scrolling speed adjustment.
Performing parameter adjustment according to the spatial motion trajectory of the body part specifically includes: matching the spatial motion trajectory of the body part against pre-stored spatial motion trajectories, and, when the match reaches a predetermined matching threshold, executing the parameter adjustment corresponding to the matched pre-stored trajectory.
For example, an upward palm push may be preset to correspond to raising the volume and a downward palm push to lowering it; when an upward palm push is captured, the system raises the volume, and when a downward palm push is captured, it lowers the volume.
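The match-then-adjust flow above can be sketched as follows. This is a hypothetical implementation: the patent does not specify a matching metric or threshold, so the mean point-to-point distance score and the 0.8 threshold here are assumptions, as are all names.

```python
# Hypothetical sketch: compare a hand trajectory against pre-stored gesture
# trajectories; when the best match score reaches a threshold, apply the
# volume change associated with that gesture. Metric and threshold are
# assumptions, not taken from the patent.
import math

STORED = {
    "palm_up":   [(0.0, 0.00, 0.6), (0.0, 0.05, 0.6), (0.0, 0.10, 0.6)],
    "palm_down": [(0.0, 0.10, 0.6), (0.0, 0.05, 0.6), (0.0, 0.00, 0.6)],
}
ADJUSTMENT = {"palm_up": +10, "palm_down": -10}  # volume delta per gesture

def match_score(traj, template):
    """1.0 for identical trajectories, falling toward 0 as they diverge."""
    d = sum(math.dist(a, b) for a, b in zip(traj, template)) / len(template)
    return 1.0 / (1.0 + d)

def adjust_volume(volume, traj, threshold=0.8):
    best = max(STORED, key=lambda name: match_score(traj, STORED[name]))
    if match_score(traj, STORED[best]) >= threshold:
        volume = max(0, min(100, volume + ADJUSTMENT[best]))
    return volume  # unchanged if no stored trajectory matches well enough

print(adjust_volume(50, [(0.0, 0.0, 0.6), (0.0, 0.05, 0.6), (0.0, 0.1, 0.6)]))  # 60
```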
When the body part leaves the predetermined sensing area, the somatosensory interaction system enters a locked state, and the parameter-adjustment interaction can only be entered again after the somatosensory interaction system is reactivated.
Referring still to FIG. 4, the somatosensory interaction system of this embodiment may further include a display module 16. The display module 16 is configured to display, on the screen, an icon that moves in synchronization with the body part when the body part enters the predetermined sensing area.
The icon that moves in synchronization with the body part may resemble the body part; for example, if the body part is a human hand, the icon may be hand-shaped. Of course, other forms of icons, such as a triangle or a dot, may also be used. The icon on the screen follows the movement of the body part and moves correspondingly on the screen; for example, when the hand moves to the right in space, the icon also moves to the right on the screen.
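Keeping the icon in sync with the hand amounts to mapping the hand's position in the interaction space to screen pixels. A minimal sketch, assuming a linear mapping from an interaction plane to a 1920x1080 screen; the ranges, resolution, and function name are illustrative assumptions.

```python
# Illustrative sketch of mapping hand (x, y) in metres to icon pixel
# coordinates. Ranges and resolution are assumptions, not from the patent.

def hand_to_screen(x, y, screen_w=1920, screen_h=1080,
                   x_range=(-0.5, 0.5), y_range=(-0.3, 0.3)):
    """Map hand (x, y) in metres to clamped pixel coordinates."""
    u = (x - x_range[0]) / (x_range[1] - x_range[0])
    v = 1.0 - (y - y_range[0]) / (y_range[1] - y_range[0])  # screen y grows downward
    px = max(0, min(screen_w - 1, round(u * (screen_w - 1))))
    py = max(0, min(screen_h - 1, round(v * (screen_h - 1))))
    return px, py

# A rightward hand movement increases x, so the icon's px increases too.
print(hand_to_screen(0.0, 0.0))   # centre of the interaction plane -> (960, 540)
```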
The display module 16 is further configured to display a corresponding prompt on the screen when the body part enters the predetermined sensing area.
When the body part enters the predetermined sensing area, a corresponding prompt may be given to the user. As one possible implementation, the display module 16 may display the predetermined sensing area in a highlighted state to indicate that the user may perform control actions within that area. Of course, the predetermined sensing area may also prompt the user in other ways, such as emphasis, a flashing display, or displayed stripes.
In this embodiment of the present invention, different sensing areas may also be preset, with different actions in different sensing areas corresponding to different parameter adjustments. For example, in a predetermined sensing area on the left side of the screen, an upward palm movement may raise the volume and a downward palm movement lower it, while in a predetermined sensing area on the right side of the screen, an upward fist movement may increase screen brightness and a downward fist movement decrease it. When a palm is detected in the left-side sensing area and moves upward, the volume is raised; when the palm moves downward, the volume is lowered. When an upward fist movement is detected in the right-side sensing area, the screen is brightened; when a downward fist movement is detected, the screen is dimmed. Of course, under such settings, if a fist action is detected in the left-side area, or a palm is detected in the right-side area, no response is made.
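The region/gesture mapping described above can be sketched as a dispatch table keyed on (area, gesture, direction), where unmapped combinations simply get no response. The table keys, state fields, and step size are assumptions for this sketch only.

```python
# Illustrative dispatch for area- and gesture-specific parameter adjustment.
# Keys and handler behaviour are assumptions, not taken from the patent.

state = {"volume": 50, "brightness": 50}

HANDLERS = {
    ("left",  "palm", "up"):   lambda s: s.__setitem__("volume", min(100, s["volume"] + 10)),
    ("left",  "palm", "down"): lambda s: s.__setitem__("volume", max(0, s["volume"] - 10)),
    ("right", "fist", "up"):   lambda s: s.__setitem__("brightness", min(100, s["brightness"] + 10)),
    ("right", "fist", "down"): lambda s: s.__setitem__("brightness", max(0, s["brightness"] - 10)),
}

def handle(area, gesture, direction):
    handler = HANDLERS.get((area, gesture, direction))
    if handler:                # unmapped combinations get no response,
        handler(state)         # e.g. a fist in the left-side area

handle("left", "palm", "up")     # volume 50 -> 60
handle("right", "fist", "down")  # brightness 50 -> 40
handle("left", "fist", "up")     # no response
print(state)  # {'volume': 60, 'brightness': 40}
```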
When no predetermined areas are mapped to different operations, different actions alone may be mapped to different operations. In that case, the entire region within which the 3D sensor can capture stereoscopic images serves as the sensing area; as long as a body part enters the sensing area, an action is sensed, and the match against a preset action reaches the predetermined threshold, the operation corresponding to that action is executed.
On the basis of the somatosensory interaction system provided by the embodiments of the present invention, an embodiment of the present invention further provides an electronic device that includes the somatosensory interaction system described in the foregoing embodiments. The electronic device may be, but is not limited to, a smart TV, a smartphone, a tablet computer, a notebook computer, or the like.
The somatosensory interaction system provided by the above embodiments of the present invention captures a three-dimensional image of a human body part while the system is in an activated state; performs feature extraction on the image to obtain feature parameters, the feature parameters including the three-dimensional coordinates of the body part and its spatial motion trajectory; determines from the three-dimensional coordinates whether the body part has entered a predetermined sensing area; and, when the body part enters the predetermined sensing area, performs parameter adjustment according to the body part's spatial motion trajectory. In this way, parameter adjustment can be controlled by sensing the spatial actions of a body part, without relying on an external input device, giving the user a better experience.
In the several embodiments provided by the present invention, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are merely embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application thereof in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (13)

  1. A method for adjusting somatosensory control parameters, characterized in that the method comprises:
    capturing a three-dimensional image of a human body part while a somatosensory interaction system is in an activated state;
    performing feature extraction on the three-dimensional image of the body part to obtain feature parameters, the feature parameters including three-dimensional coordinates of the body part and a spatial motion trajectory of the body part;
    determining, according to the three-dimensional coordinates of the body part, whether the body part has entered a predetermined sensing area; and
    when the body part enters the predetermined sensing area, performing parameter adjustment according to the spatial motion trajectory of the body part.
  2. The method according to claim 1, characterized in that the parameters comprise at least one of volume, brightness, and screen scrolling speed.
  3. The method according to claim 1, characterized in that the method further comprises:
    when the body part enters the predetermined sensing area, displaying on a screen an icon that moves in synchronization with the body part.
  4. The method according to claim 1, characterized in that the method further comprises:
    when the body part enters the predetermined sensing area, displaying a corresponding prompt on the screen.
  5. The method according to claim 1, characterized in that the body part is a human hand.
  6. A somatosensory interaction system, characterized in that the system comprises an acquisition module, a feature extraction module, a judgment module, and a control module, wherein:
    the acquisition module is configured to capture a three-dimensional image of a human body part while the somatosensory interaction system is in an activated state;
    the feature extraction module is configured to perform feature extraction on the three-dimensional image of the body part to obtain feature parameters, the feature parameters including three-dimensional coordinates of the body part and a spatial motion trajectory of the body part;
    the judgment module is configured to determine, according to the three-dimensional coordinates of the body part, whether the body part has entered a predetermined sensing area; and
    the control module is configured to, when the body part enters the predetermined sensing area, perform parameter adjustment according to the spatial motion trajectory of the body part.
  7. The system according to claim 6, characterized in that the parameters comprise at least one of volume, brightness, and screen scrolling speed.
  8. The system according to claim 6, characterized in that the system further comprises a display module, the display module being configured to display, on a screen, an icon that moves in synchronization with the body part when the body part enters the predetermined sensing area.
  9. The system according to claim 8, characterized in that the display module is further configured to display a corresponding prompt on the screen when the body part enters the predetermined sensing area.
  10. An electronic device, characterized in that the electronic device comprises a somatosensory interaction system, the somatosensory interaction system comprising an acquisition module, a feature extraction module, a judgment module, and a control module, wherein:
    the acquisition module is configured to capture a three-dimensional image of a human body part while the somatosensory interaction system is in an activated state;
    the feature extraction module is configured to perform feature extraction on the three-dimensional image of the body part to obtain feature parameters, the feature parameters including three-dimensional coordinates of the body part and a spatial motion trajectory of the body part;
    the judgment module is configured to determine, according to the three-dimensional coordinates of the body part, whether the body part has entered a predetermined sensing area; and
    the control module is configured to, when the body part enters the predetermined sensing area, perform parameter adjustment according to the spatial motion trajectory of the body part.
  11. The electronic device according to claim 10, characterized in that the parameters comprise at least one of volume, brightness, and screen scrolling speed.
  12. The electronic device according to claim 10, characterized in that the somatosensory interaction system further comprises a display module, the display module being configured to display, on a screen, an icon that moves in synchronization with the body part when the body part enters the predetermined sensing area.
  13. The electronic device according to claim 12, characterized in that the display module is further configured to display a corresponding prompt on the screen when the body part enters the predetermined sensing area.
PCT/CN2016/076777 2015-06-05 2016-03-18 Motion sensing control parameter adjustment method, motion sensing interaction system and electronic device WO2016192440A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510307213.3A CN104915003A (en) 2015-05-29 2015-06-05 Somatosensory control parameter adjusting method, somatosensory interaction system and electronic equipment
CN201510307213.3 2015-06-05

Publications (1)

Publication Number Publication Date
WO2016192440A1 true WO2016192440A1 (en) 2016-12-08

Family

ID=57445764

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/076777 WO2016192440A1 (en) 2015-06-05 2016-03-18 Motion sensing control parameter adjustment method, motion sensing interaction system and electronic device

Country Status (1)

Country Link
WO (1) WO2016192440A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102043320A (en) * 2009-10-21 2011-05-04 陕西金合泰克信息科技发展有限公司 Overhead infrared page turning image book and infrared page turning method thereof
CN103246351A (en) * 2013-05-23 2013-08-14 刘广松 User interaction system and method
CN104881122A (en) * 2015-05-29 2015-09-02 深圳奥比中光科技有限公司 Somatosensory interactive system activation method and somatosensory interactive method and system
CN104915003A (en) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 Somatosensory control parameter adjusting method, somatosensory interaction system and electronic equipment
CN104915004A (en) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 Somatosensory control screen rolling method, somatosensory interaction system and electronic equipment


Similar Documents

Publication Publication Date Title
WO2016192438A1 (en) Motion sensing interaction system activation method, and motion sensing interaction method and system
CN102915111B (en) A kind of wrist gesture control system and method
WO2017118075A1 (en) Human-machine interaction system, method and apparatus
TWI489317B (en) Method and system for operating electric apparatus
WO2013183938A1 (en) User interface method and apparatus based on spatial location recognition
WO2014030902A1 (en) Input method and apparatus of portable device
US20150338651A1 (en) Multimodal interation with near-to-eye display
WO2017126741A1 (en) Hmd device and method for controlling same
WO2018076912A1 (en) Virtual scene adjusting method and head-mounted intelligent device
TW200945174A (en) Vision based pointing device emulation
CN103105930A (en) Non-contact type intelligent inputting method based on video images and device using the same
WO2017211056A1 (en) One-handed operating method and system for mobile terminal
WO2016107231A1 (en) System and method for inputting gestures in 3d scene
US20240077948A1 (en) Gesture-based display interface control method and apparatus, device and storage medium
US10372223B2 (en) Method for providing user commands to an electronic processor and related processor program and electronic circuit
CN102880304A (en) Character inputting method and device for portable device
CN106681503A (en) Display control method, terminal and display device
WO2019004686A1 (en) Keyboard input system and keyboard input method using finger gesture recognition
CN104915004A (en) Somatosensory control screen rolling method, somatosensory interaction system and electronic equipment
CN105242776A (en) Control method for intelligent glasses and intelligent glasses
WO2020159302A1 (en) Electronic device for performing various functions in augmented reality environment, and operation method for same
CN104915003A (en) Somatosensory control parameter adjusting method, somatosensory interaction system and electronic equipment
US10444831B2 (en) User-input apparatus, method and program for user-input
WO2016192440A1 (en) Motion sensing control parameter adjustment method, motion sensing interaction system and electronic device
WO2016192439A1 (en) Motion-sensing screen-scroll control method, motion sensing interaction system and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16802363

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16802363

Country of ref document: EP

Kind code of ref document: A1