WO2019169644A1 - Method and device for signal input - Google Patents

Method and device for signal input

Info

Publication number
WO2019169644A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
limb
operation instruction
hand
input signal
Prior art date
Application number
PCT/CN2018/078642
Other languages
English (en)
Chinese (zh)
Inventor
宋卿
葛凯麟
Original Assignee
彼乐智慧科技(北京)有限公司
Priority date: 2018-03-09 (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2018-03-09
Publication date: 2019-09-12
Application filed by 彼乐智慧科技(北京)有限公司
Priority to PCT/CN2018/078642 (WO2019169644A1)
Priority to CN201880091030.4A (CN112567319A)
Publication of WO2019169644A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials

Definitions

  • The present invention belongs to the field of information technology, and in particular relates to a method and device for signal input.
  • In the prior art, the user's signal input modes mainly include touch, buttons, and voice control.
  • For touch and button input, the user needs to manually press a key or press a virtual button on the touch screen.
  • For an input device such as a dance mat, the user is required to press the corresponding button on the mat with the foot.
  • The invention provides a method and a device for signal input, solving the problem that the input mode for human-computer interaction signals in the prior art is single and inefficient.
  • The present invention provides an apparatus for signal input, including:
  • One or more buttons;
  • One or more image sensors, configured to acquire images;
  • One or more processors, configured to:
  • Optionally, the limb is the user's hand.
  • The processor is configured to perform feature recognition on the limb image, including:
  • The hand shape is identified, confirming whether the hand is a left hand or a right hand; and/or,
  • A corresponding operation instruction is generated.
  • The signal input device further includes: a pressure sensor, configured to acquire the force with which the user presses the one or more buttons;
  • The processor is further configured to:
  • The signal input device further includes a fingerprint sensor, configured to collect the user's fingerprint and identify the user's identity;
  • The processor is further configured to:
  • The image sensor is further configured to collect face information;
  • The processor is further configured to:
  • The corresponding operation instruction is generated by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • The signal input device further includes a laser emitter for continuously emitting laser light; the laser emitter is set at an angle to the image sensor, and the image sensor is a laser sensor. When the emitted laser light strikes the user's limb, the image sensor receives the reflected laser light and generates one or more reflected-light response signals;
  • the processor is further configured to:
  • When the image sensor receives reflected light from different directions, it receives the plurality of reflected-light response signals and calculates from them, by triangulation, the distances between the image sensor and different sections of the user's limb;
  • the corresponding operation instruction is generated by combining the gesture information, the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • Performing feature recognition on the limb image further includes:
  • The limb features corresponding to the color blocks are determined, and the recognition result is output.
  • The processor is further configured to: when the user operates while wearing a glove equipped with a sensor chip, receive the sensing signal sent by the glove;
  • The embodiment of the invention further provides a method for signal input, comprising:
  • The signal input device captures the movement track of the user's limb and collects the limb image, where the user's limbs include the hands and feet;
  • Optionally, the limb is the user's hand.
  • Feature recognition is performed on the limb image, including:
  • The hand shape is identified, confirming whether the hand is a left hand or a right hand; and/or,
  • A corresponding operation instruction is generated.
  • the method further includes:
  • the method further includes:
  • Performing face recognition or fingerprint recognition to acquire user identity information, and combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction to generate the corresponding operation instruction.
  • the method further includes:
  • the corresponding operation instruction is generated by combining the gesture information, the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • Performing feature recognition on the limb image further includes:
  • The limb features corresponding to the color blocks are determined, and the recognition result is output.
  • the method further includes:
  • The signal input device synchronously or asynchronously collects the user's limb information and the key input signal, and outputs the corresponding operation instruction according to the correspondence between the recognized limb information, the key input signal, and the operation instruction.
  • With the technical solution provided by the present invention, it is possible to distinguish in finer detail which hand or foot the user uses, which finger is used, and with which gesture the button is pressed. Pressing the same button with different limb information generates different response signals, and different limb information can be combined with different buttons to define a large number of shortcut operations. That is, the present invention defines a completely new way of interaction that enables operation instructions to be issued quickly. Compared with the prior art, the invention improves signal input efficiency, enriches the signal input modes, and improves the user experience.
  • FIG. 1 is a schematic structural diagram of a signal input device in an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of identifying left-hand and right-hand button presses in an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of identifying a specific finger pressing button in the embodiment of the present invention.
  • FIG. 4 is a schematic diagram of gesture recognition in an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of performing three-dimensional modeling after two-dimensional ranging in an embodiment of the present invention.
  • Figure 6 is a flow chart of a signal input method in an embodiment of the present invention.
  • the present invention provides a device 11 for signal input, the device comprising:
  • the sensor 102 is controlled to capture a trajectory of a user's limb movement, and the limb image is acquired.
  • The user's limbs include the four limbs; that is, the user's hands or feet.
  • the button 101 can be a physical button or a virtual button.
  • the number of buttons is not limited.
  • The button may be one or more buttons on a conventional numeric keypad, a single button, or one or more virtual buttons displayed on a touch screen.
  • The button may also be all or part of a touch screen or display screen, in which case the button counts as touched regardless of which position on the display is pressed.
  • For example, the user finger-paints in the drawing display area of the touch screen; at this time it is determined that the user has pressed the drawing button, and the position information of the touch point after the press is obtained.
  • The sensor 102 may be an image sensor of visible or non-visible light, such as a CCD or CMOS image sensor, or an infrared/ultraviolet sensor for receiving infrared/ultraviolet light.
  • the processor 103 is configured to perform feature recognition on the limb image.
  • Positioning the hand to identify the positional relationship between the hand position and the one or more buttons; for example, the hand position may be directly above the button, or on the left or right side of the button;
  • Locating the hand can be implemented by conventional image processing, for example an algorithm using binarization and contour recognition to obtain the shape and position of the hand in an image. Because the image sensor can be fixed in one place and shoot periodically, the background of the photo (everything other than the hand can be defined as the background) is constant; the only change is the position of the hand. The hand can therefore be located, and its movement identified, from the changes between successive photos.
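  • For illustration, the following is a minimal sketch of the hand-localization step just described, assuming a fixed camera, a static background, and the OpenCV and NumPy libraries; the function name, threshold value, and largest-contour heuristic are illustrative assumptions rather than details taken from the patent.

```python
import cv2
import numpy as np

def locate_hand(background: np.ndarray, frame: np.ndarray):
    """Return the bounding box (x, y, w, h) of the largest changed region."""
    diff = cv2.absdiff(frame, background)                 # change against the static background
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)  # binarization
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                       # no hand in this frame
    hand = max(contours, key=cv2.contourArea)             # assume the largest blob is the hand
    return cv2.boundingRect(hand)
```

Comparing the returned box across successive frames then yields the movement track of the hand.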
  • The hand shape is identified to confirm whether the hand is a left hand or a right hand. For example, when the hand is positioned above the button, the shape of the hand is recognized to distinguish whether the user is pressing, or has pressed, the button with the left hand or the right hand; combined with the signal that the button is pressed, a different operation command is executed depending on whether the user presses with the left or the right hand.
  • The operation command can be the output of a text/sound/image, or a command of a certain program, and can be customized by the user or preset by the signal input device. For example, when the user presses the button with the left hand a piece of text is output, and when the user presses with the right hand a piece of sound is output.
  • FIG. 2 is a schematic diagram of recognizing the left and right hands. As shown in FIG. 2, when the user prepares to press the button 101 on the keyboard 100, the image sensor 102 captures the hand in real time and distinguishes whether the user presses, or is about to press, the button with the left hand or the right hand.
  • The signal input device may preset a correspondence table in which different fingers and different buttons are arranged and combined, with a different output operation instruction for each combination.
  • For example, the index finger and the thumb together with buttons A and B can form 7 different states: no press; the index finger alone pressing button A; the index finger alone pressing button B; the thumb alone pressing button A; the thumb alone pressing button B; the index finger pressing button A while the thumb presses button B; and the index finger pressing button B while the thumb presses button A. These correspond to 6 different operation commands (no command when neither A nor B is pressed), and each operation command can be defined by the user or preset in advance.
  • Furthermore, the thumb of the left hand and the thumb of the right hand may press the same button with different operational responses. That is, the left and right hands, the index finger and the thumb, and buttons A and B can be combined into an even more complex correspondence table to output more complex operational responses.
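  • For illustration, a minimal sketch of such a correspondence table follows, assuming the recognition result is reduced to a (hand, finger) pair and the input signal to a button ID; the table entries and names are hypothetical, not mappings from the patent.

```python
# (hand, finger, button) -> operation instruction; contents are illustrative
OP_TABLE = {
    ("left",  "index", "A"): "output_text",
    ("left",  "thumb", "A"): "output_sound",
    ("right", "index", "A"): "zoom_in",
    ("right", "index", "B"): "zoom_out",
    ("right", "thumb", "B"): "rotate_image",
}

def generate_instruction(hand, finger, button):
    """Combine the recognition result with the button input signal."""
    return OP_TABLE.get((hand, finger, button))   # None -> no preset command

print(generate_instruction("left", "index", "A"))  # -> output_text
```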
  • FIG. 3 is a specific example of a user typing on a keyboard.
  • The image sensor 102 captures the hand motion track and the hand image in real time, thereby distinguishing which finger the user uses when pressing the keyboard, and different response commands are output and displayed on the display 105.
  • The embodiment of the present invention can also distinguish the hand movement trajectory mentioned above: moving onto the button from top to bottom, from bottom to top, from obliquely above to obliquely below, and so on; the operation instructions corresponding to different movement directions can also differ.
  • The gesture of the hand is recognized. In addition to the above-mentioned left/right-hand recognition, finger recognition, and hand-movement-trajectory recognition, the embodiment of the present invention can also implement gesture recognition.
  • Gesture recognition can be similar to Apple's multi-touch technology, with interactions such as pinching (zoom-out/zoom-in instructions, as shown in FIG. 4) and multi-finger rotation (picture-rotation instructions).
  • Unlike a touch screen, the present invention does not need to capture multi-point movement tracks on the screen; it can capture multiple frames and identify the hand shape and its change across the frames, thereby determining the user's current gesture.
  • Gesture recognition is a comprehensive technique that combines finger recognition with finger-movement-trajectory recognition.
  • The gesture recognition can be implemented with existing machine-learning algorithms and is not described further here.
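  • As a toy example of frame-based gesture recognition, the sketch below classifies a pinch by how the thumb-index fingertip distance changes across frames; fingertip detection itself is assumed to be done elsewhere, and the 0.7 ratio is an arbitrary illustrative threshold.

```python
import math

def detect_pinch(frames, ratio=0.7):
    """frames: [(thumb_xy, index_xy), ...] in time order; returns a gesture label."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    first, last = dist(*frames[0]), dist(*frames[-1])
    if last < first * ratio:
        return "zoom_out"   # fingertips moved together (pinch in)
    if first < last * ratio:
        return "zoom_in"    # fingertips moved apart (pinch out)
    return "none"
```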
  • The processor is configured to generate an operation instruction corresponding to the combination of the recognition result and the input signal; specifically:
  • A corresponding operation instruction is generated.
  • A typical example is the button on a dance mat. The image sensor can locate the foot to distinguish whether the user steps on a button with the left foot or the right foot.
  • The movement trajectory of the foot when the button is pressed can also be determined from the footstep's motion, for example from top to bottom, from left to right, or obliquely, with different operation instructions for different directions.
  • The new interaction method defined by the present invention has wide applications.
  • In gaming, a game controller usually has only a few buttons; different gesture/finger combinations pressed on different buttons provide shortcut keys for different in-game actions, giving the game high playability and a good user experience. In the field of education, when the user uses different fingers/gestures to click/press/touch different buttons, different teaching content or teaching effects can be triggered; for example, in drawing, lines applied to the LCD screen with the index finger and lines applied with the thumb can differ in color and thickness.
  • The signal input device further includes: a pressure sensor, configured to acquire the force with which the user presses the one or more buttons;
  • The processor is further configured to:
  • Pressure sensors are widely available on the market. In the embodiment of the invention, a pressure sensor can be built in (for example, placed inside the button) to collect the force with which the user presses. According to different force thresholds, the force is divided into levels such as low, medium, and high, and each level can correspond to a different operation instruction, similar to Apple's 3D Touch.
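  • A minimal sketch of the force-level division follows; the threshold values and level names are hypothetical, chosen only to illustrate bucketing by thresholds.

```python
FORCE_THRESHOLDS = [(2.0, "low"), (5.0, "medium")]   # upper bounds, hypothetical units

def force_level(force):
    """Bucket a pressure-sensor reading into a discrete force level."""
    for threshold, level in FORCE_THRESHOLDS:
        if force < threshold:
            return level
    return "high"

# The (force level, button) pair can then index the correspondence table,
# so that a light and a hard press of the same button trigger different instructions.
```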
  • The signal input device further includes a fingerprint sensor, configured to collect the user's fingerprint and identify the user's identity;
  • The processor is further configured to:
  • The invention may combine the pressure sensor and/or the fingerprint sensor; the fingerprint sensor can also be built inside the button.
  • When the user presses, the fingerprint is automatically recognized, thereby determining which user is operating; the corresponding operation instruction is generated by combining the user identification information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • The image sensor is further configured to collect face information;
  • The processor is further configured to:
  • The corresponding operation instruction is generated by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • The difference is that here user identity collection is realized through face recognition by the image sensor 102. Face recognition belongs to the prior art, and its specific implementation is not repeated.
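  • For illustration, the correspondence table can be extended with user identity as sketched below; where the identity comes from (fingerprint or face recognition) is out of scope here, and all names and mappings are hypothetical.

```python
# (user, hand, finger, button) -> operation instruction; contents are illustrative
USER_OP_TABLE = {
    ("alice", "right", "index", "A"): "cast_vote",
    ("bob",   "right", "index", "A"): "cast_vote",
    ("alice", "left",  "thumb", "B"): "open_profile",
}

def generate_user_instruction(user, hand, finger, button):
    """Identity + recognition result + input signal -> operation instruction."""
    return USER_OP_TABLE.get((user, hand, finger, button))
```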
  • Face recognition can be applied to voting, for example in an election, an entertainment program, or a public vote in another program; current voting systems may suffer from malicious voting and missed-vote behavior.
  • By combining face recognition with button-press voting, it is possible to identify which user pressed the current button, so the votes are accurately matched to users, which facilitates statistics.
  • The signal input device further includes a laser emitter for continuously emitting laser light; the laser emitter is set at an angle to the image sensor, and the image sensor is a laser sensor.
  • When the emitted laser light strikes the user's limb, the image sensor receives the reflected laser light and generates one or more reflected-light response signals;
  • The laser emitter can emit dot-matrix light or a linear beam; for example, a built-in beam expander can expand the dot-matrix light into one or more linear beams.
  • A linear beam provides more data points to collect than dot-matrix light, so the ranging is more accurate; the embodiment of the present invention therefore preferably uses a linear beam.
  • the processor is further configured to:
  • The method uses a two-dimensional ranging technique to measure the distance between the image sensor and the user's limb.
  • The principle of two-dimensional ranging is to send a point beam or a linear beam onto the user's limb with a laser, receive the light reflected from the limb with an image sensor (for example, an infrared sensor), and calculate the current distance between the limb and the image sensor by triangulation. Relationships between different distances and different operation commands can then be defined, and the corresponding operation instruction is generated by combining the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • the triangulation method is a commonly used measurement method in the field of optical ranging.
  • The method is as follows: from the centroid position of the laser-spot region in the image, together with the known relative angle and spacing between the laser emitting device and the image sensor, the distance from the target to the image sensor can be estimated. In a standard form of this relation (assuming the laser axis is parallel to the camera's optical axis), z = b·f/x̄, where b is the spacing (baseline) between the laser emitting device and the image sensor, f is the focal length, x̄ is the column coordinate of the spot centroid on the sensor, and z is the measured distance. From the formula, the measured distance is related only to the centroid's position in the column direction and is independent of the number of rows.
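  • As a worked illustration of the relation above, the following sketch computes the distance from the centroid's column coordinate, assuming the classic configuration in which the laser axis is parallel to the camera's optical axis; the baseline and focal-length values are hypothetical.

```python
def triangulate_distance(centroid_col_px, baseline_m=0.05, focal_px=800.0):
    """z = b * f / x: distance from the column position of the laser-spot centroid."""
    if centroid_col_px <= 0:
        raise ValueError("the spot centroid must be offset from the optical axis")
    return baseline_m * focal_px / centroid_col_px

print(triangulate_distance(100.0))  # -> 0.4 (metres, for the assumed parameters)
```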
  • When the image sensor receives reflected light from different directions, it receives the plurality of reflected-light response signals and calculates from them, by triangulation, the distances between the image sensor and different sections of the user's limb;
  • the corresponding operation instruction is generated by combining the gesture information, the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • The above solution is an optical three-dimensional ranging technology. It differs from two-dimensional ranging in that the laser emitter can emit laser light at different angles via a rotating shaft, so that the image sensor collects reflected light from different directions.
  • Triangulation can then be used to measure the three-dimensional distances of different sections of the limb, and the three-dimensional data of the sections can be superimposed in three-dimensional space to complete the three-dimensional modeling, as shown in FIG. 5.
  • The image sensor receives the different reflected beams, images them on the sensor panel, and generates reflected-light response signals; three-dimensional image reconstruction is obtained from the different reflected beams, yielding richer and more precise limb information.
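  • A minimal sketch of stacking the per-slice distances into a point cloud for the three-dimensional modeling step follows; the geometry is deliberately simplified (one distance profile per emission angle of the rotating laser), and the names and spacing value are illustrative.

```python
import numpy as np

def build_point_cloud(slices, col_spacing_m=0.001):
    """slices: {emission_angle_rad: 1-D array of distances along the laser line}."""
    points = []
    for angle, dists in slices.items():
        xs = np.arange(len(dists)) * col_spacing_m   # position along the laser line
        ys = dists * np.sin(angle)                   # height component from the emission angle
        zs = dists * np.cos(angle)                   # depth component from the emission angle
        points.append(np.column_stack([xs, ys, zs]))
    return np.vstack(points)                         # (N, 3) cloud for 3-D modeling
```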
  • Feature recognition is performed using the limb image, specifically:
  • a correspondence between a color block and a limb feature is established; for example, one color block represents the user's thumb, and another color block represents the user's index finger or the like.
  • the limb features corresponding to the color blocks are determined, and the recognition result is output.
  • Recognition parameters can be added to speed up recognition. For example, when a user applies nail polish of a particular color, or wears a glove of a particular color (or with different color blocks), the processor positions and tracks that particular color, determines its RGB value, and, according to the correspondence between the RGB value and the user's limb features, determines the limb feature represented by the color block, thereby identifying the user's limb features more quickly and efficiently.
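  • For illustration, color-block tracking can be sketched as below: segment a known glove or nail-polish color in HSV space and map it to a limb feature. The color ranges and feature names form a hypothetical calibration table; OpenCV and NumPy are assumed.

```python
import cv2
import numpy as np

# hypothetical calibration: HSV range -> limb feature
COLOR_TO_FEATURE = {
    "red":  ((0, 120, 80),   (10, 255, 255),  "thumb"),
    "blue": ((100, 120, 80), (130, 255, 255), "index_finger"),
}

def find_color_features(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    results = {}
    for name, (lo, hi, feature) in COLOR_TO_FEATURE.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        m = cv2.moments(mask)
        if m["m00"] > 0:                      # color block present: return its centroid
            results[feature] = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return results   # e.g. {"thumb": (x, y), "index_finger": (x, y)}
```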
  • The processor is further configured to: when the user operates while wearing a glove equipped with a sensor chip, receive the sensing signal sent by the glove;
  • the user's specific operation finger/gesture and the like can be determined more quickly and conveniently according to the sensing signal of the glove.
  • Different sensing chips may be used, such as NFC (near-field communication) chips;
  • the input signal of the button can be detected according to the glove sensing signal.
  • The recognition result is thus more accurate and the robustness higher.
  • The corresponding operation instruction is determined by means of image acquisition together with button-press detection, which solves the problem that the human-computer interaction signal input mode in the prior art is single and inefficient.
  • With the technical solution provided by the present invention, it is possible to distinguish in finer detail which hand or foot the user uses, which finger is used, and with which gesture the button is pressed. Pressing the same button with different limb information generates different response signals, and different limb information can be combined with different buttons to define a large number of shortcut operations. That is, the present invention defines a completely new way of interaction that enables operation instructions to be issued quickly. Compared with the prior art, the invention improves signal input efficiency, enriches the signal input modes, and improves the user experience.
  • An embodiment of the present invention provides a method for inputting a signal. As shown in FIG. 6, the method includes:
  • The signal input device captures the movement path of the user's limb and collects the limb image, where the user's limbs include the hands and feet;
  • S203: Receive an input signal after the user presses the one or more buttons.
  • For the buttons and sensors, refer to the examples described in Embodiment 1; the details are not repeated here.
  • Steps S202 and S203 are not limited in order: the button-press signal may be received first and the limb image recognized afterwards, or vice versa; the final processing result of the embodiment of the present invention is not affected.
  • The feature recognition performed on the limb image in S202 may include:
  • Positioning the hand to identify the positional relationship between the hand position and the one or more buttons; for example, the hand position may be directly above the button, or on the left or right side of the button;
  • Locating the hand can be implemented by conventional image processing, for example an algorithm using binarization and contour recognition to obtain the shape and position of the hand in an image. Because the image sensor can be fixed in one place and shoot periodically, the background of the photo (everything other than the hand can be defined as the background) is constant; the only change is the position of the hand. The hand can therefore be located, and its movement identified, from the changes between successive photos.
  • The hand shape is identified to confirm whether the hand is a left hand or a right hand. For example, when the hand is positioned above the button, the shape of the hand is recognized to distinguish whether the user is pressing, or has pressed, the button with the left hand or the right hand; combined with the signal that the button is pressed, a different operation command is executed depending on whether the user presses with the left or the right hand.
  • The operation command can be the output of a text/sound/image, or a command of a certain program, and can be customized by the user or preset by the signal input device. For example, when the user presses the button with the left hand a piece of text is output, and when the user presses with the right hand a piece of sound is output.
  • The signal input device may preset a correspondence table in which different fingers and different buttons are arranged and combined, with a different output operation instruction for each combination.
  • For example, the index finger and the thumb together with buttons A and B can form 7 different states: no press; the index finger alone pressing button A; the index finger alone pressing button B; the thumb alone pressing button A; the thumb alone pressing button B; the index finger pressing button A while the thumb presses button B; and the index finger pressing button B while the thumb presses button A. These correspond to 6 different operation commands (no command when neither A nor B is pressed), and each operation command can be defined by the user or preset in advance.
  • Furthermore, the thumb of the left hand and the thumb of the right hand may press the same button with different operational responses. That is, the left and right hands, the index finger and the thumb, and buttons A and B can be combined into an even more complex correspondence table to output more complex operational responses.
  • The embodiment of the present invention can also distinguish the hand movement trajectory mentioned above: moving onto the button from top to bottom, from bottom to top, from obliquely above to obliquely below, and so on; the operation instructions corresponding to different movement directions can also differ.
  • The gesture of the hand is recognized. In addition to the above-mentioned left/right-hand recognition, finger recognition, and hand-movement-trajectory recognition, the embodiment of the present invention can also implement gesture recognition.
  • Gesture recognition can be similar to Apple's multi-touch technology, with interactions such as pinching (zoom-out/zoom-in instructions) and multi-finger rotation (picture-rotation instructions).
  • Unlike a touch screen, the present invention does not need to capture multi-point movement tracks on the screen; it can capture multiple frames and identify the hand shape and its change across the frames, thereby determining the user's current gesture.
  • Gesture recognition is a comprehensive technique that combines finger recognition with finger-movement-trajectory recognition.
  • The gesture recognition can be implemented with existing machine-learning algorithms and is not described further here.
  • The processor is configured to generate an operation instruction corresponding to the combination of the recognition result and the input signal; specifically:
  • A corresponding operation instruction is generated.
  • A typical example is the button on a dance mat. The image sensor can locate the foot to distinguish whether the user steps on a button with the left foot or the right foot.
  • The movement trajectory of the foot when the button is pressed can also be determined from the footstep's motion, for example from top to bottom, from left to right, or obliquely, with different operation instructions for different directions.
  • The new interaction method defined by the present invention has wide applications.
  • In gaming, a game controller usually has only a few buttons; different gesture/finger combinations pressed on different buttons provide shortcut keys for different in-game actions, giving the game high playability and a good user experience. In the field of education, when the user uses different fingers/gestures to click/press/touch different buttons, different teaching content or teaching effects can be triggered; for example, in drawing, lines applied to the LCD screen with the index finger and lines applied with the thumb can differ in color and thickness.
  • The embodiment of the present invention further includes: acquiring the force with which the user presses the one or more buttons, and generating the corresponding operation instruction according to the pressing force, the input signal, and the correspondence between the recognition result and the operation instruction.
  • Different force levels can be divided according to different force thresholds, and each force level can correspond to a different operation instruction, similar to Apple's 3D Touch.
  • The embodiment of the present invention further includes: collecting the user's fingerprint and identifying the user's identity; acquiring the user identification information, and combining the user identification information, the input signal, and the correspondence between the recognition result and the operation instruction to generate the corresponding operation instruction. When the user presses, the fingerprint is automatically identified, thereby determining which user is operating, and the corresponding operation instruction is generated accordingly.
  • Similarly, face recognition may be used to acquire the user identity information, and the corresponding operation instruction is generated by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • Face recognition can be applied to voting, for example in an election, an entertainment program, or a public vote in another program; current voting systems may suffer from malicious voting and missed-vote behavior.
  • By combining face recognition with button-press voting, it is possible to identify which user pressed the current button, so the votes are accurately matched to users, which facilitates statistics.
  • The embodiment of the present invention further includes: emitting laser light onto the limb through the laser emitter and receiving the reflected beam; when the reflected light is received in the same direction, a reflected-light response signal is generated from the reflected light.
  • The response signal is used, with triangulation, to calculate the distances between the signal input device and the sections of the user's limb;
  • The method uses a two-dimensional ranging technique to measure the distance between the image sensor and the user's limb.
  • The principle of two-dimensional ranging is to send a point beam or a linear beam onto the user's limb with a laser, receive the light reflected from the limb with an image sensor (for example, an infrared sensor), and calculate the current distance between the limb and the image sensor by triangulation. Relationships between different distances and different operation commands can then be defined, and the corresponding operation instruction is generated by combining the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • Triangulation is a commonly used measurement method in the field of optical ranging. The method is as follows: from the centroid position of the laser-spot region in the image, together with the known relative angle and spacing between the laser emitting device and the image sensor, the distance from the target to the image sensor can be estimated.
  • When the emitted laser light consists of linear beams in different directions and the reflected light is received from different directions, the plurality of reflected-light response signals are received and, by triangulation, the distances between the signal input device and different sections of the user's limb are calculated;
  • the corresponding operation instruction is generated by combining the gesture information, the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • The above solution is an optical three-dimensional ranging technology. It differs from two-dimensional ranging in that the laser emitter can emit laser light at different angles via a rotating shaft, so that the image sensor collects reflected light from different directions.
  • Triangulation can then be used to measure the three-dimensional distances of different sections of the limb, and the three-dimensional data of the sections can be superimposed in three-dimensional space to complete the three-dimensional modeling.
  • The image sensor receives the different reflected beams, and three-dimensional image reconstruction can be obtained from them, yielding more precise limb information.
  • Feature recognition is performed using the limb image, specifically:
  • a correspondence between a color block and a limb feature is established; for example, one color block represents the user's thumb, and another color block represents the user's index finger or the like.
  • the limb features corresponding to the color blocks are determined, and the recognition result is output.
  • Recognition parameters can be added to speed up recognition. For example, when a user applies nail polish of a particular color, or wears a glove of a particular color (or with different color blocks), the processor positions and tracks that particular color, determines its RGB value, and, according to the correspondence between the RGB value and the user's limb features, determines the limb feature represented by the color block, thereby identifying the user's limb features more quickly and efficiently.
  • The method further includes: receiving a sensing signal sent by the glove when the user operates while wearing a glove equipped with a sensor chip;
  • the user's specific operation finger/gesture and the like can be determined more quickly and conveniently according to the sensing signal of the glove.
  • Different sensing chips may be used, such as NFC (near-field communication) chips;
  • the input signal of the button can be detected according to the glove sensing signal.
  • The recognition result is thus more accurate and the robustness higher.
  • The corresponding operation instruction is determined by means of image acquisition together with button-press detection, which solves the problem that the human-computer interaction signal input mode in the prior art is single and inefficient.
  • With the technical solution provided by the present invention, it is possible to distinguish in finer detail which hand or foot the user uses, which finger is used, and with which gesture the button is pressed. Pressing the same button with different limb information generates different response signals, and different limb information can be combined with different buttons to define a large number of shortcut operations. That is, the present invention defines a completely new way of interaction that enables operation instructions to be issued quickly. Compared with the prior art, the invention improves signal input efficiency, enriches the signal input modes, and improves the user experience.
  • The sequence numbers of the processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and the sequence numbers should not constitute any limitation on the implementation of the embodiments of the present application.
  • The modules and method steps of the various examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a signal input device, comprising: one or more buttons; one or more sensors used to acquire images; and one or more processors, the processor(s) being configured to: control the sensor to capture the movement trajectory of a user's limbs and acquire an image of the limbs, the user's limbs comprising the user's four limbs; perform feature recognition on the limb image and obtain the recognition result; receive an input signal after the user presses the one or more buttons; and combine the feature recognition result with the input signal, generate an operation instruction corresponding to the combination of the result and the input signal, and output it. The invention accordingly also relates to a signal input method, which solves the problem in the existing technology that the input mode for human-machine interaction signals is monotonous and inefficient.
PCT/CN2018/078642 2018-03-09 2018-03-09 Method and device for signal input WO2019169644A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/078642 WO2019169644A1 (fr) 2018-03-09 2018-03-09 Method and device for signal input
CN201880091030.4A CN112567319A (zh) 2018-03-09 2018-03-09 Method and device for signal input

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/078642 WO2019169644A1 (fr) 2018-03-09 2018-03-09 Method and device for signal input

Publications (1)

Publication Number Publication Date
WO2019169644A1 (fr) 2019-09-12

Family

Family ID: 67846832

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/078642 WO2019169644A1 (fr) 2018-03-09 2018-03-09 Method and device for signal input

Country Status (2)

Country Link
CN (1) CN112567319A (fr)
WO (1) WO2019169644A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102902354A (zh) * 2012-08-20 2013-01-30 华为终端有限公司 Terminal operation method and terminal
CN103052928A (zh) * 2010-08-04 2013-04-17 惠普发展公司,有限责任合伙企业 System and method enabling multi-display input
CN103176594A (zh) * 2011-12-23 2013-06-26 联想(北京)有限公司 Text operation method and system
US20130215038A1 (en) * 2012-02-17 2013-08-22 Rukman Senanayake Adaptable actuated input device with integrated proximity detection
CN104899494A (zh) * 2015-05-29 2015-09-09 努比亚技术有限公司 Operation control method based on multifunction keys and mobile terminal
CN105353873A (zh) * 2015-11-02 2016-02-24 深圳奥比中光科技有限公司 Gesture control method and system based on three-dimensional display
CN106227336A (zh) * 2016-07-15 2016-12-14 深圳奥比中光科技有限公司 Method and device for establishing somatosensory mapping

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6614422B1 (en) * 1999-11-04 2003-09-02 Canesta, Inc. Method and apparatus for entering data using a virtual input device
US20130257734A1 (en) * 2012-03-30 2013-10-03 Stefan J. Marti Use of a sensor to enable touch and type modes for hands of a user via a keyboard


Also Published As

Publication number Publication date
CN112567319A (zh) 2021-03-26

Similar Documents

Publication Publication Date Title
US9600078B2 (en) Method and system enabling natural user interface gestures with an electronic system
JP6348211B2 (ja) コンピュータ装置の遠隔制御
US11009961B2 (en) Gesture recognition devices and methods
US9760214B2 (en) Method and apparatus for data entry input
US10209881B2 (en) Extending the free fingers typing technology and introducing the finger taps language technology
US9274551B2 (en) Method and apparatus for data entry input
US8593402B2 (en) Spatial-input-based cursor projection systems and methods
US8180114B2 (en) Gesture recognition interface system with vertical display
TW201814438A (zh) 基於虛擬實境場景的輸入方法及裝置
KR20100106203A (ko) 멀티 텔레포인터, 가상 객체 표시 장치, 및 가상 객체 제어 방법
CN101901106A (zh) 用于数据输入的方法及装置
US8948493B2 (en) Method and electronic device for object recognition, and method for acquiring depth information of an object
KR20120068253A (ko) 사용자 인터페이스의 반응 제공 방법 및 장치
TW201135517A (en) Cursor control device, display device and portable electronic device
CN108027648A (zh) 一种可穿戴设备的手势输入方法及可穿戴设备
US20130229348A1 (en) Driving method of virtual mouse
TW201439813A (zh) 顯示設備及其控制系統和方法
Grady et al. PressureVision++: Estimating Fingertip Pressure from Diverse RGB Images
WO2019169644A1 (fr) Procédé et dispositif d'entrée de signal
KR101860138B1 (ko) 오브젝트 생성 및 오브젝트의 변환이 가능한 동작 인식 센서를 이용한 3차원 입력 장치
KR101506197B1 (ko) 양손을 이용한 동작인식 입력방법
Annabel et al. Design and Development of Multimodal Virtual Mouse
CN108021238A (zh) 新概念盲打键盘
JP2017211739A (ja) ユーザインターフェース装置およびユーザインターフェースプログラム
Bhowmik 39.2: invited paper: natural and intuitive user interfaces: technologies and applications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18908395

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the EP Bulletin as the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17/12/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18908395

Country of ref document: EP

Kind code of ref document: A1