CN112567319A - Signal input method and device

Info

Publication number
CN112567319A
CN112567319A (application CN201880091030.4A)
Authority
CN
China
Prior art keywords
user
operation instruction
limb
hand
input signal
Prior art date
Legal status
Pending
Application number
CN201880091030.4A
Other languages
Chinese (zh)
Inventor
宋卿
葛凯麟
Current Assignee
Bile Smart Technology Beijing Co ltd
Original Assignee
Bile Smart Technology Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Bile Smart Technology Beijing Co ltd
Publication of CN112567319A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

The invention discloses a signal input device, comprising: one or more keys; one or more sensors for acquiring images; and one or more processors configured to: control the sensor to capture the movement track of a user limb and acquire the limb image, wherein the user limb comprises the user's four limbs; perform feature recognition on the limb image to obtain a recognition result; receive an input signal after the user presses the one or more keys; and combine the feature recognition result with the input signal, generate an operation instruction corresponding to the combination of the two, and output the operation instruction. Correspondingly, the invention discloses a signal input method. Both solve the problems of the single human-computer interaction signal input mode and low efficiency in the prior art.

Description

Signal input method and device
Technical Field
The invention belongs to the technical field of information, and particularly relates to a signal input method and device.
Background
In the field of traditional human-computer interaction, a user mainly inputs signals by touch, keys, or voice control. For example, on a traditional numeric keypad the user types by pressing physical keys (or presses virtual keys on a touch screen), and on an input device such as a dance mat the user presses the corresponding keys with the feet.
However, although the traditional human-computer interaction modes are simple enough that a user can complete signal input with the four limbs or by voice, some quick operations require more than one interaction. In the field of intelligent terminals in particular, the keys are few but the operations are complex, and the user often needs multiple operations to find the desired function. A more efficient signal input method is therefore urgently needed to solve the problems of the single signal input mode and low efficiency in current human-computer interaction.
Disclosure of Invention
The invention provides a signal input method and a signal input device, which solve the problems of the single human-computer interaction signal input mode and low efficiency in the prior art.
In order to achieve the above object, the present invention provides a signal input apparatus, comprising:
one or more keys;
one or more sensors for acquiring images;
one or more processors configured to:
controlling the sensor to capture the movement track of a user limb, and acquiring the limb image, wherein the user limb comprises the user's four limbs;
performing feature recognition on the limb image to obtain a recognition result;
receiving an input signal after the user presses the one or more keys;
and combining the feature recognition result with the input signal, generating an operation instruction corresponding to the combination of the two, and outputting the operation instruction.
Optionally, if the limb is a hand of a user, the processor is configured to perform feature recognition on the limb image, and the feature recognition includes:
locating the hand position, and identifying the position relation between the hand position and the one or more keys;
identifying the hand shape when a hand position is above the one or more keys, confirming that the hand is left or right handed, and/or,
identifying a particular finger above the one or more keys, and/or,
recognizing the hand movement trajectory, and/or,
recognizing a gesture of the hand;
the generating of the operation instruction corresponding to the combination of the result and the input signal comprises:
establishing a correspondence between operation instructions and the combination of the identified hand-to-key positional relation, the hand shape, and the input signal;
and generating a corresponding operation instruction according to the corresponding relation.
Optionally, the signal input device further comprises a pressure sensor for acquiring the force with which the user presses the one or more keys;
the processor is further configured to:
and acquiring the pressing force collected by the pressure sensor, and generating the corresponding operation instruction according to the correspondence among the pressing force, the input signal, the recognition result, and operation instructions.
Optionally, the signal input device further comprises a fingerprint sensor for collecting a fingerprint of the user and identifying the identity of the user;
the processor is further configured to:
acquiring user identity identification information, and generating the corresponding operation instruction from the user identity identification information, the input signal, and the recognition result according to their correspondence with operation instructions;
or, alternatively,
the image sensor is further configured to: collecting face information;
the processor is further configured to:
identifying the user identity according to the collected face information;
and generating the corresponding operation instruction from the user identity identification information, the input signal, and the recognition result according to their correspondence with operation instructions.
Optionally, the signal input device further includes a laser emitter for continuously emitting laser light, with an included angle formed between the laser emitter and the image sensor; the image sensor is a laser sensor, and when the laser emitter emits laser light onto the user limb, the image sensor receives the reflected laser light and generates one or more reflected-light response signals;
the processor is further configured to:
when the image sensor receives reflected light in the same direction, receiving the reflected-light response signal, and calculating the distance between the image sensor and a tangent plane of the user limb by triangulation according to the reflected-light response signal;
generating the corresponding operation instruction from the distance information, the input signal, and the recognition result according to their correspondence with operation instructions; or, alternatively,
when the laser emitter emits linear beams in different directions and the image sensor receives reflected light in different directions, receiving the multiple reflected-light response signals, and calculating the distances between the image sensor and different sections of the user limb by triangulation according to the reflected-light response signals;
performing three-dimensional modeling of the user limb in a three-dimensional space with the image sensor as the origin, using the calculated distances between the image sensor and the different tangent planes of the user limb;
performing gesture reconstruction on the three-dimensional modeling information, and recognizing the current gesture information and the distance between the hand and the image sensor;
and generating the corresponding operation instruction from the gesture information, the distance information, the input signal, and the recognition result according to their correspondence with operation instructions.
Optionally, the performing feature recognition on the limb image further includes:
establishing a corresponding relation between the color blocks and the limb characteristics;
detecting a specific color block on a user limb;
and determining the limb characteristics corresponding to the color blocks according to the RGB values of the detected color blocks, and outputting an identification result.
Optionally, the processor is further configured to: when a user wears the glove with the sensing chip to operate, receiving a sensing signal sent by the glove;
and generating and outputting an operation instruction corresponding to the combination of the sensing signal, the recognition result, and the input signal, based on the sensing signal, the feature-recognition result of the limb image, and the input signal.
The embodiment of the invention also provides a signal input method, which comprises the following steps:
the signal input device captures the movement track of the user limb and collects the limb image, wherein the user limb comprises the four limbs of the user;
performing feature recognition on the limb image to obtain a recognition result;
receiving an input signal after the user presses the one or more keys;
and combining the feature recognition result with the input signal, generating an operation instruction corresponding to the combination of the two, and outputting the operation instruction.
Optionally, if the limb is a hand of a user, performing feature recognition on the limb image, including:
locating the hand position, and identifying the position relation between the hand position and the one or more keys;
identifying the hand shape when a hand position is above the one or more keys, confirming that the hand is left or right handed, and/or,
identifying a particular finger above the one or more keys, and/or,
recognizing the hand movement trajectory, and/or,
recognizing a gesture of the hand;
the generating of the operation instruction corresponding to the combination of the result and the input signal comprises:
establishing a correspondence between operation instructions and the combination of the identified hand-to-key positional relation, the hand shape, and the input signal;
and generating a corresponding operation instruction according to the corresponding relation.
Optionally, the method further comprises:
and acquiring the force with which the user presses the key, and generating the corresponding operation instruction according to the correspondence among the pressing force, the input signal, the recognition result, and operation instructions.
Optionally, the method further comprises:
and performing face recognition or fingerprint recognition to obtain user identity identification information, and generating the corresponding operation instruction from the user identity identification information, the input signal, and the recognition result according to their correspondence with operation instructions.
Optionally, the method further comprises:
emitting laser light onto the user limb, receiving the reflected-light response signal when reflected light in the same direction is received, and calculating the distance between the signal input device and a tangent plane of the user limb by triangulation according to the reflected-light response signal;
generating the corresponding operation instruction from the distance information, the input signal, and the recognition result according to their correspondence with operation instructions; or, alternatively,
when the emitted laser consists of linear beams in different directions and reflected light in different directions is received, receiving the multiple reflected-light response signals, and calculating the distances between the signal input device and different sections of the user limb by triangulation according to the reflected-light response signals;
performing three-dimensional modeling of the user limb in a three-dimensional space with the image sensor as the origin, using the calculated distances between the image sensor and the different tangent planes of the user limb;
performing gesture reconstruction on the three-dimensional modeling information, and recognizing the current gesture information and the distance between the hand and the image sensor;
and generating the corresponding operation instruction from the gesture information, the distance information, the input signal, and the recognition result according to their correspondence with operation instructions.
Optionally, the performing feature recognition on the limb image further includes:
establishing a corresponding relation between the color blocks and the limb characteristics;
detecting a specific color block on a user limb;
and determining the limb characteristics corresponding to the color blocks according to the RGB values of the detected color blocks, and outputting an identification result.
Optionally, the method further comprises:
when a user wears the glove with the sensing chip to operate, receiving a sensing signal sent by the glove;
and generating and outputting an operation instruction corresponding to the combination of the sensing signal, the recognition result, and the input signal, based on the sensing signal, the feature-recognition result of the limb image, and the input signal.
The method and the system of the embodiment of the invention have the following advantages:
in the embodiment of the invention, the signal input device synchronously or asynchronously acquires the user limb information and the key input signal and outputs the corresponding operation instruction according to the identified corresponding relation among the user limb information, the key input signal and the operation instruction. By adopting the technical scheme provided by the invention, the pressing operation of the key can be performed by more finely adopting which hand/foot, which finger and which gesture are adopted by the user, and the like, the response signals generated when different limb information presses the same key are different, and different limb information and different keys can be combined for use, so that a large number of quick operation modes can be defined. Namely, the invention defines a brand-new interaction mode and can realize the quick operation of the operation instruction. Compared with the prior art, the invention improves the signal input efficiency, enriches the signal input modes and improves the user experience.
Drawings
FIG. 1 is a schematic diagram of a signal input device according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of identifying left and right hand pressed keys in an embodiment of the present invention;
FIG. 3 is a diagram illustrating an embodiment of the present invention for identifying a specific finger pressed key;
FIG. 4 is a schematic diagram of gesture recognition according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of three-dimensional modeling after two-dimensional ranging in an embodiment of the invention;
fig. 6 is a flow chart of a signal input method in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
To achieve the above object, as shown in fig. 1, the present invention provides a signal input device 11, which includes:
one or more keys 101, one or more sensors 102 for capturing images, one or more processors 103, the one or more processors 103 for:
controlling the sensor 102 to capture the movement track of a user limb, and acquiring the limb image, wherein the user limb comprises the user's four limbs, i.e. the user's hands or feet.
Performing feature recognition on the limb image to obtain a recognition result;
receiving an input signal after the user presses the one or more keys 101;
and combining the feature recognition result with the input signal, generating an operation instruction corresponding to the combination of the two, and outputting the operation instruction.
The keys 101 may be physical keys or virtual keys, and their number is not limited; for example, they may be one or more keys on a conventional numeric keypad, a single key, or one or more virtual keys displayed on a touch screen.
It should be noted that the key may be the whole or a part of a touch screen or display screen, so that wherever the user presses on the display screen, the user is deemed to have touched the key. For example, in conventional drawing software the user draws with a finger on the drawing display area of the touch screen; the user is then determined to have pressed a drawing key, and the position information of the touch point after the press is acquired.
The sensor 102 may be a visible-light or non-visible-light image sensor, such as a CCD or CMOS image sensor, or an infrared/ultraviolet sensor for receiving infrared/ultraviolet light in embodiments of the present invention.
When the limb is a hand of a user, the step of performing feature recognition on the limb image by the processor 103 may specifically be:
Locating the hand position, and identifying the positional relation between the hand and the one or more keys. For example, the hand may be directly above a key, or to its left or right. Hand localization can be implemented with traditional image processing, for example a binarization and contour-recognition algorithm that extracts the hand shape and its position within a picture. Because the image sensor can be fixed and shoot the same scene periodically, the background of the picture (everything except the hand) does not change; the only change is the position of the hand in the picture. The hand can therefore be located, and its movement track recognized, from the change of the hand across different pictures.
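As an illustration only (not part of the patent text), this localization step could be sketched in Python with OpenCV roughly as follows; the frame source, blur kernel, and threshold values are assumptions:

    import cv2

    background = None  # first frame of the fixed scene, captured without a hand

    def locate_hand(frame):
        """Return the bounding box (x, y, w, h) of the hand, or None."""
        global background
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)
        if background is None:
            background = gray                # everything except the hand is static
            return None
        diff = cv2.absdiff(background, gray)                       # change = hand
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # binarization
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)    # contour step
        if not contours:
            return None
        hand = max(contours, key=cv2.contourArea)  # largest moving region
        return cv2.boundingRect(hand)

    # Comparing the returned boxes across successive frames gives the movement
    # track; comparing a box with known key coordinates gives the hand-to-key
    # positional relation.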
Identifying the hand shape when the hand is above the one or more keys, and confirming whether it is the left or right hand. For example, when the hand is above a key, the hand shape can be recognized to distinguish whether the user is about to press, or has pressed, the key with the left or right hand; combined with the key-press signal, different operation instructions can then be issued for a left-hand press and a right-hand press. An operation instruction may be a segment of text, sound, or image output, or a command of some program, and may be user-defined or preset in the signal input device. For example, pressing a key with the left hand outputs a piece of text, while pressing it with the right hand outputs a piece of sound. Fig. 2 is a schematic diagram of left/right-hand identification: as shown in fig. 2, when the user is about to press a key 101 on the keyboard 100, the image sensor 102 captures the hand in real time and distinguishes whether the user used the left or right hand when, or before, pressing the key.
And/or,
Identifying the specific finger above the one or more keys. That is, not only the left and right hands but also individual fingers can be distinguished; for example, the thumb and the index finger pressing the same key can trigger different operation instructions. The same key combined with different fingers corresponds to an operation instruction in advance. Alternatively, the signal input device may be preset with a correspondence table in which different fingers and different keys are permuted and combined, each combination producing a different operation instruction. For example, the index finger and the thumb together with keys A and B can form 7 different states: no press; the index finger alone pressing A; the index finger alone pressing B; the thumb alone pressing A; the thumb alone pressing B; the index finger pressing A while the thumb presses B; and the index finger pressing B while the thumb presses A. These correspond to 6 different operation instructions (no instruction when neither A nor B is pressed), each of which can be user-defined or preset. In addition, the left thumb and the right thumb pressing the same key can produce different responses. That is, left hand, right hand, index finger, thumb, and keys A and B can be combined into a more complex correspondence table to output more complex responses (an illustrative sketch of such a table is given below, after the instruction-generation steps). Fig. 3 shows a concrete example of the user typing on the keyboard 100: the image sensor 102 captures the hand movement track and hand image in real time during typing, distinguishes the finger used for each key press, outputs different response instructions, and displays them on the display 105.
And/or,
Recognizing the hand movement track. Besides the left/right hand and the fingers, embodiments of the present invention can distinguish the movement track of the hand, such as moving from top to bottom until above the key, from bottom to top until above the key, or diagonally downward until above the key; different movement directions may also correspond to different operation instructions.
And/or,
Recognizing a gesture of the hand. In addition to left/right-hand recognition, finger recognition, and movement-track recognition, embodiments of the present invention can also implement gesture recognition. Gesture recognition can be similar to Apple's multi-touch technology, supporting interactions such as pinching (zoom-out/zoom-in instructions, as shown in fig. 4) and multi-finger rotation (picture-rotation instructions). Unlike Apple's multi-touch, however, there is no need to capture the movement tracks of multiple points on a touch screen; instead, multiple frames are captured and the changes in hand shape and form across those frames are recognized to determine the user's current gesture. For example, when the user's hand is detected above a certain key and the user makes a pinch gesture before or after pressing it, the corresponding operation instruction can be determined from the pinch gesture plus the (physical or virtual) key press. Gesture recognition is a comprehensive technique combining finger recognition with finger-movement-track recognition; it can be implemented with existing machine-learning algorithms and is not detailed here. An illustrative pinch-detection sketch follows.
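A minimal sketch of multi-frame pinch detection under the idea above; using the hand bounding box's diagonal as a crude spread measure and a fixed shrink threshold are both illustrative assumptions, not the patent's specification:

    import math

    def is_pinch(boxes, shrink_ratio=0.6):
        """boxes: (x, y, w, h) hand boxes from successive frames.
        True if the hand's spread shrank enough to count as a pinch."""
        if len(boxes) < 2:
            return False
        def spread(box):
            _, _, w, h = box
            return math.hypot(w, h)      # bounding-box diagonal as "spread"
        return spread(boxes[-1]) < shrink_ratio * spread(boxes[0])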
The processor is configured to generate an operation instruction corresponding to the combination of the result and the input signal, and specifically may be:
establishing a correspondence between operation instructions and the combination of the identified hand-to-key positional relation, the hand shape, and the input signal;
and generating a corresponding operation instruction according to the corresponding relation.
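For illustration, the index-finger/thumb and key A/B example above maps onto a correspondence table like the following hypothetical sketch; the instruction names are placeholders, not values from the patent:

    # Six pressed states map to six operation instructions; the seventh state
    # (nothing pressed) yields no instruction.
    INSTRUCTION_TABLE = {
        ("index", "A"): "instruction_1",
        ("index", "B"): "instruction_2",
        ("thumb", "A"): "instruction_3",
        ("thumb", "B"): "instruction_4",
        frozenset({("index", "A"), ("thumb", "B")}): "instruction_5",
        frozenset({("index", "B"), ("thumb", "A")}): "instruction_6",
    }

    def lookup(presses):
        """presses: set of (finger, key) pairs detected at the same moment."""
        if not presses:
            return None                  # neither A nor B pressed: no instruction
        if len(presses) == 1:
            return INSTRUCTION_TABLE.get(next(iter(presses)))
        return INSTRUCTION_TABLE.get(frozenset(presses))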
Similarly, when the limb is a foot, for a typical key such as one on a dance mat, the image sensor can locate the position of the foot and distinguish whether the user steps on a key with the left or right foot.
The novel interaction mode defined by the invention has wide application. In the field of games, for example, a game handle usually has only a few keys; pressing different keys with different gesture/finger combinations can provide different game-character shortcuts, giving high playability and a good user experience. In the field of education, when the user clicks/presses/touches different keys with different fingers/gestures, different teaching content or teaching effects can be triggered; in drawing, for example, painting on a liquid-crystal screen with the index finger versus the thumb can produce lines of different colors and thicknesses.
Optionally, the signal input device further comprises a pressure sensor for acquiring the force with which the user presses the one or more keys;
the processor is further configured to:
and acquiring the pressing force collected by the pressure sensor, and generating the corresponding operation instruction according to the correspondence among the pressing force, the input signal, the recognition result, and operation instructions.
Application-grade pressure sensors are already widespread in the market, and embodiments of the present invention can further build in a pressure sensor (for example, inside a key) to collect the force with which the user presses the key. The force is divided into low, medium, and high levels according to different thresholds, and each level can correspond to a different operation instruction, similar to Apple's 3D Touch. Embodiments of the present invention creatively provide a correspondence table among the pressing force collected by the pressure sensor, the input signal, the recognition result, and operation instructions, and output the corresponding operation instruction from the collected parameters according to this correspondence.
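A small sketch of the force grading described above; the threshold values and units are illustrative assumptions:

    def pressure_level(force, low=0.5, high=2.0):
        """Map a raw pressure-sensor reading (e.g. newtons) to a level name."""
        if force < low:
            return "low"
        if force < high:
            return "medium"
        return "high"

    # The (level, key, recognition result) triple then indexes the
    # correspondence table that selects the operation instruction.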
Optionally, the signal input device further comprises a fingerprint sensor for collecting a fingerprint of the user and identifying the identity of the user;
the processor is further configured to:
acquiring user identity identification information, and generating the corresponding operation instruction from the user identity identification information, the input signal, and the recognition result according to their correspondence with operation instructions. Similar to the pressure sensor, a fingerprint sensor can be added, and it may likewise be built into a key: when the user presses, the fingerprint is automatically recognized, determining exactly which user is operating, and the corresponding operation instruction is generated accordingly.
Or, alternatively,
the image sensor is further configured to: collecting face information;
the processor is further configured to:
identifying the user identity according to the collected face information;
and generating the corresponding operation instruction from the user identity identification information, the input signal, and the recognition result according to their correspondence with operation instructions.
This is similar to the previous scheme, except that user identity is acquired by face recognition through the image sensor 102. Face recognition belongs to the prior art, and its specific implementation is not repeated here.
For example, in the embodiment of the present invention, face recognition can be applied to voting, such as elections, entertainment programs, or audience voting in other programs. Current voting systems can suffer from malicious repeat votes and missed votes by users, which makes statistics very difficult; tying each key press to a recognized identity avoids such behavior.
Optionally, the signal input device further includes a laser emitter for continuously emitting laser light, with an included angle formed between the laser emitter and the image sensor; the image sensor is a laser sensor, and when the laser emitter emits laser light onto the user limb, the image sensor receives the reflected laser light and generates one or more reflected-light response signals. The laser emitter may emit dot-matrix light or a linear beam; for example, dot-matrix light can be expanded into one or more linear beams by a built-in beam expander. The linear beam is preferred in embodiments of the present invention because it collects more data and measures distance more accurately than a dot-matrix beam.
The processor is further configured to:
when the image sensor receives reflected light in the same direction, receiving the reflected-light response signal, and calculating the distance between the image sensor and a tangent plane of the user limb by triangulation according to the reflected-light response signal;
generating the corresponding operation instruction from the distance information, the input signal, and the recognition result according to their correspondence with operation instructions;
the method adopts a two-dimensional distance measurement technology to realize the distance measurement between the image sensor and the limb of the user. The principle of two-dimensional distance measurement is to send a point beam or a linear beam to the limbs of a user through laser, receive reflected light emitted by the laser to the limbs of the user through an image sensor (such as an infrared sensor), calculate the distance between the limbs and the image sensor at the current moment by utilizing a trigonometry, and define the relationship between different distances and different operation instructions according to the distance. And generating the corresponding operation instruction by combining the distance information, the input signal and the corresponding relation between the identification result and the operation instruction. The triangulation method is a commonly used measuring method in the field of optical ranging, and comprises the following steps: by calculating the position of the center of gravity of the area and the known relative angles and distances between the laser emitting device and the image sensor, the distance of the target from the image sensor can be calculated. The basic measurement formula of the trigonometry is z ═ b × f/x; wherein b represents the distance between the laser emitting device and the image sensor, f is the focal length of the lens used by the image sensor, x is the barycentric position of the column coordinate of the calculated reflected light projected on the image sensor, and z is the measured distance, and the measured distance is only related to the position of the barycentric in the column direction and is not related to the number of lines, so that the photosensitive area array can be mainly arranged in the column direction, and only one line or a very narrow number of lines can be used in the line direction; in addition, the error formula of the measurement is e 1/(b f/(n z) +1), where n represents the error of the gravity center extraction, and it can be seen that the measurement error e is inversely proportional to b, f and is proportional to n, z. Therefore, in the case where b, f, and n are constant, it is necessary to select a relatively telephoto lens to reduce errors of different distances.
Or, alternatively,
when the laser emitter emits linear beams in different directions and the image sensor receives reflected light in different directions, receiving the multiple reflected-light response signals, and calculating the distances between the image sensor and different sections of the user limb by triangulation according to the reflected-light response signals;
performing three-dimensional modeling of the user limb in a three-dimensional space with the image sensor as the origin, using the calculated distances between the image sensor and the different tangent planes of the user limb;
performing gesture reconstruction on the three-dimensional modeling information, and recognizing the current gesture information and the distance between the hand and the image sensor;
and generating the corresponding operation instruction from the gesture information, the distance information, the input signal, and the recognition result according to their correspondence with operation instructions.
This scheme is an optical three-dimensional ranging technique. It differs from two-dimensional ranging in that the laser emitter can emit laser light at different angles through a rotating shaft, so the image sensor collects reflected light from different directions. The three-dimensional distances of different sections of the limb can be measured by triangulation, and the per-section data superposed in three-dimensional space to complete the three-dimensional model, as shown in fig. 5. When different linear beams strike the limb surface, the image sensor receives the different reflected beams, which form images on the sensor panel and generate reflected-light response signals; a three-dimensional image can be reconstructed from these different reflections, yielding more accurate limb information.
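A rough sketch of superposing per-section range profiles into a point cloud with the image sensor at the origin; the coordinate conventions and scan geometry here are simplifying assumptions, not the patent's specification:

    import numpy as np

    def sections_to_points(profiles, angles, lateral):
        """profiles: list of 1-D distance arrays, one per linear beam;
        angles: emission angle of each beam (radians);
        lateral: lateral coordinate of each sample along a beam line."""
        cloud = []
        for dist, a in zip(profiles, angles):
            y = dist * np.sin(a)             # height, from the beam angle
            z = dist * np.cos(a)             # depth toward the limb
            cloud.append(np.column_stack([lateral, y, z]))
        return np.vstack(cloud)              # N x 3 points, sensor at origin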
Optionally, the feature recognition of the limb image may specifically be:
establishing a corresponding relation between the color blocks and the limb characteristics; for example, one color block represents the user's thumb, another color block represents the user's index finger, and so on.
Detecting a specific color block on a user limb;
and determining the limb characteristics corresponding to the color blocks according to the RGB values of the detected color blocks, and outputting an identification result.
Beyond recognizing a conventional hand image, a recognition parameter (color) can be added to speed up recognition. For example, when the user's nails are painted with nail polish of a specific color, or the user wears gloves of a specific color (or with different color blocks), the processor locates and tracks that specific color, determines its RGB value, and determines the limb feature represented by the color block from the correspondence between RGB values and the user's limb features, identifying the user's limb features more quickly and efficiently.
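The color shortcut could look roughly like the following sketch; the reference colors, tolerance, and pixel-count threshold are assumptions for illustration:

    import cv2
    import numpy as np

    COLOR_TO_FINGER = {          # assumed calibration: block color (BGR) -> finger
        (0, 0, 200): "thumb",    # red block
        (0, 200, 0): "index",    # green block
    }

    def identify_finger(frame, tolerance=40, min_pixels=500):
        for bgr, finger in COLOR_TO_FINGER.items():
            lo = np.clip(np.array(bgr) - tolerance, 0, 255).astype(np.uint8)
            hi = np.clip(np.array(bgr) + tolerance, 0, 255).astype(np.uint8)
            mask = cv2.inRange(frame, lo, hi)        # pixels near this color
            if cv2.countNonZero(mask) > min_pixels:  # enough of the block seen
                return finger
        return None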
Optionally, the processor is further configured to: when a user wears the glove with the sensing chip to operate, receiving a sensing signal sent by the glove;
and generating and outputting an operation instruction corresponding to the combination of the sensing signal, the recognition result, and the input signal, based on the sensing signal, the feature-recognition result of the limb image, and the input signal.
Similarly, when the user wears a glove with sensing chips, the specific operating finger/gesture can be determined more quickly and conveniently from the glove's sensing signal. For example, different sensing chips (e.g., NFC near-field communication chips) are mounted in different fingers of the glove. When a finger presses a key, the device detects the key's input signal and, from the glove sensing signal, which finger pressed it; combined with the limb-feature recognition result, the three together determine the specific finger/gesture currently pressing the key, and the corresponding content is output. The recognition result is more accurate and more robust.
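A sketch of combining the three signals; the fallback rule (prefer the glove chip when the sources disagree) is an illustrative assumption, not specified by the patent:

    def fuse(key_signal, glove_finger, vision_finger):
        """Return (finger, key) from the key input, glove chip, and vision result."""
        if key_signal is None:
            return None                        # no key pressed, nothing to output
        if glove_finger == vision_finger:
            return (glove_finger, key_signal)  # all three sources consistent
        # On disagreement, prefer the glove chip physically at the key.
        return (glove_finger or vision_finger, key_signal)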
In the embodiment of the invention, the corresponding operation instruction is determined through image acquisition combined with key-press detection, which solves the problems of the single human-computer interaction signal input mode and low efficiency in the prior art. With the technical scheme provided by the invention, it can be distinguished more finely which hand/foot, which finger, and which gesture the user uses to press a key; different limb information pressing the same key generates different response signals, and different limb information can be combined with different keys, so a large number of quick operation modes can be defined. In other words, the invention defines a brand-new interaction mode and enables quick triggering of operation instructions. Compared with the prior art, the invention improves signal input efficiency, enriches signal input modes, and improves user experience.
Example two
An embodiment of the present invention provides a method for inputting a signal, as shown in fig. 6, the method includes:
S201, the signal input device captures the movement track of a user limb and collects the limb image, wherein the user limb comprises the user's four limbs;
S202, performing feature recognition on the limb image to obtain a recognition result;
S203, receiving an input signal after the user presses the one or more keys;
and S204, combining the feature recognition result with the input signal, generating an operation instruction corresponding to the combination of the two, and outputting the operation instruction.
For the keys and sensors, refer to the examples described in the first embodiment; they are not described again here.
It should be noted that the execution order of steps S202 and S203 is not limited: the key-press signal may be received first and the limb image recognized afterwards, or vice versa; this does not affect the final processing result of the embodiment of the present invention.
When the limb is a hand of the user, the feature recognition of the limb image in S202 may specifically be:
Locating the hand position, and identifying the positional relation between the hand and the one or more keys. For example, the hand may be directly above a key, or to its left or right. Hand localization can be implemented with traditional image processing, for example a binarization and contour-recognition algorithm that extracts the hand shape and its position within a picture. Because the image sensor can be fixed and shoot the same scene periodically, the background of the picture (everything except the hand) does not change; the only change is the position of the hand in the picture. The hand can therefore be located, and its movement track recognized, from the change of the hand across different pictures.
Identifying the hand shape when the hand is above the one or more keys, and confirming whether it is the left or right hand. For example, when the hand is above a key, the hand shape can be recognized to distinguish whether the user is about to press, or has pressed, the key with the left or right hand; combined with the key-press signal, different operation instructions can then be issued for a left-hand press and a right-hand press. An operation instruction may be a segment of text, sound, or image output, or a command of some program, and may be user-defined or preset in the signal input device. For example, pressing a key with the left hand outputs a piece of text, while pressing it with the right hand outputs a piece of sound.
And/or,
Identifying the specific finger above the one or more keys. That is, not only the left and right hands but also individual fingers can be distinguished; for example, the thumb and the index finger pressing the same key can trigger different operation instructions. The same key combined with different fingers corresponds to an operation instruction in advance. Alternatively, the signal input device may be preset with a correspondence table in which different fingers and different keys are permuted and combined, each combination producing a different operation instruction. For example, the index finger and the thumb together with keys A and B can form 7 different states: no press; the index finger alone pressing A; the index finger alone pressing B; the thumb alone pressing A; the thumb alone pressing B; the index finger pressing A while the thumb presses B; and the index finger pressing B while the thumb presses A. These correspond to 6 different operation instructions (no instruction when neither A nor B is pressed), each of which can be user-defined or preset. In addition, the left thumb and the right thumb pressing the same key can produce different responses. That is, left hand, right hand, index finger, thumb, and keys A and B can be combined into a more complex correspondence table to output more complex responses.
And/or,
Recognizing the hand movement track. Besides the left/right hand and the fingers, embodiments of the present invention can distinguish the movement track of the hand, such as moving from top to bottom until above the key, from bottom to top until above the key, or diagonally downward until above the key; different movement directions may also correspond to different operation instructions.
And/or,
Recognizing a gesture of the hand. In addition to left/right-hand recognition, finger recognition, and movement-track recognition, embodiments of the present invention can also implement gesture recognition. Gesture recognition can be similar to Apple's multi-touch technology, supporting interactions such as pinching (zoom-out/zoom-in instructions) and multi-finger rotation (picture-rotation instructions). Unlike Apple's multi-touch, however, there is no need to capture the movement tracks of multiple points on a touch screen; instead, multiple frames are captured and the changes in hand shape and form across those frames are recognized to determine the user's current gesture. For example, when the user's hand is detected above a certain key and the user makes a pinch gesture before or after pressing it, the corresponding operation instruction can be determined from the pinch gesture plus the (physical or virtual) key press. Gesture recognition is a comprehensive technique combining finger recognition with finger-movement-track recognition; it can be implemented with existing machine-learning algorithms and is not detailed here.
Generating an operation instruction corresponding to the combination of the result and the input signal may specifically be:
establishing a correspondence between operation instructions and the combination of the identified hand-to-key positional relation, the hand shape, and the input signal;
and generating a corresponding operation instruction according to the corresponding relation.
Similarly, when the limb is a foot, for a typical key such as one on a dance mat, the image sensor can locate the position of the foot and distinguish whether the user steps on a key with the left or right foot.
The novel interaction mode defined by the invention has wide application. In the field of games, for example, a gamepad usually has only a few keys; pressing different keys with different gesture/finger combinations can provide different game-character shortcuts, giving high playability and a good user experience. In the field of education, when the user clicks/presses/touches different keys with different fingers/gestures, different teaching content or teaching effects can be triggered; in drawing, for example, painting on a liquid-crystal screen with the index finger versus the thumb can produce lines of different colors and thicknesses.
Optionally, the embodiment of the present invention further includes: acquiring the force with which the user presses the one or more keys, and generating the corresponding operation instruction according to the correspondence among the pressing force, the input signal, the recognition result, and operation instructions.
In the embodiment of the invention, the pressing force can be divided into low, medium, and high levels according to different thresholds, and each level can correspond to a different operation instruction, similar to Apple's 3D Touch. The embodiment creatively provides a correspondence table among the pressing force collected by the pressure sensor, the input signal, the recognition result, and operation instructions, and outputs the corresponding operation instruction from the collected parameters according to this correspondence.
Optionally, the embodiment of the present invention further includes: collecting the user's fingerprint and identifying the user; acquiring the user identity identification information, and generating the corresponding operation instruction from the user identity identification information, the input signal, and the recognition result according to their correspondence with operation instructions. When the user presses, the fingerprint is automatically recognized to determine exactly which user is operating, and the corresponding operation instruction is generated accordingly.
Or, alternatively,
acquiring face information through an image sensor;
identifying the user identity according to the collected face information;
and generating the corresponding operation instruction from the user identity identification information, the input signal, and the recognition result according to their correspondence with operation instructions.
This is similar to the previous scheme, except that user identity is acquired by face recognition through the image sensor. Face recognition belongs to the prior art, and its specific implementation is not repeated here.
For example, in the embodiment of the present invention, face recognition can be applied to voting, such as elections, entertainment programs, or audience voting in other programs. Current voting systems can suffer from malicious repeat votes and missed votes by users, which makes statistics very difficult; tying each key press to a recognized identity avoids such behavior.
Optionally, the embodiment of the present invention further includes: emitting laser light onto the limb through a laser emitter and receiving the reflected beams; when reflected beams in the same direction are received, generating a reflected-light response signal, and calculating the distance between the signal input device and a tangent plane of the user limb by triangulation according to the reflected-light response signal;
generating the corresponding operation instruction from the distance information, the input signal, and the recognition result according to their correspondence with operation instructions;
the method adopts a two-dimensional distance measurement technology to realize the distance measurement between the image sensor and the limb of the user. The principle of two-dimensional distance measurement is to send a point beam or a linear beam to the limbs of a user through laser, receive reflected light emitted by the laser to the limbs of the user through an image sensor (such as an infrared sensor), calculate the distance between the limbs and the image sensor at the current moment by utilizing a trigonometry, and define the relationship between different distances and different operation instructions according to the distance. And generating the corresponding operation instruction by combining the distance information, the input signal and the corresponding relation between the identification result and the operation instruction. The triangulation method is a commonly used measuring method in the field of optical ranging, and comprises the following steps: by calculating the position of the center of gravity of the area and the known relative angles and distances between the laser emitting device and the image sensor, the distance of the target from the image sensor can be calculated.
Or, alternatively,
when the emitted laser consists of linear beams in different directions and reflected light in different directions is received, receiving the multiple reflected-light response signals, and calculating the distances between the signal input device and different sections of the user limb by triangulation according to the reflected-light response signals;
performing three-dimensional modeling of the user limb in a three-dimensional space with the image sensor as the origin, using the calculated distances between the image sensor and the different tangent planes of the user limb;
performing gesture reconstruction on the three-dimensional modeling information, and recognizing the current gesture information and the distance between the hand and the image sensor;
and generating the corresponding operation instruction from the gesture information, the distance information, the input signal, and the recognition result according to their correspondence with operation instructions.
This scheme is an optical three-dimensional ranging technique. It differs from two-dimensional ranging in that the laser emitter can emit laser light at different angles through a rotating shaft, so the image sensor collects reflected light from different directions. The three-dimensional distances of different sections of the limb can be measured by triangulation, and the per-section data superposed in three-dimensional space to complete the three-dimensional model. When different linear beams strike the limb surface, the image sensor receives the different reflected beams, and a three-dimensional image can be reconstructed from them, yielding more accurate limb information.
Optionally, the feature recognition of the limb image may specifically be:
establishing a corresponding relation between the color blocks and the limb characteristics; for example, one color block represents the user's thumb, another color block represents the user's index finger, and so on.
Detecting a specific color block on a user limb;
and determining the limb characteristics corresponding to the color blocks according to the RGB values of the detected color blocks, and outputting an identification result.
Beyond recognizing a conventional hand image, a recognition parameter (color) can be added to speed up recognition. For example, when the user's nails are painted with nail polish of a specific color, or the user wears gloves of a specific color (or with different color blocks), the processor locates and tracks that specific color, determines its RGB value, and determines the limb feature represented by the color block from the correspondence between RGB values and the user's limb features, identifying the user's limb features more quickly and efficiently.
Optionally, the method further comprises: when a user wears the glove with the sensing chip to operate, receiving a sensing signal sent by the glove;
and generating and outputting an operation instruction corresponding to the combination of the sensing signal, the recognition result, and the input signal, based on the sensing signal, the feature-recognition result of the limb image, and the input signal.
Similarly, when the user wears a glove with sensing chips, the specific operating finger/gesture can be determined more quickly and conveniently from the glove's sensing signal. For example, different sensing chips (e.g., NFC near-field communication chips) are mounted in different fingers of the glove. When a finger presses a key, the device detects the key's input signal and, from the glove sensing signal, which finger pressed it; combined with the limb-feature recognition result, the three together determine the specific finger/gesture currently pressing the key, and the corresponding content is output. The recognition result is more accurate and more robust.
In the embodiment of the invention, the corresponding operation instruction is determined through image acquisition combined with key-press detection, which solves the problems of the single human-computer interaction signal input mode and low efficiency in the prior art. With the technical scheme provided by the invention, it can be distinguished more finely which hand/foot, which finger, and which gesture the user uses to press a key; different limb information pressing the same key generates different response signals, and different limb information can be combined with different keys, so a large number of quick operation modes can be defined. In other words, the invention defines a brand-new interaction mode and enables quick triggering of operation instructions. Compared with the prior art, the invention improves signal input efficiency, enriches signal input modes, and improves user experience.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative modules and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the module described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments in this specification are described in a progressive manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the apparatus and system embodiments are substantially similar to the method embodiments, their description is relatively simple, and reference may be made to the description of the method embodiments where relevant.
Finally, it should be noted that the above description covers only preferred embodiments of the present disclosure and is not intended to limit its scope. It will be apparent to those skilled in the art that various changes and modifications may be made to the present application without departing from its scope. To the extent that such modifications and variations fall within the scope of the claims and their equivalents, they are intended to be included within the scope of the present application.

Claims (10)

  1. An apparatus for signal input, comprising:
    one or more keys;
    one or more sensors for acquiring images;
    one or more processors configured to:
    controlling the sensor to capture a moving track of a user limb, and acquiring a limb image, wherein the user limb comprises the four limbs of the user;
    performing feature recognition on the limb image to obtain a recognition result;
    receiving an input signal after the user presses the one or more keys;
    and combining the recognition result with the input signal, generating an operation instruction corresponding to the combination of the recognition result and the input signal, and outputting the operation instruction.
  2. The apparatus of claim 1, wherein the user limb is a hand of the user, and the performing of feature recognition on the limb image comprises:
    locating the hand position, and identifying the position relation between the hand position and the one or more keys;
    identifying the hand shape when the hand position is above the one or more keys, and confirming whether the hand is the left hand or the right hand, and/or,
    identifying a particular finger above the one or more keys, and/or,
    recognizing the hand movement trajectory, and/or,
    recognizing a gesture of the hand;
    the generating of the operation instruction corresponding to the combination of the recognition result and the input signal comprises:
    establishing a corresponding relation among the identified position relation between the hand position and the one or more keys, the hand shape, the input signal, and an operation instruction;
    and generating a corresponding operation instruction according to the corresponding relation.
  3. The apparatus of claim 1, wherein the signal input device further comprises a pressure sensor for acquiring the force with which the user presses the one or more keys;
    the processor is further configured to:
    and acquiring the pressing force collected by the pressure sensor, and generating the corresponding operation instruction according to the corresponding relation among the pressing force, the input signal, the recognition result, and the operation instruction.
  4. The apparatus of claim 1, wherein the signal input device further comprises a fingerprint sensor for collecting a fingerprint of the user and identifying an identity of the user;
    the processor is further configured to:
    acquiring user identity identification information, and generating the corresponding operation instruction by combining the user identity identification information, the input signal, and the corresponding relation between the recognition result and the operation instruction;
    or, alternatively,
    the image sensor is further configured to collect face information;
    the processor is further configured to:
    identifying the user identity according to the collected face information;
    and generating the corresponding operation instruction by combining the user identity identification information, the input signal, and the corresponding relation between the recognition result and the operation instruction.
  5. The apparatus of any one of claims 1 to 4, wherein the signal input device further comprises a laser emitter for continuously emitting laser light, the laser emitter being at an angle to the image sensor, and the image sensor being a laser sensor; when the laser emitter emits laser light toward the user limb, the image sensor receives the reflected laser light and generates one or more reflected light response signals;
    the processor is further configured to:
    when the image sensor receives reflected light in a single direction, receiving the reflected light response signal, and calculating the distance between the image sensor and a tangent plane of the user limb by triangulation according to the reflected light response signal;
    generating the corresponding operation instruction by combining the distance information, the input signal, and the corresponding relation between the recognition result and the operation instruction; or, alternatively,
    when the laser emitter emits linear light beams in different directions and the image sensor receives reflected light in different directions, receiving the reflected light response signals, and calculating the distances between the image sensor and different sections of the user limb by triangulation according to the reflected light response signals;
    performing three-dimensional modeling of the user limb in a three-dimensional space with the image sensor as the origin, using the calculated distances between the image sensor and the different tangent planes of the user limb;
    performing gesture reconstruction on the three-dimensional modeling information, and recognizing the current gesture information and its distance from the image sensor;
    and generating the corresponding operation instruction by combining the gesture information, the distance information, the input signal, and the corresponding relation between the recognition result and the operation instruction.
  6. A method of signal input, comprising:
    the signal input device captures the movement track of a user limb and acquires a limb image, wherein the user limb comprises the four limbs of the user;
    performing feature recognition on the limb image to obtain a recognition result;
    receiving an input signal after the user presses one or more keys;
    and combining the recognition result with the input signal, generating an operation instruction corresponding to the combination of the recognition result and the input signal, and outputting the operation instruction.
  7. The method of claim 6, wherein the user limb is a hand of the user, and the feature recognition of the limb image comprises:
    locating the hand position, and identifying the position relation between the hand position and the one or more keys;
    identifying the hand shape when the hand position is above the one or more keys, and confirming whether the hand is the left hand or the right hand, and/or,
    identifying a particular finger above the one or more keys, and/or,
    recognizing the hand movement trajectory, and/or,
    recognizing a gesture of the hand;
    the generating of the operation instruction corresponding to the combination of the recognition result and the input signal comprises:
    establishing a corresponding relation among the identified position relation between the hand position and the one or more keys, the hand shape, the input signal, and an operation instruction;
    and generating a corresponding operation instruction according to the corresponding relation.
  8. The method of claim 6, further comprising:
    and acquiring the pressing force with which the user presses the key, and generating the corresponding operation instruction according to the corresponding relation among the pressing force, the input signal, the recognition result, and the operation instruction.
  9. The method of claim 6, further comprising:
    and performing face recognition or fingerprint recognition to obtain user identity identification information, and generating the corresponding operation instruction by combining the user identity identification information, the input signal, and the corresponding relation between the recognition result and the operation instruction.
  10. The method according to any one of claims 6-9, further comprising:
    emitting laser light toward the user limb; when reflected light in a single direction is received, receiving the reflected light response signal, and calculating the distance between the signal input device and a tangent plane of the user limb by triangulation according to the reflected light response signal;
    generating the corresponding operation instruction by combining the distance information, the input signal, and the corresponding relation between the recognition result and the operation instruction; or, alternatively,
    when the emitted laser light comprises linear beams in different directions and reflected light in different directions is received, receiving a plurality of reflected light response signals, and calculating the distances between the signal input device and different sections of the user limb by triangulation according to the reflected light response signals;
    performing three-dimensional modeling of the user limb in a three-dimensional space with the image sensor as the origin, using the calculated distances between the image sensor and the different tangent planes of the user limb;
    performing gesture reconstruction on the three-dimensional modeling information, and recognizing the current gesture information and its distance from the image sensor;
    and generating the corresponding operation instruction by combining the gesture information, the distance information, the input signal, and the corresponding relation between the recognition result and the operation instruction.
CN201880091030.4A 2018-03-09 2018-03-09 Signal input method and device Pending CN112567319A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/078642 WO2019169644A1 (en) 2018-03-09 2018-03-09 Method and device for inputting signal

Publications (1)

Publication Number Publication Date
CN112567319A true CN112567319A (en) 2021-03-26

Family

ID=67846832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880091030.4A Pending CN112567319A (en) 2018-03-09 2018-03-09 Signal input method and device

Country Status (2)

Country Link
CN (1) CN112567319A (en)
WO (1) WO2019169644A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6614422B1 (en) * 1999-11-04 2003-09-02 Canesta, Inc. Method and apparatus for entering data using a virtual input device
CN102902354A (en) * 2012-08-20 2013-01-30 华为终端有限公司 Terminal operation method and terminal
CN103052928A (en) * 2010-08-04 2013-04-17 惠普发展公司,有限责任合伙企业 System and method for enabling multi-display input
CN103176594A (en) * 2011-12-23 2013-06-26 联想(北京)有限公司 Method and system for text operation
US20130257734A1 (en) * 2012-03-30 2013-10-03 Stefan J. Marti Use of a sensor to enable touch and type modes for hands of a user via a keyboard
CN104899494A (en) * 2015-05-29 2015-09-09 努比亚技术有限公司 Multifunctional key based operation control method and mobile terminal
CN106227336A (en) * 2016-07-15 2016-12-14 深圳奥比中光科技有限公司 Body-sensing map method for building up and set up device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130215038A1 (en) * 2012-02-17 2013-08-22 Rukman Senanayake Adaptable actuated input device with integrated proximity detection
CN105353873B (en) * 2015-11-02 2019-03-15 深圳奥比中光科技有限公司 Gesture control method and system based on Three-dimensional Display

Also Published As

Publication number Publication date
WO2019169644A1 (en) 2019-09-12

Similar Documents

Publication Publication Date Title
US11009961B2 (en) Gesture recognition devices and methods
US9927881B2 (en) Hand tracker for device with display
US9600078B2 (en) Method and system enabling natural user interface gestures with an electronic system
US8166421B2 (en) Three-dimensional user interface
KR100630806B1 (en) Command input method using motion recognition device
EP2717120B1 (en) Apparatus, methods and computer program products providing finger-based and hand-based gesture commands for portable electronic device applications
TWI471815B (en) Gesture recognition device and method
CN105824431A (en) Information input device and method
WO2006091753A2 (en) Method and apparatus for data entry input
CN102033702A (en) Image display device and display control method thereof
CN103809733A (en) Man-machine interactive system and method
US10078374B2 (en) Method and system enabling control of different digital devices using gesture or motion control
KR101169583B1 (en) Virture mouse driving method
CN112567319A (en) Signal input method and device
JPH04257014A (en) Input device
KR20120047746A (en) Virture mouse driving method
KR101506197B1 (en) A gesture recognition input method using two hands
JP6523509B1 (en) Game program, method, and information processing apparatus
CN108021238A (en) New concept touch system keyboard
WO2020078223A1 (en) Input device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination