CN112363622A - Character input method, character input device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112363622A
Authority
CN
China
Prior art keywords
gesture, vibration signal, target, signal, initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011270920.7A
Other languages
Chinese (zh)
Inventor
陈文强 (Chen Wenqiang)
陈林 (Chen Lin)
约翰·斯坦科维奇 (John Stankovic)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chen Wenqiang
Suzhou Waibing Intelligent Technology Co.,Ltd.
Original Assignee
Shenzhen Zhenke Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhenke Intelligent Technology Co., Ltd.
Priority to CN202011270920.7A
Publication of CN112363622A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0346 Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06F 3/038 Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
    • G06F 3/0383 Signal control means within the pointing device
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12 Classification; Matching


Abstract

The embodiments of the application disclose a character input method, a character input device, an electronic device and a storage medium. The method comprises the following steps: determining an initial gesture vibration signal generated while characters are written in the air with a gesture; extracting, from the initial gesture vibration signal, a target gesture vibration signal that effectively represents the characters written in the air; and determining the gesture type used when writing characters in the air by performing vibration feature recognition on the target gesture vibration signal, so as to execute a character input operation based on the gesture type. With this scheme, the characters written by a user's gesture can be recognized accurately without the user making large hand movements, which simplifies remote input and improves input convenience; at the same time, no purpose-built device is needed to recognize fine in-air finger writing, which reduces device development and implementation costs, enables fine-grained in-air finger writing, and enhances the human-computer interaction experience.

Description

Character input method, character input device, electronic equipment and storage medium
Technical Field
The embodiments of the application relate to the technical field of human-computer interaction, and in particular to a character input method and device, an electronic device and a storage medium.
Background
With the continuous development of technology, writing characters in the air by hand can be applied to remote input on various smart devices, such as smart glasses and smart televisions. However, camera-based in-air handwriting cannot achieve fine finger-writing recognition, and tracking the movement trajectory of a gesture requires the device to follow the motion, so the user must write characters with large movements. Therefore, how to achieve fine-grained in-air character writing is a problem that needs to be solved.
Disclosure of Invention
The embodiments of the application provide a character input method, a character input device, an electronic device and a storage medium, so that characters can be written and input in a fine-grained manner in the air without purpose-built equipment.
In a first aspect, an embodiment of the present application provides a character input method applied to an electronic device, where the method includes:
determining an initial gesture vibration signal generated while characters are written in the air with a gesture;
extracting, from the initial gesture vibration signal, a target gesture vibration signal that effectively represents the characters written in the air;
and determining the gesture type used when writing characters in the air by performing vibration feature recognition on the target gesture vibration signal, so as to execute a character input operation based on the gesture type.
In a second aspect, an embodiment of the present application further provides a character input device configured on an electronic device, where the character input device includes:
the initial signal determining module, configured to determine an initial gesture vibration signal generated while characters are written in the air with a gesture;
the target signal extraction module, configured to extract, from the initial gesture vibration signal, a target gesture vibration signal that effectively represents the characters written in the air;
and the gesture character input module, configured to perform vibration feature recognition on the target gesture vibration signal, determine the gesture type used when writing characters in the air, and execute a character input operation based on the gesture type.
In a third aspect, an embodiment of the present application further provides an electronic device, including:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the character input method as provided in any embodiment of the present application.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored, which when executed by a processor, implements the character input method as provided in any of the embodiments of the present application.
The embodiments of the application provide a character input method. When a user needs to perform remote input on various smart electronic devices, the electronic device can determine an initial gesture vibration signal generated while the user writes characters in the air with a gesture, extract from it a target gesture vibration signal that effectively represents the characters written in the air, and then determine the gesture type used when writing by recognizing the vibration features of the target gesture vibration signal, thereby executing a character input operation based on the gesture type. With this scheme, the characters written by a user's gesture can be recognized accurately without the user making large hand movements, which simplifies remote input and improves input convenience; at the same time, no purpose-built device is needed to recognize fine in-air finger writing, which reduces device development and implementation costs, enables fine-grained in-air finger writing, and enhances the human-computer interaction experience.
The foregoing description is only an overview of the technical solutions of the present application, and the present application can be implemented in accordance with the content of the description so as to make the technical means of the present application more clearly understood, and the detailed description of the present application will be given below in order to make the above and other objects, features, and advantages of the present application more clearly understandable.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart of a character input method provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a gesture vibration signal during the process of writing characters in the air according to an embodiment of the present invention;
FIG. 3 is a flow chart of another character input method provided in embodiments of the present invention;
FIG. 4 is a diagram illustrating a process for detecting a gesture signal according to an embodiment of the present invention;
FIG. 5 is a partial diagram of an end point detection based on frame energy provided in an embodiment of the present invention;
FIG. 6 is a flow chart of yet another method of character input provided in embodiments of the present invention;
FIG. 7 is a diagram of a vibration feature after gravity removal in accordance with an embodiment of the present invention;
fig. 8 is a block diagram showing a structure of a character input apparatus provided in an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the disclosure and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations (or steps) can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Fig. 1 is a flowchart of a character input method provided in an embodiment of the present invention. The embodiment of the application is applicable to remote input on smart electronic devices. The method can be executed by a character input device, which can be implemented in software and/or hardware and configured in an electronic device with a network communication function; the electronic device may be a terminal device such as an electronic watch, smart glasses, a smart television or a mobile phone, or may be a server device. As shown in fig. 1, the character input method provided in the embodiment of the present application may include the following steps:
s110, determining an initial gesture vibration signal generated in the process of writing characters in air through gesture execution.
In this embodiment, writing in the air with gestures can be applied to various smart electronic devices, but current gesture-based in-air writing is realized by tracking the gesture, which brings various limitations; for example, the user needs large movements to write, otherwise the gesture trace cannot be accurately tracked. Fig. 2 is a schematic diagram of a gesture vibration signal during the process of writing characters in the air according to an embodiment of the present invention. Referring to fig. 2, a user may write a character in the air with a gesture, and a vibration signal is usually generated during writing; the vibration signal collected from the writing motion is referred to as the initial gesture vibration signal.
In an alternative of this embodiment, combinations with the alternatives of one or more of the embodiments described above are possible. Determining the initial gesture vibration signal generated while characters are written in the air with a gesture may include: collecting, with a preset vibration signal sensor, the initial gesture vibration signal generated while characters are written in the air with a gesture.
The vibration signal sensor comprises an accelerometer and a gyroscope, and may be either integrated with the electronic device or arranged separately from it.
In this embodiment, the vibration signal sensor captures the vibration signal generated when a user writes characters in the air with a gesture; at this stage the captured signal is generally not yet screened, and is taken directly as the initial gesture vibration signal. The vibration signal sensor can be integrated directly into a terminal device such as an electronic watch or an electronic bracelet: because such a device is worn on the wrist, close to the fingers, the initial gesture vibration signal is easily captured while the fingers write. Alternatively, for electronic devices such as smart glasses, smart televisions and mobile phones, which are usually not worn on the wrist and are farther from the fingers, the vibration signal sensor is not integrated into the device but connected to it wirelessly: the sensor is arranged separately near the fingers, collects the gesture vibration signal, and then sends it wirelessly to the separate electronic device.
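As a minimal sketch of how such a sensor might be read into an initial gesture vibration signal (the sampling rate, the `read_accelerometer` stub and the buffer layout are illustrative assumptions, not part of the patent):

```python
import numpy as np

SAMPLE_RATE_HZ = 100  # assumed sensor sampling rate

def read_accelerometer():
    """Stub for a real accelerometer/gyroscope driver call (assumption)."""
    return np.random.randn(3) * 0.01  # one (ax, ay, az) sample

def capture_initial_signal(duration_s=3.0):
    """Collect raw 3-axis samples into an initial gesture vibration signal."""
    n = int(duration_s * SAMPLE_RATE_HZ)
    return np.array([read_accelerometer() for _ in range(n)])

signal = capture_initial_signal()
print(signal.shape)  # (300, 3): one row per sample, one column per axis
```

In the wirelessly separated arrangement, the buffer returned by `capture_initial_signal` would be transmitted to the electronic device instead of being processed locally.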
S120, extracting, from the initial gesture vibration signal, a target gesture vibration signal that effectively represents the characters written in the air.
In this embodiment, while characters are written in the air with a gesture, vibration is caused not only by the writing itself but also by other actions, so the collected initial gesture vibration signal may contain both the effective vibration signal caused by writing and interference vibration signals produced by other factors. Therefore, after the initial gesture vibration signal is obtained, the vibration signal that actually reflects the user's written characters needs to be extracted from it and used as the target gesture vibration signal. This avoids recognizing the gesture type directly from the initial gesture vibration signal, and thereby avoids the recognition errors that its interference components would cause.
S130, performing vibration feature recognition on the target gesture vibration signal, determining the gesture type used when writing characters in the air, and executing a character input operation based on the gesture type.
In this embodiment, different characters have different writing tracks, so the vibration features of the gesture vibration signals generated by writing different characters differ. Based on this principle, which gesture was used to write a character can be recognized from the vibration features of the target gesture vibration signal. Optionally, a set of in-air writing gestures may be built in advance; for example, the gestures for writing the ten Arabic numerals and the twenty-six English letters form a digit gesture set and a letter gesture set. Each gesture in the set is associated with its corresponding vibration features, so that once the vibration features of the target gesture vibration signal are recognized, the matching gesture type, and hence the written character, is known.
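The matching of vibration features against a gesture set could be sketched, for illustration only, as a nearest-neighbor lookup over per-gesture feature templates (the feature choice and distance metric are assumptions; the patent does not specify them):

```python
import numpy as np

def vibration_features(sig):
    """Simple per-axis features of a (samples, 3) signal: mean, std, energy."""
    return np.concatenate([sig.mean(axis=0),
                           sig.std(axis=0),
                           (sig ** 2).sum(axis=0)])

def classify_gesture(sig, templates):
    """Return the label of the stored gesture template closest in feature space."""
    feats = vibration_features(sig)
    return min(templates, key=lambda g: np.linalg.norm(feats - templates[g]))

# Hypothetical templates for two digit gestures, built from example recordings.
templates = {
    "1": vibration_features(np.ones((50, 3))),
    "0": vibration_features(np.zeros((50, 3))),
}
print(classify_gesture(np.ones((50, 3)), templates))  # prints "1"
```

A deployed system would more likely learn a classifier from many labeled recordings per gesture; the template lookup above only illustrates the feature-to-gesture association described in the text.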
According to the character input method provided by the embodiments of the application, the characters written by a user's gesture can be recognized accurately without the user making large hand movements, which simplifies remote input and improves input convenience; at the same time, no purpose-built device is needed to recognize fine in-air finger writing, which reduces device development and implementation costs, enables fine-grained in-air finger writing, and enhances the human-computer interaction experience.
Fig. 3 is a flow chart of another character input method provided in an embodiment of the present invention. The technical solution of the embodiments of the present application is optimized based on the above embodiments, and the embodiments of the present application may be combined with various alternatives in one or more of the above embodiments. As shown in fig. 3, the character input method provided in the embodiment of the present application may include the following steps:
s310, determining an initial gesture vibration signal generated in the process of writing characters in air through gesture execution.
S320, performing endpoint detection according to the signal intensity of the initial gesture vibration signal, and determining the target start endpoint and target end endpoint of the portion of the initial gesture vibration signal that effectively represents the characters written in the air.
In this embodiment, fig. 4 is a process diagram for detecting a gesture signal according to an embodiment of the present invention. Referring to fig. 4, the initial gesture vibration signal contains both an effective gesture vibration signal and invalid interference signals, such as vibrations occurring before and after the character is written. By performing endpoint detection on the signal, the start endpoint and end endpoint of the effective portion can be identified, so that the vibration signal between the two endpoints is cut out of the initial gesture vibration signal as the object of subsequent recognition, preventing vibrations occurring before and after writing from contaminating the subsequent vibration-feature recognition.
In an alternative of this embodiment, combinations with the alternatives of one or more of the embodiments described above are possible. Performing endpoint detection according to the signal intensity of the initial gesture vibration signal, and determining the target start endpoint and target end endpoint of the portion of the initial gesture vibration signal that effectively represents the characters written in the air, comprises the following steps A1 to A3:
and A1, detecting the signal intensity of the initial gesture vibration signal according to a preset time interval.
Step A2, if the signal intensity of the initial gesture vibration signal is detected to be greater than a first intensity threshold value, taking a detection point which is greater than the first intensity threshold value as a target starting endpoint; and after a target starting endpoint is detected, continuing to detect the signal intensity of the initial gesture vibration signal.
And step A3, if the signal intensity of the vibration signal of the initial gesture detected in the continuous detection process is less than the second intensity threshold, taking the detected point when the signal intensity is less than the second intensity threshold as the target termination endpoint.
In this embodiment, the first intensity threshold and the second intensity threshold have the same value, or their difference is within a preset range. Endpoint detection may be performed on the initial gesture vibration signal based on set signal intensity thresholds; for example, a first intensity threshold is set for detecting the start endpoint of the gesture vibration signal and a second intensity threshold for detecting its end endpoint. The signal intensity of the initial gesture vibration signal is then sampled at the preset time interval: when the signal intensity first exceeds the first intensity threshold, the detection point is taken as the start endpoint of the effective gesture vibration signal; after the start endpoint is detected, detection continues until the signal intensity falls below the second intensity threshold, at which point the detection point is taken as the end endpoint, and the gesture vibration signal between the start endpoint and the end endpoint is the required signal.
In this embodiment, instead of single-threshold detection of the start endpoint, two first intensity thresholds, one larger and one smaller, may be set. When detecting the start endpoint of the initial gesture vibration signal, the signal is first checked against the smaller first intensity threshold, and only after passing it is further checked against the larger first intensity threshold; the start endpoint is determined only when the signal intensity exceeds the smaller and then the larger first intensity threshold in succession. In this scheme, the double threshold removes the influence of outliers, and using two thresholds, one larger and one smaller, allows the start endpoint to be detected more accurately.
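A sketch of the double-threshold start-endpoint check described above (the per-sample intensity sequence is an assumption; the two thresholds are parameters):

```python
def detect_start_double_threshold(strengths, low_thr, high_thr):
    """Return the index of the start endpoint, or None if not found.

    The intensity must first exceed the smaller threshold and then, on a
    later sample, the larger one; a single isolated spike that is never
    followed by a crossing of the larger threshold is rejected.
    """
    passed_low = False
    for i, s in enumerate(strengths):
        if not passed_low:
            passed_low = s > low_thr
        elif s > high_thr:
            return i
    return None
```

Note that this sketch requires the two crossings to occur on different samples, which is one way of rejecting single-sample outliers; a genuine writing onset rises through both thresholds over successive samples and is still detected.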
In another alternative of this embodiment, combinations with the alternatives of one or more of the embodiments described above are possible. Performing endpoint detection according to the signal intensity of the initial gesture vibration signal, and determining the target start endpoint and target end endpoint of the portion of the initial gesture vibration signal that effectively represents the characters written in the air, comprises the following steps B1 to B3:
and step B1, extracting a gesture vibration signal with a preset frame length from the initial gesture vibration signal according to a preset frame moving step length, and calculating the signal energy of the gesture vibration signal with the preset frame length.
Step B2, if the signal energy of the extracted preset frame length gesture vibration signal is detected to be greater than a first energy threshold, taking the extraction point which is greater than the first energy threshold as a target starting endpoint; and after a target starting endpoint is detected, continuously extracting the gesture vibration signal with the preset frame length and calculating a signal energy value.
And step B3, if the signal energy of the gesture vibration signal with the preset frame length detected to be extracted in the continuous extraction process is smaller than a second energy threshold, taking the extraction point when the signal energy is smaller than the second energy threshold as a target termination endpoint.
In this embodiment, the values of the first energy threshold and the second energy threshold are the same, or their difference is within a preset range. In the intensity-threshold endpoint detection scheme above, a sudden rise of the signal intensity above the threshold at a single instant, caused by vibration sensor error or environmental influence, may be misidentified as the start endpoint. Therefore, to detect the effective gesture vibration signal as accurately as possible while keeping the computation small, the signal intensity is no longer examined at single time points; instead, the intra-frame energy is calculated over a period of data (for example, one frame) and endpoint detection is performed on that frame energy. Considering a time segment of the signal avoids the influence of outliers on endpoint detection.
In this embodiment, a first energy threshold is set for detecting the start endpoint of the gesture vibration signal and a second energy threshold for detecting its end endpoint. Frames of the preset length are extracted in sequence from the initial gesture vibration signal according to the preset frame shift, and the signal energy of each frame is calculated. When the frame energy first exceeds the first energy threshold, the detection point is taken as the start endpoint of the effective gesture vibration signal; after the start endpoint is detected, detection continues until the frame energy falls below the second energy threshold, at which point the detection point is taken as the end endpoint, and the gesture vibration signal between the start endpoint and the end endpoint is the required signal.
In this embodiment, fig. 5 is a partial schematic diagram of frame-energy-based endpoint detection provided in an embodiment of the present invention. Referring to fig. 5, by setting the frame length and frame shift reasonably, frame-energy-based endpoint detection of the initial gesture vibration signal can effectively avoid false detection of noise; for example, the frame length set here is 0.2 s and the frame shift is 0.01 s. Using the calculated frame energies, when the frame energy is detected to be greater than the threshold (set to 0.03 here according to the actual signal conditions), the current position is recorded as b (the signal start position), and detection of the frame energy continues. When the frame energy falls below the threshold, the current position is recorded as e (the signal end position). Fig. 5 shows actually collected signal segments for the ten digit gestures 0-9, and it can be seen that the system segments the gesture signals accurately.
With this alternative, after the frame energy is computed, noise is better suppressed and the effective signal is clearly amplified; moreover, the frame energy can be computed in constant time per frame on a real-time system, avoiding the performance burden of excessive computation.
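The frame-energy endpoint detection described above, with the frame length (0.2 s), frame shift (0.01 s) and threshold (0.03) quoted in the text, might look like the following sketch (the sampling rate and the 1-D signal representation are assumptions):

```python
import numpy as np

def frame_energy_endpoints(sig, rate_hz=100, frame_s=0.2, shift_s=0.01, thr=0.03):
    """Return (b, e): the first frame start whose energy exceeds thr, and
    the first later frame start whose energy falls back below it."""
    frame = int(frame_s * rate_hz)
    shift = int(shift_s * rate_hz)
    b = e = None
    for start in range(0, len(sig) - frame + 1, shift):
        energy = float(np.sum(sig[start:start + frame] ** 2))
        if b is None and energy > thr:
            b = start          # signal start position
        elif b is not None and energy < thr:
            e = start          # signal end position
            break
    return b, e

sig = np.zeros(300)
sig[100:150] = 0.5             # a synthetic "writing" burst
print(frame_energy_endpoints(sig))
```

Summing squared samples over a 0.2 s window both amplifies sustained writing vibration and averages out isolated outliers, which is the property the text relies on.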
S330, cutting the effective target gesture vibration signal out of the initial gesture vibration signal according to the target start endpoint and the target end endpoint.
In an alternative of this embodiment, combinations with the alternatives of one or more of the embodiments described above are possible. Cutting the effective target gesture vibration signal out of the initial gesture vibration signal according to the target start endpoint and the target end endpoint comprises the following operation:
if the length between the two endpoints, determined from the target start endpoint and the target end endpoint, is within a preset endpoint-length threshold range, cutting the effective target gesture vibration signal out of the initial gesture vibration signal according to the target start endpoint and the target end endpoint.
In this embodiment, in the endpoint detection stage, even a slight vibration is treated as a gesture vibration signal generated by writing characters. This effectively avoids missed detection of gesture vibration signals, but increases false detection of noise signals. Therefore, even though a signal has passed endpoint detection, the validity of the obtained signal needs to be further judged to reduce the probability of noise false detection. Whether the target starting endpoint and the target ending endpoint are valid is determined by checking whether the length between them is within the preset endpoint length threshold range. For example, the signal length l = e - b (where e is the target ending endpoint and b is the target starting endpoint) is used to judge whether the intercepted signal is a valid gesture vibration signal. A noise signal is typically either very short (for example, an incidental finger movement) or, in most cases, very long, whereas the length of a gesture signal typically falls within a certain time range. A length requirement is therefore imposed on a valid gesture signal: the signal length l must satisfy 0.6s < l < 2.0s to be considered a valid gesture vibration signal, which reduces the probability that a noise signal is misidentified as a gesture vibration signal.
In an alternative of this embodiment, combinations with each of the alternatives of one or more of the embodiments described above are possible. Intercepting an effective target gesture vibration signal from the initial gesture vibration signal according to the target starting endpoint and the target ending endpoint, wherein the method comprises the following operations:
and determining the gesture vibration signals from the preset frame length before the target starting end point to the preset length after the target ending end point in the initial gesture vibration signals as target gesture vibration signals.
In this embodiment, after the validity of the gesture vibration signal is determined, the target gesture vibration signal can be extracted according to the target starting endpoint and the target ending endpoint obtained by endpoint detection. However, since preprocessing of the signal in the subsequent feature extraction step requires a section of buffer data before and after the signal, a section of buffer data must also be intercepted before and after the endpoints detected in the signal interception step; the buffer length is set here to 0.2s. When the gesture signal is intercepted, its length is fixed at 2.0s. The fixed-length interception method is as follows: starting from the detected start position of the signal, move forward 0.2s from that position to obtain the buffer data for signal preprocessing in feature extraction, and then intercept a signal of 2.0s frame length backward from that position.
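Combining the validity check (0.6 s < l < 2.0 s) with the fixed-length interception described above, a sketch might look like this (the sampling rate `fs`, the zero-padding fallback for recordings that end early, and all names are assumptions; the 0.2 s buffer and fixed 2.0 s window follow the text):

```python
import numpy as np

def cut_gesture(signal, b, e, fs=100, min_len_s=0.6, max_len_s=2.0,
                buffer_s=0.2):
    """Validate the detected segment and intercept a fixed-length window.

    Returns None when the segment length falls outside the valid range
    (likely noise); otherwise returns a 2.0 s window starting 0.2 s
    before the detected start point b.
    """
    length_s = (e - b) / fs
    if not (min_len_s < length_s < max_len_s):
        return None                          # too short or too long: noise
    start = max(0, b - int(buffer_s * fs))   # move 0.2 s before the start point
    end = start + int(max_len_s * fs)        # fixed 2.0 s interception
    out = signal[start:end]
    if len(out) < int(max_len_s * fs):       # pad if the recording ends early
        out = np.pad(out, (0, int(max_len_s * fs) - len(out)))
    return out
```

The fixed window length is what allows all gesture samples to share one feature layout downstream.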
In an alternative of this embodiment, combinations with each of the alternatives of one or more of the embodiments described above are possible. Before determining, in the initial gesture vibration signal, a target starting endpoint and a target ending endpoint of the gesture vibration signal that can effectively characterize the character information written in the air, the method further comprises the following operations:
and fusing six-axis initial gesture vibration signals of a three-axis accelerometer and a three-axis gyroscope in the vibration sensor to obtain a fused initial gesture vibration signal.
In this embodiment, referring to fig. 5, in order to detect fine changes in the gesture vibration signal more sensitively, the data of the six axes of the three-axis accelerometer and the three-axis gyroscope are fused before the frame energy calculation; by summing the energy of the six axes of data, missed detection of the gesture vibration signal can be effectively avoided.
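A minimal sketch of this fusion step (the function name, array layout, and the sum-of-squares reading of "adding the six axes of data energy" are assumptions based on the description):

```python
import numpy as np

def fuse_six_axes(acc, gyro):
    """Fuse 3-axis accelerometer and 3-axis gyroscope data into one
    per-sample energy signal prior to frame-energy computation.

    acc, gyro: arrays of shape (n_samples, 3).
    Returns an array of shape (n_samples,).
    """
    six = np.hstack([acc, gyro])            # shape (n_samples, 6)
    return np.sum(six ** 2, axis=1)         # sum of squared values per sample
```

Because a faint gesture may register on only one or two axes, summing over all six makes the subsequent threshold test far less likely to miss it.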
S340, performing vibration feature recognition on the target gesture vibration signal, determining a gesture type when characters are written in air, and executing character input operation based on the gesture type.
According to the character input method provided in the embodiments of the present application, a valid target gesture vibration signal is intercepted from the initial gesture vibration signal generated while a user writes characters through gestures, which avoids erroneous character input caused by performing subsequent vibration feature recognition on an invalid vibration signal. The character information written by the user's gesture can be accurately recognized without requiring large gesture movements to write the character, which simplifies remote input and improves the input convenience of the character input method. Meanwhile, no purpose-made equipment needs to be customized to recognize fine in-air finger writing, which reduces equipment development cost and implementation cost while still enabling fine in-air finger writing, and enhances the human-computer interaction experience.
On the basis of the embodiments described above, various alternatives of one or more of the embodiments described above can be combined. Before determining, in the initial gesture vibration signal, the target starting endpoint and the target ending endpoint of the gesture vibration signal that can effectively characterize the character information written in the air, the method further comprises the following operations:
and filtering the initial gesture vibration signal by adopting a preset high-pass filter to obtain the initial gesture vibration signal after filtering.
In this embodiment, before endpoint detection is performed on the initial gesture vibration signal, high-pass filtering needs to be applied to it. High-pass filtering can remove the gravity component contained in the acquired accelerometer signal, reducing the influence of gravity on signal detection; it also reduces the influence of noise (e.g., slight shaking of the user's arm during signal acquisition). Optionally, by analyzing the frequency-domain distribution characteristics of the gesture vibration signal, high-pass filtering with a 5 Hz cutoff can be applied to the initial gesture vibration signal, with a Butterworth filter selected as the prototype.
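As a sketch of this preprocessing step, assuming SciPy is available, a sampling rate of 100 Hz, and a fourth-order filter (the order and sampling rate are assumptions; the 5 Hz cutoff and the Butterworth prototype follow the text):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def remove_gravity(signal, fs=100, cutoff_hz=5.0, order=4):
    """High-pass Butterworth filtering to remove the gravity (near-DC)
    component from an accelerometer channel before endpoint detection."""
    b, a = butter(order, cutoff_hz, btype='highpass', fs=fs)
    # filtfilt applies the filter forward and backward for zero phase delay
    return filtfilt(b, a, signal)
```

Zero-phase filtering (`filtfilt`) is chosen in this sketch so the filtered signal stays time-aligned with the raw one, which matters for the later endpoint positions.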
Fig. 6 is a flowchart of another character input method provided in the embodiment of the present invention. The technical solution of the embodiments of the present application is optimized based on the above embodiments, and the embodiments of the present application may be combined with various alternatives in one or more of the above embodiments. As shown in fig. 6, the character input method provided in the embodiment of the present application may include the following steps:
S610, determining an initial gesture vibration signal generated in the process of writing characters in the air through gestures.
S620, extracting a target gesture vibration signal for effectively characterizing the information of the characters written in the air from the initial gesture vibration signal.
S630, extracting the time domain characteristics and the frequency domain characteristics of the target gesture vibration signals to obtain vibration signal fusion characteristics fusing the time domain characteristics and the frequency domain characteristics.
In this embodiment, fig. 7 is a schematic diagram of the time-frequency features of the digit gestures after the gravity component is removed, according to an embodiment of the present invention. Referring to fig. 7, in feature extraction, the target gesture vibration signal may be intercepted from the original initial gesture vibration signal and the time-domain and frequency-domain features extracted from it; alternatively, the original initial gesture vibration signal may first be filtered, and the time-domain and frequency-domain features then extracted from the filtered initial gesture vibration signal.
In this embodiment, referring to fig. 7, time-frequency analysis of the gesture signals continuously collected for the digits 0 to 9 shows that the gesture signal energy is mainly distributed below 25 Hz, and is concentrated below 5 Hz. Low-frequency information therefore needs to be retained, and the change of gravity is retained rather than filtered out, which is also beneficial to distinguishing gestures: the change of the gravity component sensed by the watch differs under different gestures. Frequency-domain information is added to the features to account for the differences between different gestures in the frequency domain. The frequency-domain information of the target gesture vibration signal may be represented by the following Fourier transform, equation (1).
X(k) = Σ_{n=0}^{N−1} x(n)·e^(−j2πkn/N), k = 0, 1, …, N−1 (1)
In this embodiment, the frequency domain information may be extracted from the time domain information through a Fast Fourier Transform (FFT) algorithm, and the finally selected vibration signal fusion feature includes both the time domain feature information and the frequency domain feature information.
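As a sketch of this fusion (the concatenation layout and function name are assumptions; the use of an FFT to obtain the frequency-domain information follows the text), the frequency-domain magnitudes can be appended to the time-domain samples:

```python
import numpy as np

def fused_features(signal):
    """Build a fused feature vector containing both time-domain and
    frequency-domain information, per the description in the text."""
    spectrum = np.abs(np.fft.rfft(signal))   # frequency-domain magnitudes (FFT)
    return np.concatenate([signal, spectrum])
```

For a real-valued window of N samples, `rfft` yields N//2 + 1 frequency bins, so the fused vector has N + N//2 + 1 entries.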
S640, inputting the vibration signal fusion features of the target gesture vibration signal into a pre-trained gesture classification model, and outputting the gesture type when characters are written in the air, so as to execute a character input operation based on the gesture type.
In this embodiment, in the signal detection stage, endpoint detection is performed on different gesture vibration signals, so a certain offset between them is inevitable. The classifier uses time-domain information as features, and even a slight offset between different signals increases the classification difficulty of the classifier and thereby reduces its classification accuracy. It is easy to see in distance calculations that even for two identical signals, the Euclidean or Manhattan distance computed after shifting one of them will yield a large value; that is, the same signal at different time delays appears far apart. Therefore, offsets between signals inevitably affect classification algorithms based on inter-signal distance information. To solve this problem, the time delay between two signals can be calculated using GCC (Generalized Cross-Correlation) to align the gesture signals.
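A sketch of the delay estimation with generalized cross-correlation (the PHAT weighting is one common GCC variant and is an assumption here, since the text names only GCC; function names are likewise illustrative):

```python
import numpy as np

def gcc_phat_delay(x, y):
    """Estimate d such that y is approximately x delayed by d samples,
    using generalized cross-correlation with PHAT weighting."""
    n = 2 * len(x)                           # zero-pad to avoid wraparound
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    cross = Y * np.conj(X)
    cross /= np.abs(cross) + 1e-12           # PHAT: keep only phase information
    cc = np.fft.irfft(cross, n=n)
    m = int(np.argmax(cc))
    return m if m < n // 2 else m - n        # map circular index to signed lag
```

Shifting one signal by the estimated delay (e.g. `np.roll(y, -d)`) before computing Euclidean or Manhattan distances removes the offset introduced by endpoint detection.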
Before inputting the vibration signal fusion features of the target gesture vibration signal into the pre-trained gesture classification model, the method may further include: and carrying out characteristic normalization processing on the vibration signal fusion characteristics of the target gesture vibration signals.
In this embodiment, feature normalization is the dimensional normalization of features with different scales. Normalizing the features eliminates the unequal influence that features of inconsistent scale exert on classification algorithms that use distance as the similarity measure, which helps improve the classification accuracy of the model. Meanwhile, for a neural network, normalized features bring the locally optimal coefficients of different features to the same order of magnitude, which speeds up the convergence of target optimization with a gradient descent algorithm.
In this embodiment, there are many data normalization algorithms to choose from; commonly used ones are 1) min-max normalization and 2) z-score zero-mean normalization. Min-max normalization scales the values of a set of data into the range [0,1]; the specific calculation is shown in equation (2), where y(t) is the normalized data at time t and x(t) is the data at time t before normalization. The z-score method subtracts the mean from a set of data and divides by the standard deviation, converting the set into data with mean 0 and variance 1; the specific calculation is shown in equation (3), where y(t) is the normalized data at time t, x(t) is the data at time t, μ is the mean of the data x, and σ is the standard deviation of the data x.
y(t) = (x(t) − min(x)) / (max(x) − min(x)) (2)
y(t) = (x(t) − μ) / σ (3)
In this embodiment, both min-max normalization and z-score normalization are essentially a scaling and translation of the data, and both can be expressed in the form of equation (4): y(t) = (x(t) − a) / b, where a is the translation amount and b is the scaling amount. During scaling and translation, min-max considers only the minimum and maximum values of the data and scales the data to the fixed range [0,1]. In contrast, z-score requires computing the mean and variance of the data during scaling and translation, using all of the data, and the scaled range of the data is not fixed.
In this embodiment, optionally, since the gesture vibration signal consists of acceleration sensor data and gyroscope sensor data, whose value ranges fluctuate and are not fixed, the data is not suited to being scaled to a fixed range, so the z-score method is the more suitable choice for normalizing the data.
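A minimal sketch of the z-score normalization chosen here, following equation (3) (the epsilon guard against constant signals is an addition not in the text):

```python
import numpy as np

def z_score(x):
    """z-score normalization: subtract the mean, divide by the standard
    deviation, yielding data with mean 0 and (approximately) variance 1."""
    return (x - np.mean(x)) / (np.std(x) + 1e-12)
```

Applied per feature dimension, this brings accelerometer and gyroscope channels of very different magnitudes onto the same scale before classification.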
According to the character input method provided in the embodiments of the present application, the character information written by a user's gesture can be accurately recognized without requiring large gesture movements to write the character, which simplifies remote input and improves input convenience. Meanwhile, no purpose-made equipment needs to be customized to recognize fine in-air finger writing, which reduces equipment development cost and implementation cost while still enabling fine in-air finger writing, and enhances the human-computer interaction experience.
Fig. 8 is a block diagram of a character input device provided in an embodiment of the present invention. The embodiment of the application is applicable to the scenario of performing remote input on a smart electronic device. The device can be implemented by software and/or hardware, and can be configured in an electronic device with a network communication function; for example, the electronic device can be an electronic watch, smart glasses, a smart television, a mobile phone, and the like. As shown in fig. 8, the character input device provided in the embodiment of the present application may include the following: an initial signal determination module 810, a target signal extraction module 820, and a gesture character input module 830. Wherein:
an initial signal determination module 810, configured to determine an initial gesture vibration signal generated in the process of writing characters in the air through gestures;
a target signal extraction module 820, configured to extract a target gesture vibration signal for effectively representing information of characters written in air from the initial gesture vibration signal;
and the gesture character input module 830 is configured to determine a gesture type when characters are written in the air by performing vibration feature recognition on the target gesture vibration signal, so as to perform a character input operation based on the gesture type.
On the basis of the above embodiment, optionally, the initial signal determining module 810 includes:
acquiring an initial gesture vibration signal generated in the process of writing characters in air through gesture execution by adopting a preset vibration signal sensor;
the vibration signal sensor comprises an accelerometer and a gyroscope, and is integrally or separately arranged with the electronic equipment.
On the basis of the foregoing embodiment, optionally, the target signal extraction module 820 includes:
performing endpoint detection according to the signal intensity of the initial gesture vibration signal, and determining, in the initial gesture vibration signal, a target starting endpoint and a target ending endpoint of the gesture vibration signal that can effectively characterize the character information written in the air;
and intercepting effective target gesture vibration signals from the initial gesture vibration signals according to the target starting endpoint and the target ending endpoint.
On the basis of the foregoing embodiment, optionally, performing endpoint detection according to the signal strength of the initial gesture vibration signal, and determining, in the initial gesture vibration signal, a target starting endpoint and a target ending endpoint of the gesture vibration signal that can effectively characterize the character information written in the air, includes:
detecting the signal intensity of the initial gesture vibration signal according to a preset time interval;
if the signal intensity of the initial gesture vibration signal is detected to be greater than a first intensity threshold value, taking a detection point which is greater than the first intensity threshold value as a target starting endpoint; after a target starting endpoint is detected, continuously detecting the signal intensity of the initial gesture vibration signal;
if the signal intensity of the initial gesture vibration signal is detected to be smaller than a second intensity threshold value in the continuous detection process, taking a detection point when the signal intensity is smaller than the second intensity threshold value as a target termination endpoint;
the values of the first intensity threshold and the second intensity threshold are the same or the threshold difference is within a preset range.
On the basis of the foregoing embodiment, optionally, performing endpoint detection according to the signal strength of the initial gesture vibration signal, and determining, in the initial gesture vibration signal, a target starting endpoint and a target ending endpoint of the gesture vibration signal that can effectively characterize the character information written in the air, includes:
extracting a gesture vibration signal with a preset frame length from the initial gesture vibration signal according to a preset frame moving step length, and calculating the signal energy of the gesture vibration signal with the preset frame length;
if it is detected that the signal energy of the extracted preset-frame-length gesture vibration signal is greater than a first energy threshold, taking the extraction point that is greater than the first energy threshold as the target starting endpoint; after the target starting endpoint is detected, continuing to extract gesture vibration signals of the preset frame length and calculating their signal energy values;
if the signal energy of the extracted preset frame length gesture vibration signal is detected to be smaller than a second energy threshold value in the continuous extraction process, taking an extraction point which is smaller than the second energy threshold value as a target termination endpoint;
the values of the first energy threshold and the second energy threshold are the same or the threshold difference is within a preset range.
On the basis of the foregoing embodiment, optionally, intercepting a valid target gesture vibration signal from the initial gesture vibration signal according to the target start endpoint and the target end endpoint includes:
and if the length between the two endpoints is determined to be within the preset endpoint length threshold range according to the target starting endpoint and the target ending endpoint, intercepting a valid target gesture vibration signal from the initial gesture vibration signal according to the target starting endpoint and the target ending endpoint.
On the basis of the foregoing embodiment, optionally intercepting a valid target gesture vibration signal from the initial gesture vibration signal according to the target start endpoint and the target end endpoint includes:
and determining the gesture vibration signals from the preset frame length before the target starting end point to the preset length after the target termination end point in the initial gesture vibration signals as the target gesture vibration signals.
On the basis of the foregoing embodiment, optionally, before determining, in the initial gesture vibration signal, the target starting endpoint and the target ending endpoint of the gesture vibration signal that can effectively characterize the character information written in the air, the method further includes:
and filtering the initial gesture vibration signal by adopting a preset high-pass filter to obtain the initial gesture vibration signal after filtering.
On the basis of the foregoing embodiment, optionally, before determining, in the initial gesture vibration signal, the target starting endpoint and the target ending endpoint of the gesture vibration signal that can effectively characterize the character information written in the air, the method further includes:
and fusing six-axis initial gesture vibration signals of a three-axis accelerometer and a three-axis gyroscope in the vibration sensor to obtain a fused initial gesture vibration signal.
On the basis of the above embodiment, optionally, the gesture character input module 830 includes:
extracting time domain characteristics and frequency domain characteristics of the target gesture vibration signal to obtain vibration signal fusion characteristics fusing the time domain characteristics and the frequency domain characteristics;
and inputting the vibration signal fusion features of the target gesture vibration signal into a pre-trained gesture classification model, and outputting the gesture type when characters are written in the air.
On the basis of the foregoing embodiment, optionally, before inputting the vibration signal fusion feature of the target gesture vibration signal into the pre-trained gesture classification model, the method further includes:
and carrying out feature normalization processing on the vibration signal fusion features of the target gesture vibration signals.
The character input device provided in the embodiment of the present application can execute the character input method provided in any embodiment of the present application, and has corresponding functions and advantages for executing the character input method.
Fig. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 9, the electronic device provided in the embodiment of the present application includes: one or more processors 910 and storage 920; the processor 910 in the electronic device may be one or more, and one processor 910 is taken as an example in fig. 9; storage 920 is used to store one or more programs; the one or more programs are executed by the one or more processors 910, such that the one or more processors 910 implement a character input method as described in any of the embodiments of the present application.
The electronic device may further include: an input device 930 and an output device 940.
The processor 910, the storage device 920, the input device 930, and the output device 940 in the electronic apparatus may be connected by a bus or other means, and fig. 9 illustrates an example of connection by a bus.
The storage 920 in the electronic device, as a computer-readable storage medium, is used to store one or more programs, which may be software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the character input method provided in the embodiments of the present application. The processor 910 executes various functional applications and data processing of the electronic device by running the software programs, instructions, and modules stored in the storage 920, that is, implements the character input method in the above method embodiments.
The storage 920 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device, and the like. Additionally, the storage 920 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the storage 920 may further include memory located remotely from the processor 910, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 930 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function controls of the electronic apparatus. The output device 940 may include a display device such as a display screen.
And, when the one or more programs included in the electronic device are executed by the one or more processors 910, the programs perform the following operations:
determining an initial gesture vibration signal generated in the process of writing characters in air through gesture execution;
extracting a target gesture vibration signal for effectively characterizing the information of the characters written in the air from the initial gesture vibration signal;
and determining the gesture type when characters are written in the air by performing vibration feature recognition on the target gesture vibration signal, so as to execute a character input operation based on the gesture type.
Of course, it will be understood by those skilled in the art that when one or more programs included in the electronic device are executed by the one or more processors 910, the programs may also perform related operations in the character input method provided in any of the embodiments of the present application.
One embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program for performing a character input method when executed by a processor, the method comprising:
determining an initial gesture vibration signal generated in the process of writing characters in air through gesture execution;
extracting a target gesture vibration signal for effectively characterizing the information of the characters written in the air from the initial gesture vibration signal;
and determining the gesture type when characters are written in the air by performing vibration feature recognition on the target gesture vibration signal, so as to execute a character input operation based on the gesture type.
Optionally, the program, when executed by the processor, may be further configured to perform a character input method as provided in any of the embodiments of the present application.
The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a flash Memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take a variety of forms, including, but not limited to: an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (10)

1. A character input method, applied to an electronic device, the method comprising:
determining an initial gesture vibration signal generated while characters are written in the air through gestures;
extracting, from the initial gesture vibration signal, a target gesture vibration signal that effectively characterizes the air-written character information; and
determining, by performing vibration feature recognition on the target gesture vibration signal, the gesture type used when writing characters in the air, so as to execute a character input operation based on the gesture type.
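Read as a processing pipeline, the three steps of claim 1 could be sketched as follows. This is a minimal illustration only; the extraction and classification callables are hypothetical stand-ins for the steps the later claims define:

```python
def recognize_air_written_character(initial_signal, extract_target_signal, classify_gesture):
    """Sketch of the claim-1 pipeline: raw vibration samples in, gesture type out.

    initial_signal: a sequence of vibration samples captured while a
    character is written in the air (the "initial gesture vibration signal").
    extract_target_signal / classify_gesture: hypothetical callables standing
    in for the extraction and recognition steps of the later claims.
    """
    # Keep only the segment that actually carries the written character.
    target_signal = extract_target_signal(initial_signal)
    # Recognize vibration features on that segment to obtain the gesture type,
    # which then drives the character input operation.
    return classify_gesture(target_signal)
```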
2. The method of claim 1, wherein extracting, from the initial gesture vibration signal, a target gesture vibration signal that effectively characterizes the air-written character information comprises:
performing endpoint detection according to the signal strength of the initial gesture vibration signal, and determining, in the initial gesture vibration signal, a target starting endpoint and a target ending endpoint of the gesture vibration signal that effectively characterizes the air-written character information; and
intercepting the valid target gesture vibration signal from the initial gesture vibration signal according to the target starting endpoint and the target ending endpoint.
3. The method of claim 2, wherein performing endpoint detection according to the signal strength of the initial gesture vibration signal, and determining, in the initial gesture vibration signal, a target starting endpoint and a target ending endpoint of the gesture vibration signal that effectively characterizes the air-written character information comprises:
detecting the signal strength of the initial gesture vibration signal at a preset time interval;
if the signal strength of the initial gesture vibration signal is detected to be greater than a first strength threshold, taking the detection point at which the signal strength exceeds the first strength threshold as the target starting endpoint, and continuing to detect the signal strength of the initial gesture vibration signal after the target starting endpoint is detected; and
if, during the continued detection, the signal strength of the initial gesture vibration signal is detected to be less than a second strength threshold, taking the detection point at which the signal strength falls below the second strength threshold as the target ending endpoint;
wherein the first strength threshold and the second strength threshold are equal, or the difference between them is within a preset range.
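For illustration, the threshold-based endpoint detection of claim 3 could be sketched as below, assuming the signal is available as a list of per-sample strength values and (as the claim permits) taking the two thresholds as separate parameters:

```python
def detect_endpoints(signal, start_threshold, stop_threshold):
    """Return (start_index, end_index) of the segment whose strength first
    rises above start_threshold and later falls below stop_threshold,
    or None if no such segment is found."""
    start = None
    for i, strength in enumerate(signal):
        if start is None:
            # Looking for the target starting endpoint.
            if strength > start_threshold:
                start = i
        else:
            # After the starting endpoint, keep detecting until the
            # strength drops below the second threshold.
            if strength < stop_threshold:
                return start, i  # target ending endpoint found
    return None
```

With equal thresholds, `detect_endpoints([0, 0, 5, 6, 7, 0, 0], 3, 3)` yields `(2, 5)`: the writing segment begins where the strength exceeds 3 and ends where it falls back below 3.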
4. The method of claim 2, wherein performing endpoint detection according to the signal strength of the initial gesture vibration signal, and determining, in the initial gesture vibration signal, a target starting endpoint and a target ending endpoint of the gesture vibration signal that effectively characterizes the air-written character information comprises:
extracting, from the initial gesture vibration signal, gesture vibration signal frames of a preset frame length according to a preset frame step, and calculating the signal energy of each extracted frame;
if the signal energy of an extracted frame of the preset frame length is detected to be greater than a first energy threshold, taking the extraction point at which the signal energy exceeds the first energy threshold as the target starting endpoint, and continuing to extract frames of the preset frame length and calculate their signal energy after the target starting endpoint is detected; and
if, during the continued extraction, the signal energy of an extracted frame of the preset frame length is detected to be less than a second energy threshold, taking the extraction point at which the signal energy falls below the second energy threshold as the target ending endpoint;
wherein the first energy threshold and the second energy threshold are equal, or the difference between them is within a preset range.
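The frame-based variant in claim 4 can be sketched similarly. Here the frame length and frame step are in samples, and frame energy is taken as the sum of squared samples, one common definition that is assumed here since the claim does not fix one:

```python
def detect_endpoints_by_energy(signal, frame_len, frame_step,
                               start_energy, stop_energy):
    """Slide a frame of frame_len samples over the signal in steps of
    frame_step; return (start_sample, end_sample) where frame energy
    first exceeds start_energy and later drops below stop_energy."""
    start = None
    for offset in range(0, len(signal) - frame_len + 1, frame_step):
        frame = signal[offset:offset + frame_len]
        energy = sum(x * x for x in frame)  # assumed energy definition
        if start is None:
            if energy > start_energy:
                start = offset               # target starting endpoint
        elif energy < stop_energy:
            return start, offset + frame_len  # target ending endpoint
    return None
```

Framing makes the detector less sensitive to single-sample spikes than the per-sample strength check of claim 3, since a whole frame must carry enough energy to trigger an endpoint.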
5. The method of claim 2, wherein intercepting the valid target gesture vibration signal from the initial gesture vibration signal according to the target starting endpoint and the target ending endpoint comprises:
if the length between the target starting endpoint and the target ending endpoint is determined to be within a preset endpoint-length threshold range, intercepting the valid target gesture vibration signal from the initial gesture vibration signal according to the target starting endpoint and the target ending endpoint.
6. The method of claim 2, further comprising, before determining the target starting endpoint and the target ending endpoint of the gesture vibration signal that effectively characterizes the air-written character information in the initial gesture vibration signal:
filtering the initial gesture vibration signal with a preset high-pass filter to obtain a filtered initial gesture vibration signal.
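As one simple possibility (the claim does not specify the filter's order or cutoff), a first-order high-pass filter given by the difference equation y[n] = a * (y[n-1] + x[n] - x[n-1]) removes the slowly varying component of the vibration signal, such as a constant sensor offset, while preserving the fast vibrations produced by writing:

```python
def high_pass(signal, alpha=0.95):
    """First-order high-pass filter: suppresses slow drift and DC offset
    while keeping fast vibration components.
    alpha in (0, 1); a higher alpha gives a lower cutoff frequency."""
    out = [0.0]
    for n in range(1, len(signal)):
        out.append(alpha * (out[-1] + signal[n] - signal[n - 1]))
    return out
```

A constant input is mapped to (near) zero output, which is exactly the behaviour wanted before endpoint detection: a stationary hand should not trigger a starting endpoint.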
7. The method of claim 1, wherein determining, by performing vibration feature recognition on the target gesture vibration signal, the gesture type used when writing characters in the air comprises:
extracting time-domain features and frequency-domain features of the target gesture vibration signal to obtain a fused vibration-signal feature that fuses the time-domain features and the frequency-domain features; and
inputting the fused vibration-signal feature of the target gesture vibration signal into a pre-trained gesture classification model, and outputting the gesture type used when writing characters in the air.
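A sketch of the feature-fusion step of claim 7, with a small assumed feature set: mean, standard deviation, and zero-crossing count for the time domain, and the magnitudes of a few low-frequency DFT bins for the frequency domain. The actual features and the classifier architecture are not specified in the claim:

```python
import cmath
import statistics

def fused_features(signal, num_freq_bins=4):
    """Concatenate time-domain and frequency-domain features into one vector."""
    n = len(signal)
    # Time-domain features.
    mean = statistics.fmean(signal)
    std = statistics.pstdev(signal)
    zero_crossings = sum(
        1 for a, b in zip(signal, signal[1:]) if (a - mean) * (b - mean) < 0
    )
    # Frequency-domain features: magnitudes of the first DFT bins (naive DFT).
    mags = []
    for k in range(1, num_freq_bins + 1):
        coeff = sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(signal))
        mags.append(abs(coeff) / n)
    # Fusion by concatenation; a pre-trained classifier would consume this vector.
    return [mean, std, float(zero_crossings)] + mags
```

For a periodic test signal with period 4, the DFT bin matching that period dominates the frequency-domain part of the vector, which is the kind of cue a trained gesture classifier could exploit.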
8. A character input device, configured in an electronic device, the device comprising:
an initial signal determining module, configured to determine an initial gesture vibration signal generated while characters are written in the air through gestures;
a target signal extraction module, configured to extract, from the initial gesture vibration signal, a target gesture vibration signal that effectively characterizes the air-written character information; and
a gesture character input module, configured to perform vibration feature recognition on the target gesture vibration signal, determine the gesture type used when writing characters in the air, and execute a character input operation based on the gesture type.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the character input method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the character input method of any one of claims 1 to 7.
CN202011270920.7A 2020-11-13 2020-11-13 Character input method, character input device, electronic equipment and storage medium Pending CN112363622A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011270920.7A CN112363622A (en) 2020-11-13 2020-11-13 Character input method, character input device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112363622A true CN112363622A (en) 2021-02-12

Family

ID=74514967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011270920.7A Pending CN112363622A (en) 2020-11-13 2020-11-13 Character input method, character input device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112363622A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4270156A4 (en) * 2021-03-24 2024-04-24 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Gesture data acquisition method and apparatus, terminal, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110163956A1 (en) * 2008-09-12 2011-07-07 James Franklin Zdralek Bimanual Gesture Based Input and Device Control System
CN103760983A (en) * 2014-01-23 2014-04-30 中国联合网络通信集团有限公司 Virtual gesture input method and gesture collecting device
CN105224066A (en) * 2014-06-03 2016-01-06 北京创思博德科技有限公司 A kind of gesture identification method based on high in the clouds process
CN107316067A (en) * 2017-05-27 2017-11-03 华南理工大学 A kind of aerial hand-written character recognition method based on inertial sensor


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Wei Wentao: "Research on Multi-Stream Fusion and Multi-View Deep Learning Methods for sEMG-Based Gesture Recognition", China Doctoral Dissertations Full-text Database, no. 12, 5 December 2018 (2018-12-05), pages 138-192 *
Zhang Xuhao: "Research on Gesture Recognition Based on 77 GHz Millimeter-Wave Radar", China Masters' Theses Full-text Database, Information Science and Technology, no. 6, 15 June 2020 (2020-06-15), pages 132-629 *
Wang Yuan, Tang Yongming, Wang Baoping et al.: "Improved Gesture Recognition Algorithm for Large Gesture Sets Based on Acceleration Sensors", Chinese Journal of Sensors and Actuators, vol. 26, no. 10, 15 October 2013 (2013-10-15), pages 33-39 *

Similar Documents

Publication Publication Date Title
EP3754542B1 (en) Method and apparatus for recognizing handwriting in air, and device and computer-readable storage medium
EP3109797B1 (en) Method for recognising handwriting on a physical surface
CN111857356B (en) Method, device, equipment and storage medium for recognizing interaction gesture
WO2018161906A1 (en) Motion recognition method, device, system and storage medium
US20140168057A1 (en) Gyro aided tap gesture detection
CN109167893B (en) Shot image processing method and device, storage medium and mobile terminal
CN110972112B (en) Subway running direction determining method, device, terminal and storage medium
CN108847941B (en) Identity authentication method, device, terminal and storage medium
CN109840480B (en) Interaction method and interaction system of smart watch
CN111752388A (en) Application control method, device, equipment and storage medium
CN106598231B (en) gesture recognition method and device
CN114397963B (en) Gesture recognition method and device, electronic equipment and storage medium
CN112286360A (en) Method and apparatus for operating a mobile device
CN112363622A (en) Character input method, character input device, electronic equipment and storage medium
CN111831116A (en) Intelligent equipment interaction method based on PPG information
CN113342170A (en) Gesture control method, device, terminal and storage medium
CN117666791A (en) Gesture control dual authentication method and device, electronic equipment and storage medium
CN109725722B (en) Gesture control method and device for screen equipment
WO2018014432A1 (en) Voice application triggering control method, device and terminal
CN117389454B (en) Finger joint operation identification method and electronic equipment
WO2022099588A1 (en) Character input method and apparatus, electronic device, and storage medium
CN111766941B (en) Gesture recognition method and system based on intelligent ring
CN113642493A (en) Gesture recognition method, device, equipment and medium
KR20190028675A (en) Method and apparatus for recognizing motion to be considered noise
CN111913574B (en) Method, apparatus, electronic device, and computer-readable medium for controlling device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240325

Address after: Room 8032, 8th Floor, Building B, Vitality Business Plaza, No. 185 Jumao Street, Yuanhe Street, Xiangcheng District, Suzhou City, Jiangsu Province, 215000

Applicant after: Suzhou Waibing Intelligent Technology Co.,Ltd.

Country or region after: China

Applicant after: Chen Wenqiang

Address before: 2906h, block B, Zhongshen garden, no.2010 CaiTian Road, Fushan community, Futian street, Futian District, Shenzhen, Guangdong 518000

Applicant before: Shenzhen Zhenke Intelligent Technology Co.,Ltd.

Country or region before: China
