CN115988130A - Method for identifying input content in motion state based on mobile phone sensor - Google Patents

Method for identifying input content in motion state based on mobile phone sensor

Info

Publication number
CN115988130A
Authority
CN
China
Prior art keywords
data
motion state
click position
neural network
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211652973.4A
Other languages
Chinese (zh)
Other versions
CN115988130B (en)
Inventor
朱飑凯
袁纬杰
刘蔚
游紫绒
鲍玉奥
曾倩倩
李峰
张倩
刘三满
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanxi Police College
Original Assignee
Shanxi Police College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi Police College filed Critical Shanxi Police College
Priority to CN202211652973.4A priority Critical patent/CN115988130B/en
Publication of CN115988130A publication Critical patent/CN115988130A/en
Application granted granted Critical
Publication of CN115988130B publication Critical patent/CN115988130B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 — Reducing energy consumption in communication networks
    • Y02D 30/70 — Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Input From Keyboards Or The Like (AREA)

Abstract

The invention relates to input recognition for smartphones in a motion state, and in particular to a method for identifying input content in a motion state based on mobile phone sensors, comprising the following steps. Step 1: collect the data output by the sensors while the user types on the soft keyboard. Step 2: train a multilayer perceptron neural network, which outputs a click position sequence. Step 3: feed the click position sequence into a restoration algorithm, which converts it into reference content. Step 4: when the accuracy of the reference content converted from the click position sequence output by the multilayer perceptron neural network reaches a set value, the network meets the usage requirement. Step 5: display the converted reference content in a reference frame at one side of the soft keyboard; the user chooses between the reference content in the reference frame and the content in the input frame. The method improves the accuracy of one-handed input in a motion state and improves the user's one-handed input experience.

Description

Method for identifying input content in motion state based on mobile phone sensor
Technical Field
The invention relates to input recognition for smartphones in a motion state, and in particular to a method for identifying input content in a motion state based on mobile phone sensors.
Background
Traditional feature phones had physical keys, and their small bodies made one-handed operation easy. Today's smartphones, however, have a touch screen and no physical keys, and input is performed by calling up an on-screen soft keyboard. The soft keyboard is better suited to two-handed operation: one hand holds the phone while the other taps the input positions. Yet one-handed input scenarios are everywhere in daily life, for example when one hand is carrying an object and only the other holds the smartphone. A common soft keyboard solution is to shrink the keyboard area and shift it toward the dominant hand, but this reduces the clickable range, makes accidental touches more likely, and lowers input accuracy. A prior invention uses gyroscope offsets to compute the tapped area, determine the input position, and restore the input content, but it cannot restore the input content accurately while the user is in motion, which degrades the one-handed input experience.
Disclosure of Invention
To address the low input accuracy of one-handed smartphone operation in a motion state, the invention provides a method for identifying input content in a motion state based on mobile phone sensors.
A method for recognizing input content in a motion state based on mobile phone sensors comprises the following steps:
Step 1: divide the soft keyboard into several different regions.
Step 2: data collection and preprocessing: collect the click-position data output by the sensors while the user types on the soft keyboard in a motion state; preprocess the collected data, then split it into a training set and a test set according to a fixed ratio.
Step 3: motion state sensing and click position judgment: train the multilayer perceptron neural network on the training set data; the network outputs a click position sequence.
Step 4: input content restoration: feed the obtained click position sequence into the restoration algorithm, which converts it into reference content.
Step 5: test the multilayer perceptron neural network on the test set data; when the accuracy of the reference content converted from the click position sequence output by the network reaches a set value, the network meets the usage requirement; otherwise, continue training it.
Step 6: display the converted reference content in a reference frame at one side of the soft keyboard as a reference for the content in the soft keyboard's input frame; the user chooses between the reference content in the reference frame and the content in the input frame, which improves input accuracy.
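The split-train-test loop of steps 2-5 might look like the minimal sketch below. Everything here is illustrative: the feature dimension, the synthetic data, the hidden-layer size, and the 0.8 target accuracy (the "set value") are assumptions, not values taken from the patent.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins for preprocessed sensor features and region labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))       # e.g. mean/max features per window
y = rng.integers(0, 5, size=1000)     # one of five keyboard regions
X[:, 0] += 4.0 * y                    # make the toy task learnable

# Step 2's 7:3 split into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Steps 3 and 5: warm_start lets the same network keep training
# until its accuracy on the test set reaches the set value.
clf = MLPClassifier(hidden_layer_sizes=(32,), warm_start=True,
                    max_iter=100, random_state=0)
target_accuracy = 0.8                 # hypothetical "set value"
for _ in range(10):                   # cap the retraining rounds
    clf.fit(X_train, y_train)
    if clf.score(X_test, y_test) >= target_accuracy:
        break
```

The loop mirrors step 5's "otherwise, continue training" rule; a real system would train on actual preprocessed sensor windows rather than synthetic vectors.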
In the above method for identifying input content in a motion state based on mobile phone sensors, step 1 specifically includes the following steps:
Step 11: the sensors include, but are not limited to, acceleration sensors and gyroscope sensors.
Step 12: the motion states include, but are not limited to, standing, sitting still, lying down, walking, and going up or down stairs.
Step 13: the data are split in a 7:3 ratio, with 7 parts for the training set and 3 parts for the test set.
Step 14: data preprocessing includes converting the raw data into sensor window events, selecting feature values, and low-pass filtering.
1) Window events: the raw data are converted into short time segments through window events;
2) Low-pass filtering: high-frequency information is filtered out of each short segment, keeping the low-frequency click information;
3) Feature value calculation: the mean and maximum of each short segment are computed so that the classification features stand out, easing the computation of the multilayer perceptron neural network.
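Steps 11-14 could be sketched as below. The window length, the moving-average low-pass filter, and the six sensor axes (three accelerometer, three gyroscope) are assumptions for illustration; the patent does not fix any of them.

```python
import numpy as np

def preprocess(raw, win=50, smooth=5):
    """Turn a raw (n_samples, n_axes) sensor stream into one feature
    vector per window: window events, low-pass filtering, features."""
    # 1) window events: cut the stream into short fixed-length segments
    n_win = raw.shape[0] // win
    windows = raw[:n_win * win].reshape(n_win, win, -1)
    # 2) low-pass filter: a moving average keeps the low-frequency
    #    click information and suppresses high-frequency jitter
    kernel = np.ones(smooth) / smooth
    windows = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 1, windows)
    # 3) feature values: per-axis mean and max make the classes stand out
    return np.concatenate([windows.mean(axis=1), windows.max(axis=1)], axis=1)

stream = np.random.default_rng(1).normal(size=(500, 6))  # accel + gyro axes
features = preprocess(stream)                            # shape (10, 12)
```

The resulting mean/max feature vectors are what the multilayer perceptron would be trained on.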
In the above method for identifying input content in a motion state based on mobile phone sensors, step 2 specifically includes the following steps:
Step 21: the preprocessed sensor data are collected and fed as input into the multilayer perceptron neural network for training; after training is complete, the network is used for prediction.
Step 22: in the training stage, after the multilayer perceptron neural network receives the input data, it classifies the motion state and the click position. If the classified motion state and click position match the actual motion state and the region containing the click, the click position code and the motion state code are recorded. If they do not match, the network's weights are adjusted and training is repeated; training is complete once the learning rate and accuracy are sufficiently high. In the prediction stage, after receiving the input data, the network computes from its weights and the features of the input, then outputs and records the click position code and the motion state code.
Step 23: the motion state codes are mapped back to motion states and output. The recorded click position codes are ordered and restored into a click position sequence.
Each motion state is assigned a corresponding code, each region into which the soft keyboard is divided is assigned a corresponding code, and the click position sequence represents the order of the regions in which the clicks occurred.
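Step 23 and the coding scheme above might be sketched as follows; the specific code tables are hypothetical, since the patent assigns a code per motion state and per region but does not specify the values.

```python
# Hypothetical code tables (illustrative values only).
MOTION_STATES = {0: "standing", 1: "sitting still", 2: "lying down",
                 3: "walking", 4: "going upstairs", 5: "going downstairs"}
REGIONS = {0: "R1", 1: "R2", 2: "R3", 3: "R4", 4: "R5"}

def decode(motion_code, region_codes):
    """Map recorded codes back to a motion state and to the click
    position sequence (the order of the regions that were clicked)."""
    state = MOTION_STATES[motion_code]
    click_sequence = [REGIONS[c] for c in region_codes]
    return state, click_sequence

state, seq = decode(3, [1, 0, 4])  # walking; clicks in R2, then R1, then R5
```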
In the above method for identifying input content in a motion state based on mobile phone sensors, step 3 specifically includes the following steps:
Step 31: the restoration algorithm restores the click position sequence into input information;
Step 32: a natural language processing method is added to the restoration algorithm, improving restoration accuracy and narrowing the search range for the content.
In the above method for identifying input content in a motion state based on mobile phone sensors, the keyboard is divided into five regions to improve the accuracy of click position judgment; the multilayer perceptron neural network classifies each click into one of these regions and finally outputs the motion state and the click position sequence.
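As an illustration of the five-region division and the restoration algorithm of steps 31-32, the sketch below assigns each letter to one of five hypothetical regions and matches a click position sequence against a small stand-in dictionary, T9-style. The region layout and the dictionary are assumptions, not taken from the patent; a real implementation would use a proper language model for the NLP step.

```python
# Hypothetical five-region split of the letter keys.
REGION_OF = {ch: r for r, row in enumerate(
    ["qwert", "yuiop", "asdfg", "hjkl", "zxcvbnm"]) for ch in row}

DICTIONARY = ["hello", "help", "hold", "world"]  # stand-in language model

def restore(click_sequence):
    """Restoration sketch: return every dictionary word whose letters
    fall, in order, into the clicked regions (the reference content)."""
    return [w for w in DICTIONARY
            if len(w) == len(click_sequence)
            and all(REGION_OF[c] == r for c, r in zip(w, click_sequence))]

candidates = restore([3, 0, 3, 1])  # region codes of h, e, l, p
```

Filtering by a dictionary is what narrows the search range: only words consistent with the region sequence are offered as reference content.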
The method improves the accuracy of one-handed input in a motion state and improves the user's one-handed input experience.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a diagram of the partitioning of the soft keyboard according to the method of the present invention.
FIG. 3 is a numeric keyboard diagram of the method of the present invention.
Detailed Description
A method for recognizing input content in a motion state based on mobile phone sensors comprises the following steps:
Step 1: divide the soft keyboard into several different regions.
Step 2: data collection and preprocessing: collect the click-position data output by the sensors while the user types on the soft keyboard in a motion state; preprocess the collected data, then split it into a training set and a test set according to a fixed ratio. To improve the accuracy of click position judgment, the keyboard is divided into five regions; the multilayer perceptron neural network classifies each click into one of these regions and finally outputs the motion state and the click position sequence.
Step 3: motion state sensing and click position judgment: train the multilayer perceptron neural network on the training set data; the network outputs a click position sequence.
Step 4: input content restoration: feed the obtained click position sequence into the restoration algorithm, which converts it into reference content.
Step 5: test the multilayer perceptron neural network on the test set data; when the accuracy of the reference content converted from the click position sequence output by the network reaches a set value, the network meets the usage requirement; otherwise, continue training it.
Step 6: display the converted reference content in a reference frame at one side of the soft keyboard as a reference for the content in the soft keyboard's input frame; the user chooses between the reference content in the reference frame and the content in the input frame, which improves input accuracy.
In the above method for identifying input content in a motion state based on mobile phone sensors, step 1 specifically includes the following steps:
Step 11: the sensors include, but are not limited to, acceleration sensors and gyroscope sensors.
Step 12: the motion states include, but are not limited to, standing, sitting still, lying down, walking, and going up or down stairs.
Step 13: the data are split in a 7:3 ratio, with 7 parts for the training set and 3 parts for the test set.
Step 14: data preprocessing includes converting the raw data into sensor window events, selecting feature values, and low-pass filtering.
1) Window events: the raw data are converted into short time segments through window events;
2) Low-pass filtering: high-frequency information is filtered out of each short segment, keeping the low-frequency click information;
3) Feature value calculation: the mean and maximum of each short segment are computed so that the classification features stand out, easing the computation of the multilayer perceptron neural network.
In the above method for identifying input content in a motion state based on mobile phone sensors, step 2 specifically includes the following steps:
Step 21: the preprocessed sensor data are collected and fed as input into the multilayer perceptron neural network for training; after training is complete, the network is used for prediction.
Step 22: in the training stage, after the multilayer perceptron neural network receives the input data, it classifies the motion state and the click position. If the classified motion state and click position match the actual motion state and the region containing the click, the click position code and the motion state code are recorded. If they do not match, the network's weights are adjusted and training is repeated; training is complete once the learning rate and accuracy are sufficiently high. In the prediction stage, after receiving the input data, the network computes from its weights and the features of the input, then outputs and records the click position code and the motion state code.
Step 23: the motion state codes are mapped back to motion states and output. The recorded click position codes are ordered and restored into a click position sequence.
In the above method for recognizing input content in a motion state based on mobile phone sensors, step 3 specifically includes the following steps:
Step 31: the restoration algorithm restores the click position sequence, which represents the ordered combination of the clicked regions, into input information;
Step 32: a natural language processing method is added to the restoration algorithm, improving restoration accuracy and narrowing the search range for the content.
In the above method for identifying input content in a motion state based on mobile phone sensors, the keyboard is divided into five regions to improve the accuracy of click position judgment; the multilayer perceptron neural network classifies each click into one of these regions and finally outputs the motion state and the click position sequence.
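Tying the pieces together, the prediction stage might be sketched as follows: a trained multilayer perceptron maps each preprocessed click window to a region code, and the codes, kept in click order, form the click position sequence handed to the restoration algorithm. The synthetic data and network size are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Train a toy region classifier on synthetic feature vectors.
rng = np.random.default_rng(2)
X_train = rng.normal(size=(500, 12))
y_train = rng.integers(0, 5, size=500)
X_train[:, 0] += 4.0 * y_train          # separable toy data

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                    random_state=0).fit(X_train, y_train)

def click_sequence(windows):
    """One region code per click window, preserving click order."""
    return clf.predict(windows).tolist()

# Three new click windows arrive in order; their predicted region
# codes become the click position sequence for restoration.
clicks = rng.normal(size=(3, 12))
clicks[:, 0] += 4.0 * np.array([2, 0, 4])
seq = click_sequence(clicks)
```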
The above embodiments are only illustrative and do not limit the technical solutions described herein; although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that it may be modified or its features replaced by equivalents, and all such modifications and variations fall within the scope of the present disclosure and the following claims.

Claims (5)

1. A method for recognizing input content in a motion state based on a mobile phone sensor, characterized in that the method comprises the following steps:
step 1: dividing the soft keyboard into several different regions;
step 2: data collection and preprocessing: collecting the click-position data output by the sensors while the user inputs through the soft keyboard in a motion state; preprocessing the collected data and then splitting it into a training set and a test set according to a fixed ratio;
step 3: motion state sensing and click position judgment: training the multilayer perceptron neural network on the training set data, the network outputting a click position sequence;
step 4: input content restoration: after the click position sequence is obtained, feeding it into the restoration algorithm, which converts it into reference content;
step 5: testing the multilayer perceptron neural network on the test set data, wherein when the accuracy of the reference content converted from the click position sequence output by the network reaches a set value, the network meets the usage requirement, and otherwise training continues;
step 6: displaying the converted reference content in a reference frame at one side of the soft keyboard as a reference for the content in the soft keyboard's input frame, the user selecting between the reference content in the reference frame and the content in the input frame, thereby improving input accuracy.
2. The method for recognizing input content in a motion state based on a mobile phone sensor as claimed in claim 1, characterized in that step 1 specifically comprises the following steps:
step 11: the sensors include, but are not limited to, acceleration sensors and gyroscope sensors;
step 12: the motion states include, but are not limited to, standing, sitting still, lying down, walking, and going up or down stairs;
step 13: the data are split in a 7:3 ratio, with 7 parts for the training set and 3 parts for the test set;
step 14: data preprocessing comprises converting the raw data into sensor window events, selecting feature values, and low-pass filtering;
1) window events: the raw data are converted into short time segments through window events;
2) low-pass filtering: high-frequency information is filtered out of each short segment, keeping the low-frequency click information;
3) feature value calculation: the mean and maximum of each short segment are computed so that the classification features stand out, easing the computation of the multilayer perceptron neural network.
3. The method for identifying input content in a motion state based on a mobile phone sensor as claimed in claim 1 or 2, characterized in that step 2 specifically comprises the following steps:
step 21: collecting the preprocessed sensor data and feeding them as input into the multilayer perceptron neural network for training, prediction being performed after training is complete;
step 22: in the training stage, after the multilayer perceptron neural network receives the input data, it classifies the motion state and the click position through the network's computation; if the classified motion state and click position match the actual motion state and the region containing the click, the click position code and the motion state code are recorded;
if they do not match, the network's weights are adjusted and training is repeated, training being complete once the learning rate and accuracy are sufficiently high; in the prediction stage, after receiving the input data, the network computes from its weights and the features of the input, then outputs and records the click position code and the motion state code;
step 23: mapping the motion state codes back to motion states and outputting them, then ordering the recorded click position codes and restoring them into a click position sequence.
4. The method for identifying input content in a motion state based on a mobile phone sensor as claimed in claim 1 or 2, characterized in that step 3 specifically comprises the following steps:
step 31: the restoration algorithm restores the click position sequence, i.e. the ordered combination of the letters/numbers in the clicked regions, into input information;
step 32: a natural language processing method is added to the restoration algorithm, improving restoration accuracy and narrowing the search range for the content.
5. The method for recognizing input content in a motion state based on a mobile phone sensor as claimed in claim 1 or 2, characterized in that, to improve the accuracy of click position judgment, the soft keyboard is divided into five regions; the multilayer perceptron neural network classifies each click into one of these regions and finally outputs a click position sequence.
CN202211652973.4A 2022-12-22 2022-12-22 Method for identifying input content in motion state based on mobile phone sensor Active CN115988130B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211652973.4A CN115988130B (en) 2022-12-22 2022-12-22 Method for identifying input content in motion state based on mobile phone sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211652973.4A CN115988130B (en) 2022-12-22 2022-12-22 Method for identifying input content in motion state based on mobile phone sensor

Publications (2)

Publication Number Publication Date
CN115988130A true CN115988130A (en) 2023-04-18
CN115988130B CN115988130B (en) 2024-08-16

Family

ID=85966034

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211652973.4A Active CN115988130B (en) 2022-12-22 2022-12-22 Method for identifying input content in motion state based on mobile phone sensor

Country Status (1)

Country Link
CN (1) CN115988130B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850846A (en) * 2015-06-02 2015-08-19 深圳大学 Human behavior recognition method and human behavior recognition system based on depth neural network
CN105378606A (en) * 2013-05-03 2016-03-02 谷歌公司 Alternative hypothesis error correction for gesture typing
CN107229348A (en) * 2016-03-23 2017-10-03 北京搜狗科技发展有限公司 A kind of input error correction method, device and the device for inputting error correction
CN107837087A (en) * 2017-12-08 2018-03-27 兰州理工大学 A kind of human motion state recognition methods based on smart mobile phone
CN108182001A (en) * 2017-12-28 2018-06-19 科大讯飞股份有限公司 Input error correction method and device, storage medium and electronic equipment
CN110488990A (en) * 2019-08-12 2019-11-22 腾讯科技(深圳)有限公司 Input error correction method and device
CN110795019A (en) * 2019-10-23 2020-02-14 腾讯科技(深圳)有限公司 Key identification method and device of soft keyboard and storage medium
CN114356110A (en) * 2021-11-25 2022-04-15 科大讯飞股份有限公司 Input error correction method and related device, input equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant