CN106648040B - Terminal control method and device

Info

Publication number
CN106648040B
CN106648040B
Authority
CN
China
Prior art keywords
action
control instruction
user
state information
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510730602.7A
Other languages
Chinese (zh)
Other versions
CN106648040A (en)
Inventor
雷建军
陈果
冯瑶
邹亚琳
李东奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Chongqing University of Post and Telecommunications
Original Assignee
Tencent Technology Shenzhen Co Ltd
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd and Chongqing University of Post and Telecommunications
Priority to CN201510730602.7A
Publication of CN106648040A
Application granted
Publication of CN106648040B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G08 SIGNALLING
    • G08C TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C17/00 Arrangements for transmitting signals characterised by the use of a wireless electrical link
    • G08C17/02 Arrangements for transmitting signals characterised by the use of a wireless electrical link using a radio link
    • G PHYSICS
    • G08 SIGNALLING
    • G08C TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C23/00 Non-electrical signal transmission systems, e.g. optical systems
    • G08C23/04 Non-electrical signal transmission systems, e.g. optical systems using light waves, e.g. infrared

Abstract

The embodiment of the invention discloses a terminal control method and device. The method comprises the following steps: determining a pre-action among a plurality of actions performed when a user inputs a control instruction; detecting a current action of the user; comparing the current action with the pre-action; and executing the control instruction when the current action matches the pre-action. The embodiment of the invention simplifies the user's operation steps and makes it convenient to control the terminal.

Description

Terminal control method and device
Technical Field
The invention relates to the technical field of human-computer interaction, in particular to a terminal control method and device.
Background
Household appliances occupy a very important position in people's lives, and as living standards improve, people increasingly expect household appliances to be more intelligent, easier to use, and more capable. For example, the infrared remote controller of a television has made daily life and entertainment more convenient, and has accustomed people to controlling household appliances remotely rather than by direct contact.
At present, the traditional household appliance controller operates the household appliance through infrared. Specifically, a conventional household appliance controller comprises a panel with buttons, an encoder, and an infrared emitter. The controller generates a control instruction according to the user's button operation on the panel, encodes the control instruction through the encoder, and sends an infrared signal in an infrared coding format to the household appliance through the infrared emitter. After receiving a recognizable infrared signal, the household appliance performs the control operation according to that signal.
However, each existing household appliance is operated by its own dedicated remote controller: the air conditioner needs an air conditioner remote controller, the television needs a television remote controller, and the DVD player needs a DVD remote controller. When a user wants to control a certain household appliance, the user must first find the corresponding remote controller, which is inconvenient and degrades the user experience.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide a terminal control method and device that simplify the user's operation steps and make it convenient to control a terminal.
In order to solve the above technical problem, an embodiment of the present invention provides a terminal control method, including:
determining a pre-action among a plurality of actions performed when a user inputs a control instruction;
detecting a current action of the user;
comparing the current action with the pre-action; and
executing the control instruction when the current action matches the pre-action.
Correspondingly, an embodiment of the present invention further provides a terminal control apparatus, including:
a pre-action determining module, configured to determine a pre-action among a plurality of actions performed when a user inputs a control instruction;
a current action detection module, configured to detect the current action of the user;
a comparison module, configured to compare the current action with the pre-action; and
a control instruction execution module, configured to execute the control instruction when the current action matches the pre-action.
By implementing the embodiment of the invention, a pre-action is determined among a plurality of actions performed when the user inputs a control instruction, the current action of the user is detected and compared with the pre-action, and the control instruction is executed when the current action matches the pre-action. This simplifies the user's operation steps and makes it convenient to control the terminal.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the framework of a terminal control system provided in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a terminal control method provided in an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a terminal control method provided in another embodiment of the present invention;
Fig. 4A is a schematic diagram of determining a pre-action provided in an embodiment of the present invention;
Fig. 4B is a schematic diagram of determining a pre-action provided in another embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a terminal control device provided in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from these embodiments without creative effort fall within the protection scope of the present invention.
The terminal control method can run on a mobile terminal such as a tablet computer, a mobile phone, a personal computer (PC), a notebook computer, or a network television.
Referring to Fig. 1, Fig. 1 is a schematic diagram of the framework of a terminal control system in an embodiment of the present invention. The terminal control system at least includes a mobile terminal and a wearable device, and the mobile terminal can establish a Bluetooth connection or an infrared connection with the wearable device. The wearable device can be a smart bracelet, a smart watch, smart gloves, or the like. Wherein:
The wearable device is used for detecting state information when a user inputs a control instruction and sending the detected state information to the mobile terminal, where the state information comprises acceleration data and/or angle data.
In a specific implementation, the wearable device may include a motion data acquisition module and a Bluetooth module. The motion data acquisition module may include an acceleration collector and/or an angle collector, such as an accelerometer or a gyroscope; the wearable device collects acceleration data through the acceleration collector and angle data through the angle collector. Analysis shows that the hand is the body part used most frequently in daily activities, is the most sensitive, and produces motions that differ greatly between actions and are highly distinctive. When the user wears the wearable device during daily activities, the motion data acquisition module collects the acceleration and/or angular velocity data of the hand in real time. The wearable device can also establish a Bluetooth connection with the terminal through the Bluetooth module and send the acceleration and/or angular velocity data to the mobile terminal over that connection.
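For illustration only (this sketch is not part of the patent text, and every function in it is a hypothetical stand-in for the wearable device's drivers), the collection loop described above might look as follows, sampling the accelerometer and gyroscope and forwarding each sample over an established Bluetooth connection:

```python
import time

SAMPLE_PERIOD_S = 0.05  # assumed 20 Hz sampling rate for hand motion

def read_accelerometer():
    """Hypothetical driver call returning (ax, ay, az) acceleration."""
    return (0.0, 0.0, 9.8)

def read_gyroscope():
    """Hypothetical driver call returning (wx, wy, wz) angular velocity."""
    return (0.0, 0.0, 0.0)

def bluetooth_send(packet):
    """Hypothetical wrapper around the wearable's Bluetooth module."""
    print("sending", packet)

def collection_loop(num_samples=20):
    # Sample the hand's acceleration and angular-velocity data in real time
    # and forward each sample to the mobile terminal over Bluetooth.
    for _ in range(num_samples):
        packet = {"t": time.time(),
                  "acc": read_accelerometer(),
                  "gyro": read_gyroscope()}
        bluetooth_send(packet)
        time.sleep(SAMPLE_PERIOD_S)
```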
Optionally, before the wearable device collects the state information of the hand through the motion data acquisition module, it may receive an action detection instruction. The user may submit the action detection instruction by clicking a physical key or a virtual key of the wearable device; the wearable device may also receive an action detection instruction sent by the terminal, which the user may submit by "shaking" the terminal. This is not specifically limited in the embodiment of the present invention.
The mobile terminal is used for analyzing and processing the state information to acquire at least one characteristic value, taking the at least one characteristic value as input to a preset neural network algorithm to obtain the plurality of actions performed by the user when inputting a control instruction, and determining a pre-action among the plurality of actions.
The wearable device is further used for detecting the current state information of the user and sending it to the mobile terminal. In a specific implementation, the wearable device can detect the user's current motion information through the motion data acquisition module; further, it can detect the user's acceleration data through the acceleration collector and the user's angle data through the angle collector. Optionally, the wearable device may determine whether an action detection instruction has been received, and detect the current state information of the user in real time once it has.
The mobile terminal is also used for analyzing and processing the current state information, acquiring the current action of the user, comparing the current action with the pre-action, and executing a control instruction when the current action matches the pre-action. Specifically, after receiving the current state information sent by the wearable device, the mobile terminal can analyze and process it to obtain at least one characteristic value, take the at least one characteristic value as input to the preset neural network algorithm to obtain the current action of the user, and compare the current action with the pre-action. If the current action matches the pre-action, the mobile terminal executes the control instruction; if not, it may delete the current state information.
In an optional embodiment, the terminal control system in the embodiment of the present invention may further include an operation terminal, and a communication connection, for example a WIFI connection or a wired connection, may be established between the operation terminal and the mobile terminal. The operation terminal may include an intelligent appliance, a tablet computer, a mobile phone, a personal computer, a notebook computer, or the like, and the intelligent appliance may include a digital television receiving terminal, a lamp, a refrigerator, and so on. After comparing the current action with the pre-action, the mobile terminal may further perform the following operations:
When the current action matches the pre-action, the mobile terminal sends a control instruction to the operation terminal.
The operation terminal is used for executing the control instruction.
In a specific implementation, the operation terminal can include a WIFI module and an instruction execution module. The operation terminal establishes a communication connection with the mobile terminal through the WIFI module, receives the control instruction sent by the mobile terminal, and executes the control instruction through the instruction execution module.
In an optional embodiment, the terminal control system in the embodiment of the present invention may further include a cloud server, and a communication connection may be established between the cloud server and the mobile terminal, for example a wireless local area network connection or a wired connection established through a network.
The mobile terminal takes the at least one characteristic value as input to a preset neural network algorithm to obtain the plurality of actions performed by the user when inputting a control instruction, and determines a pre-action among the plurality of actions; this may specifically proceed as follows:
The mobile terminal is further used for sending the at least one characteristic value to the cloud server.
The cloud server is used for taking the at least one characteristic value as input to the preset neural network algorithm, obtaining the plurality of actions performed by the user when inputting the control instruction, determining the pre-action among the actions, and sending the determined pre-action to the mobile terminal.
The mobile terminal is also used for establishing the correspondence between the pre-action and the control instruction.
In an optional embodiment, the wearable device receives an action detection instruction submitted by the user when inputting a control instruction, detects state information in response to the action detection instruction, and sends the state information to the mobile terminal.
The mobile terminal processes the state information to obtain at least one characteristic value and sends the at least one characteristic value to the cloud server.
The cloud server takes the at least one characteristic value as input to the preset neural network algorithm to obtain the plurality of actions performed by the user when inputting the control instruction, acquires from those actions the action detected first in response to the action detection instruction, and determines the first detected action as the pre-action.
In an optional embodiment, the wearable device detects state information while the user repeatedly inputs the control instruction and sends the state information to the mobile terminal.
The mobile terminal processes the state information received each time to obtain at least one characteristic value and sends the at least one characteristic value to the cloud server.
The cloud server takes the at least one characteristic value as input to the preset neural network algorithm to obtain the plurality of actions performed by the user when inputting the control instruction, compares the actions detected during each input of the control instruction, acquires the actions that are the same across the inputs, and determines the action detected first among those same actions as the pre-action.
In the terminal control system shown in Fig. 1, the wearable device detects state information when the user inputs a control instruction and sends it to the mobile terminal. The mobile terminal analyzes the state information to obtain at least one characteristic value, takes the at least one characteristic value as input to a preset neural network algorithm to obtain the plurality of actions performed by the user when inputting the control instruction, and determines a pre-action among the plurality of actions. The wearable device then detects the current state information of the user and sends it to the mobile terminal. The mobile terminal analyzes and processes the current state information, acquires the current action of the user, compares the current action with the pre-action, and executes the control instruction when they match. This simplifies the user's operation steps and makes it convenient to control the mobile terminal.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a terminal control method in an embodiment of the present invention. The terminal control method in the embodiment of the present invention includes:
S201. The wearable device detects state information when a user inputs a control instruction, where the state information comprises acceleration data and/or angle data.
The wearable device may detect state information when the user inputs a control instruction, the state information including acceleration data and/or angle data. The control instruction may be used to instruct the operation terminal to perform a specified operation, such as turning on a light, entering the power-on state, or adjusting a temperature.
In an optional embodiment, the wearable device may receive an action detection instruction submitted by the user when inputting the control instruction, and detect the state information in response to the action detection instruction. Specifically, the user can submit the action detection instruction by clicking a physical key or a virtual key; optionally, the user can input the action detection instruction through the microphone of the mobile terminal, after which the mobile terminal sends it to the wearable device; optionally, the user can also submit the action detection instruction by swinging the wearable device or in other ways. Detecting state information only when an action detection instruction is received avoids having the wearable device acquire state information continuously, which improves the efficiency of state-information detection and saves resources.
In an optional embodiment, the wearable device may acquire state information while the user repeatedly inputs the control instruction and send the state information detected during each input to the mobile terminal. For example, the wearable device may detect the user's state information when the user inputs the control instruction for the first time and send that state information to the mobile terminal; it may likewise detect the state information when the user inputs the control instruction for the second time and send it to the mobile terminal. Preferably, before acquiring state information during the repeated inputs, the wearable device may receive an action detection instruction submitted by the user when inputting the control instruction.
S202. The wearable device sends the detected state information to the mobile terminal.
S203. The mobile terminal analyzes the state information to obtain at least one characteristic value.
The mobile terminal can analyze and process the state information to obtain at least one characteristic value. The characteristic value may include a root mean square of the angle in a preset time period, a sum of the angles in the preset time period, a root mean square of the acceleration in the preset time period, or an integral of the accelerations at different times.
Optionally, when the state information includes acceleration data and angle data, the mobile terminal may normalize the acceleration data and the angle data and store the normalized data in a global 20 × 6 two-dimensional array, where 20 represents the different sampling times and 6 represents the three-axis acceleration and the three-axis angle. To keep the data current, the mobile terminal may overwrite the oldest data in the two-dimensional array with the most recently normalized data, ensuring that the array always holds the latest samples and improving the accuracy of determining the pre-action. In a specific implementation, the mobile terminal may normalize the acceleration data and the angle data as follows: compute the maximum acceleration and the maximum angle in the received state information, divide every acceleration by the maximum acceleration, and divide every angle by the maximum angle. On this basis, all data fall within the [-1, 1] interval.
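A minimal sketch of this normalization step is given below (not part of the patent text). It divides by the maximum absolute value, an assumption that makes the stated [-1, 1] range hold even when readings are negative:

```python
import numpy as np

def normalize_window(window):
    """Normalize a 20x6 window of state information.

    Rows are the 20 sampling times; columns 0-2 hold the three-axis
    acceleration and columns 3-5 the three-axis angle.
    """
    data = np.asarray(window, dtype=float).reshape(20, 6)
    acc, ang = data[:, :3], data[:, 3:]
    acc = acc / np.max(np.abs(acc))  # all accelerations into [-1, 1]
    ang = ang / np.max(np.abs(ang))  # all angles into [-1, 1]
    return np.hstack([acc, ang])

def update_window(window, new_sample):
    # Overwrite the oldest row with the newest sample so the array
    # always holds the most recently acquired data.
    new_row = np.asarray(new_sample, dtype=float).reshape(1, 6)
    return np.vstack([window[1:], new_row])
```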
Further optionally, the mobile terminal may obtain a first characteristic value from the two-dimensional array, where the first characteristic value may include the root mean square of the angle within a preset time period and the sum of the angles within the preset time period. If the angles within the preset time period are \(X_1, X_2, \ldots, X_n\), the root mean square of the angle within the preset time period may be calculated as \( \sqrt{(X_1^2 + X_2^2 + \cdots + X_n^2)/n} \). The mobile terminal may further obtain a second characteristic value from the two-dimensional array, where the second characteristic value may include the root mean square of the acceleration within the preset time period and the integral of the acceleration at different times.
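The two characteristic values can be sketched as follows (not part of the patent text; the sampling interval dt is an assumption, since the patent does not specify one):

```python
import numpy as np

def first_characteristic_value(angles):
    """RMS of the angle over the preset period plus the sum of the angles."""
    x = np.asarray(angles, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))  # sqrt((X1^2 + ... + Xn^2) / n)
    return rms, float(x.sum())

def second_characteristic_value(acc, dt=0.05):
    """RMS of the acceleration over the preset period plus the integral of
    the acceleration over time (trapezoidal rule; dt is an assumed
    sampling interval)."""
    a = np.asarray(acc, dtype=float)
    rms = np.sqrt(np.mean(a ** 2))
    integral = float(np.trapz(a, dx=dt))
    return rms, integral
```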
S204. The mobile terminal sends the at least one characteristic value to the cloud server.
S205. The cloud server takes the at least one characteristic value as input to a preset neural network algorithm, obtains the plurality of actions performed by the user when inputting the control instruction, and determines the pre-action among the plurality of actions.
Optionally, after receiving the first characteristic value and the second characteristic value sent by the mobile terminal, the cloud server may take the first characteristic value and the second characteristic value as inputs to the preset neural network algorithm to obtain the plurality of actions performed when the user inputs the control instruction.
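The patent leaves the "preset neural network algorithm" abstract. As one concrete stand-in (an assumption, not the patent's algorithm), the sketch below trains a small multi-layer perceptron with scikit-learn on made-up feature vectors of the form [angle_rms, angle_sum, acc_rms, acc_integral] and uses it to map a feature vector to an action label:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy training data standing in for recorded windows: each row is a feature
# vector [angle_rms, angle_sum, acc_rms, acc_integral] and each label names
# the action that produced it. All values and labels are made up.
rng = np.random.default_rng(0)
X_raise = rng.normal([0.8, 4.0, 0.6, 0.3], 0.05, size=(30, 4))
X_press = rng.normal([0.3, 1.0, 0.9, 0.7], 0.05, size=(30, 4))
X_train = np.vstack([X_raise, X_press])
y_train = ["raise_arm"] * 30 + ["press_forward"] * 30

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

def recognize(features):
    """Map one feature vector to the most likely action label."""
    return clf.predict([features])[0]

print(recognize([0.79, 3.9, 0.61, 0.31]))  # -> "raise_arm"
```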
Optionally, if the state information was detected by the wearable device in response to an action detection instruction, then after obtaining the plurality of actions performed when the user inputs the control instruction, the cloud server may acquire, among the plurality of actions, the action the wearable device detected first in response to the action detection instruction, and determine that first detected action as the pre-action.
Optionally, if the wearable device detects state information while the user repeatedly inputs the control instruction and sends the state information detected during each input to the mobile terminal, then after receiving the characteristic values sent by the mobile terminal for each input of the control instruction, the cloud server can obtain the plurality of actions the user performs during each input, compare the actions across the inputs, acquire the actions that are the same in every input, and determine the action detected first among those same actions as the pre-action.
In the embodiment of the invention, having the cloud server determine the pre-action reduces the load on the mobile terminal and improves the efficiency of determining the pre-action.
S206. The cloud server sends the pre-action to the mobile terminal.
S207. The mobile terminal stores the pre-action and the control instruction corresponding to the pre-action.
After receiving the pre-action sent by the cloud server, the mobile terminal can establish the correspondence between the pre-action and the control instruction, and store the pre-action together with its corresponding control instruction.
S208. The wearable device detects the current state information of the user.
The wearable device may detect current state information of the user, wherein the current state information may include acceleration data and/or angle data. Optionally, the wearable device may receive an action detection instruction submitted by the user, and detect the current state information of the user in response to the action detection instruction.
S209. The wearable device sends the current state information to the mobile terminal.
S210. The mobile terminal analyzes and processes the current state information to acquire the current action of the user.
The mobile terminal can analyze and process the current state information to obtain at least one characteristic value, and then obtain the current action of the user according to the at least one characteristic value.
Optionally, the mobile terminal may analyze and process the current state information to obtain at least one characteristic value and send it to the cloud server; the cloud server then takes the at least one characteristic value as input to the preset neural network algorithm to obtain the current action of the user and sends that action back to the mobile terminal.
S211. The mobile terminal compares the current action with the pre-action, and when the current action matches the pre-action, sends the control instruction to the operation terminal corresponding to the control instruction.
The mobile terminal can compare the current action with the pre-action and, when they match, send the control instruction to the operation terminal corresponding to the control instruction. For example, when the current action matches the pre-action, the mobile terminal may broadcast the control instruction; an operation terminal receiving the control instruction compares it with its locally stored control instructions. If the received instruction is the same as a locally stored one, the operation terminal can recognize it; if not, the operation terminal cannot recognize the instruction and may delete it. As another example, when the current action matches the pre-action, the mobile terminal may look up the terminal identification information corresponding to the pre-action according to the stored correspondence between the control instruction, the pre-action, and the terminal identification information, and send the control instruction to the operation terminal identified by that information.
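A minimal sketch of this dispatch step follows (not part of the patent text; the table contents, identifiers, and the send_to_terminal wrapper are hypothetical):

```python
# Hypothetical correspondence table: the patent only states that the mobile
# terminal stores the correspondence between control instruction, pre-action,
# and terminal identification information.
DISPATCH_TABLE = {
    "raise_arm": {"instruction": "LIGHT_ON", "terminal_id": "lamp-01"},
}

def send_to_terminal(terminal_id, instruction):
    """Hypothetical wrapper around the WIFI connection to an operation terminal."""
    print(f"-> {terminal_id}: {instruction}")

def dispatch(current_action):
    # Forward the stored control instruction only when the current action
    # matches a stored pre-action.
    entry = DISPATCH_TABLE.get(current_action)
    if entry is not None:
        send_to_terminal(entry["terminal_id"], entry["instruction"])

dispatch("raise_arm")  # -> lamp-01: LIGHT_ON
```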
In an optional embodiment, the user performs a plurality of actions when inputting a control instruction to the operation terminal; after receiving the control instruction, the operation terminal may send its terminal identification information to the mobile terminal, so that the mobile terminal stores the control instruction, the pre-action, and the terminal identification information corresponding to the pre-action. As another example, after receiving the control instruction, the operation terminal may send the control instruction itself to the mobile terminal, so that the mobile terminal stores the pre-action and the control instruction corresponding to it; in this case the control instruction is specific to that operation terminal and cannot be recognized by other terminals.
S212. The operation terminal executes the control instruction.
In the terminal control method shown in Fig. 2, the wearable device sends the state information detected when the user inputs a control instruction to the mobile terminal; the mobile terminal sends at least one characteristic value obtained by analyzing and processing that state information to the cloud server; the cloud server takes the at least one characteristic value as input to a preset neural network algorithm to obtain the plurality of actions performed by the user when inputting the control instruction, determines a pre-action among them, and sends the pre-action to the mobile terminal. The wearable device then detects the user's current state information and sends it to the mobile terminal; the mobile terminal analyzes and processes the current state information to obtain the user's current action, compares it with the pre-action, and sends the control instruction to the operation terminal when they match; the operation terminal executes the control instruction. This simplifies the user's operation steps and makes it convenient to control the operation terminal.
Referring to Fig. 3, Fig. 3 is a schematic flowchart of a terminal control method in another embodiment of the present invention. The terminal control method in the embodiment of the present invention includes:
S301. A pre-action is determined among a plurality of actions performed when a user inputs a control instruction.
The mobile terminal may determine the pre-action among a plurality of actions performed when the user inputs the control instruction. The pre-action may be the action detected first among those actions. For example, suppose the control instruction is used to control the mobile terminal to turn on a light, and the actions the user performs while inputting it are: extending the arm in a diagonally upward gesture, then extending the index finger and pressing it forward. The mobile terminal can then determine that the pre-action is: extending the arm in a diagonally upward gesture.
In an alternative embodiment, the mobile terminal may receive an action detection instruction submitted by the user when inputting the control instruction, acquire, among the plurality of actions, the action detected first in response to the action detection instruction, and determine that first detected action as the pre-action. Taking the schematic diagram of determining the pre-action shown in Fig. 4A as an example: after the user submits the action detection instruction when inputting the control instruction, the mobile terminal, having received the action detection instruction, may collect the plurality of actions the user performs while inputting the control instruction, acquire from the collected actions the one detected first in response to the action detection instruction, and determine it as the pre-action. Specifically, the user may submit the action detection instruction by clicking a physical key or a virtual key of the mobile terminal; optionally, the user may input the action detection instruction through the microphone of the mobile terminal; optionally, the user may also submit it by "shaking" the mobile terminal or in other ways.
In an optional embodiment, the mobile terminal may acquire the plurality of actions while the user repeatedly inputs the control instruction, compare the actions detected during each input, acquire the actions that are the same in every input, and determine the action detected first among those same actions as the pre-action. Taking the schematic diagram of determining the pre-action shown in Fig. 4B as an example: the user may repeatedly perform multiple actions while inputting the same control instruction; the mobile terminal acquires the actions from each repetition, compares them, finds the actions common to every input, and determines the action detected first among them as the pre-action. Illustratively, represent the actions performed by the user when inputting the control instruction as binary codes. Suppose the actions the mobile terminal detects during the first input of the control instruction are: 01101011010; during the second input: 00111011010; and during the third input: 10101011010. The mobile terminal compares the detected actions: the fifth and subsequent actions of the first input match those of the second input, the third and subsequent actions of the first input match those of the third input, and the fifth and subsequent actions of the second input match those of the third input. The actions common to every input of the control instruction are therefore: 1011010, and the mobile terminal determines the action detected first among them, i.e. 1, as the pre-action.
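The comparison in this example amounts to finding the trailing run of actions common to every recording and taking its first element, which the following sketch reproduces (assuming, as in the example, that each recording is a binary-coded action sequence):

```python
def longest_common_suffix(recordings):
    """Return the trailing run of actions shared by every recording."""
    n = min(len(r) for r in recordings)
    k = 0
    while k < n and len({r[len(r) - 1 - k] for r in recordings}) == 1:
        k += 1
    return recordings[0][len(recordings[0]) - k:]

# The three binary-coded recordings from the example above:
recordings = ["01101011010", "00111011010", "10101011010"]
common = longest_common_suffix(recordings)
print(common)     # "1011010", the actions shared by all three inputs
print(common[0])  # "1", the action detected first among them: the pre-action
```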
In an alternative embodiment, after determining the pre-action among the plurality of actions performed when the user inputs the control instruction, the mobile terminal may establish the correspondence between the pre-action and the control instruction.
S302. The current action of the user is detected.
The mobile terminal may detect a current action of the user. In specific implementation, the mobile terminal may detect the current action of the user through the acceleration collector and/or the angle collector. For example, the mobile terminal may receive an action detection instruction submitted by a user, and then detect a current action of the user according to the action detection instruction. Optionally, after receiving the acceleration data and the angle data, the mobile terminal may perform normalization processing on the acceleration data and the angle data, obtain a first feature value according to the angle data obtained through the normalization processing, obtain a second feature value according to the acceleration data obtained through the normalization processing, and use the first feature value and the second feature value as input of a preset neural network algorithm to obtain a current action of the user.
S303. The current action is compared with the pre-action.
After detecting the current action of the user, the mobile terminal may compare the current action with the pre-action. When the current action matches the pre-action, step S304 is performed; when it does not, the mobile terminal may delete the detected current action. In the embodiment of the invention, the pre-actions performed when the user inputs the same control instruction are similar and stable, so by comparing the current action with the pre-action the mobile terminal can predict the action the user is about to perform and execute the control instruction corresponding to the pre-action, which simplifies the user's operation steps and improves the user experience.
In a specific implementation, the mobile terminal can compute the similarity between the current action and the pre-action and judge whether it is greater than a preset threshold. When the similarity is greater than the preset threshold, the mobile terminal determines that the current action matches the pre-action; when the similarity is less than or equal to the preset threshold, the mobile terminal determines that the current action does not match the pre-action. The preset threshold may be a preset proportional threshold, such as 85% or 90%.
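A minimal sketch of this matching step follows (the patent does not fix a similarity formula, so cosine similarity over feature vectors is an assumed choice; the 85% threshold comes from the text):

```python
import numpy as np

MATCH_THRESHOLD = 0.85  # preset proportional threshold, e.g. 85%

def similarity(current, pre_action):
    """Cosine similarity between two feature vectors (one plausible choice)."""
    a = np.asarray(current, dtype=float)
    b = np.asarray(pre_action, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches(current, pre_action):
    # Strictly greater than the preset threshold counts as a match;
    # less than or equal counts as a mismatch.
    return similarity(current, pre_action) > MATCH_THRESHOLD
```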
S304. The control instruction is executed when the current action matches the pre-action.
When the current action matches the pre-action, the mobile terminal may execute the control instruction. For example, when the pre-action is the pre-action for controlling the mobile terminal to power on, the mobile terminal may enter the power-on state.
In the embodiment of the invention, a pre-action is determined among a plurality of actions performed when the user inputs a control instruction, the current action of the user is detected and compared with the pre-action, and the control instruction is executed when the current action matches the pre-action. This simplifies the user's operation steps and makes it convenient to control the terminal.
Referring to Fig. 5, Fig. 5 is a schematic structural diagram of a terminal control device provided in an embodiment of the present invention. The terminal control device in the embodiment of the present invention may be a tablet computer, a mobile phone, a personal computer, a notebook computer, a vehicle-mounted device, a network television, or another terminal. As shown in the drawing, the terminal control device may at least include a pre-action determining module 501, a current action detection module 502, a comparison module 503, and a control instruction execution module 504, wherein:
The pre-action determining module 501 is configured to determine a pre-action from a plurality of actions performed when a user inputs a control instruction.
A current action detection module 502, configured to detect a current action of the user.
A comparison module 503, configured to compare the current action with the pre-action.
A control instruction execution module 504, configured to execute the control instruction when the current action matches the pre-action.
In an alternative embodiment, the pre-action determining module 501 is specifically configured to:
receive an action detection instruction submitted by the user when inputting the control instruction;
acquire, among the plurality of actions, the action detected first in response to the action detection instruction; and
determine the first detected action as the pre-action.
In an alternative embodiment, the pre-action determining module 501 is specifically configured to:
acquire a plurality of actions while the user repeatedly inputs the control instruction;
compare the actions detected during each input of the control instruction, and acquire the actions that are the same in every input; and
determine the action detected first among the same actions as the pre-action.
In an optional embodiment, the terminal control apparatus in the embodiment of the present invention may further include:
The state information obtaining module 505 is configured to obtain the state information of the user when inputting the control instruction, before the pre-action determining module 501 determines the pre-action among the plurality of actions performed when the user inputs the control instruction, where the state information comprises acceleration data and/or angle data.
The characteristic value obtaining module 506 is configured to analyze the state information and obtain at least one characteristic value.
The action obtaining module 507 is configured to take the at least one characteristic value as input to a preset neural network algorithm to obtain the plurality of actions performed by the user when inputting the control instruction.
Further optionally, the state information obtaining module 505 is specifically configured to:
receive the state information detected by the wearable device when the user inputs the control instruction.
In an alternative embodiment, the current action detection module 502 is specifically configured to:
receive the current state information of the user detected by the wearable device; and
analyze and process the current state information to acquire the current action of the user.
In an optional embodiment, the terminal control apparatus in the embodiment of the present invention may further include:
A control instruction sending module 508, configured to send the control instruction to the operation terminal corresponding to the control instruction when the current action matches the pre-action, so that the operation terminal executes the control instruction.
In the embodiment of the present invention, the pre-action determining module 501 determines a pre-action among a plurality of actions performed when the user inputs a control instruction, the current action detection module 502 detects the current action of the user, the comparison module 503 compares the current action with the pre-action, and when the current action matches the pre-action, the control instruction execution module 504 executes the control instruction. This simplifies the user's operation steps and makes it convenient to control the terminal.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium; when the program is executed, it can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure describes only preferred embodiments of the present invention and certainly cannot be taken to limit the scope of the invention; equivalent variations made according to the appended claims therefore still fall within the scope of the present invention.

Claims (16)

1. A terminal control method, comprising:
determining a pre-action among a plurality of actions performed when a user inputs a control instruction, wherein the pre-action is the action detected first among the plurality of actions performed when the user inputs the control instruction;
detecting a current action of the user;
comparing the current action with the pre-action; and
executing the control instruction when the current action matches the pre-action.
2. The method of claim 1, wherein the determining the pre-action among the plurality of actions performed when the user inputs the control instruction comprises:
receiving an action detection instruction submitted by the user when inputting the control instruction;
acquiring, among the plurality of actions, the action detected first in response to the action detection instruction; and
determining the first detected action as the pre-action.
3. The method of claim 1, wherein the determining the pre-action among the plurality of actions performed when the user inputs the control instruction comprises:
acquiring a plurality of actions while the user repeatedly inputs the control instruction;
comparing the actions detected during each input of the control instruction, and acquiring the actions that are the same each time the user inputs the control instruction; and
determining the action detected first among the same actions as the pre-action.
4. The method according to any one of claims 1 to 3, wherein before determining the pre-action among the plurality of actions performed when the user inputs the control instruction, the method further comprises:
acquiring state information of the user when inputting the control instruction, wherein the state information comprises acceleration data and/or angle data;
analyzing the state information to obtain at least one characteristic value; and
taking the at least one characteristic value as input to a preset neural network algorithm to obtain the plurality of actions performed by the user when inputting the control instruction.
5. The method of claim 4, wherein the acquiring the state information of the user when inputting the control instruction comprises:
receiving the state information detected by the wearable device when the user inputs the control instruction.
6. The method of claim 1, wherein the detecting the current action of the user comprises:
receiving current state information of the user detected by the wearable device; and
analyzing and processing the current state information to acquire the current action of the user.
7. The method of claim 1, wherein after comparing the current action with the pre-action, the method further comprises:
sending, when the current action matches the pre-action, the control instruction to an operation terminal corresponding to the control instruction, so that the operation terminal executes the control instruction.
8. A terminal control apparatus, comprising:
a pre-action determining module, configured to determine a pre-action among a plurality of actions performed when a user inputs a control instruction, wherein the pre-action is the action detected first among the plurality of actions performed when the user inputs the control instruction;
a current action detection module, configured to detect a current action of the user;
a comparison module, configured to compare the current action with the pre-action; and
a control instruction execution module, configured to execute the control instruction when the current action matches the pre-action.
9. The apparatus of claim 8, wherein the pre-action determining module is specifically configured to:
receive an action detection instruction submitted by the user when inputting the control instruction;
acquire, among the plurality of actions, the action detected first in response to the action detection instruction; and
determine the first detected action as the pre-action.
10. The apparatus of claim 8, wherein the pre-action determining module is specifically configured to:
acquire a plurality of actions while the user repeatedly inputs the control instruction;
compare the actions detected during each input of the control instruction, and acquire the actions that are the same each time the user inputs the control instruction; and
determine the action detected first among the same actions as the pre-action.
11. The apparatus according to any one of claims 8 to 10, further comprising:
a state information obtaining module, configured to obtain state information of the user when inputting the control instruction before the pre-action determining module determines the pre-action among the plurality of actions performed when the user inputs the control instruction, wherein the state information comprises acceleration data and/or angle data;
a characteristic value acquisition module, configured to analyze and process the state information to acquire at least one characteristic value; and
an action acquisition module, configured to take the at least one characteristic value as input to a preset neural network algorithm to obtain the plurality of actions performed by the user when inputting the control instruction.
12. The apparatus of claim 11, wherein the state information obtaining module is specifically configured to:
receive the state information detected by the wearable device when the user inputs the control instruction.
13. The apparatus of claim 8, wherein the current action detection module is specifically configured to:
receive current state information of the user detected by the wearable device; and
analyze and process the current state information to acquire the current action of the user.
14. The apparatus of claim 8, further comprising:
a control instruction sending module, configured to send the control instruction to an operation terminal corresponding to the control instruction when the current action matches the pre-action, so that the operation terminal executes the control instruction.
15. A terminal control system, comprising a mobile terminal and a wearable device, wherein:
The wearable device is used for detecting state information when a user inputs a control instruction and sending the detected state information to the mobile terminal, wherein the state information comprises acceleration data and/or angle data;
The mobile terminal is configured to analyze the state information to acquire at least one characteristic value, take the at least one characteristic value as input to a preset neural network algorithm to obtain a plurality of actions performed by the user when inputting the control instruction, and determine a pre-action among the plurality of actions, wherein the pre-action is the action detected first among the plurality of actions performed by the user when inputting the control instruction;
The wearable device is also used for detecting the current state information of the user and sending the detected current state information to the mobile terminal;
The mobile terminal is further configured to analyze the current state information, acquire the current action of the user, compare the current action with the pre-action, and execute the control instruction when the current action matches the pre-action.
16. The system of claim 15, further comprising an operation terminal, wherein after comparing the current action with the pre-action, the mobile terminal is further configured to:
send the control instruction to the operation terminal corresponding to the control instruction when the current action matches the pre-action; and
the operation terminal is configured to execute the control instruction.
CN201510730602.7A 2015-11-02 2015-11-02 Terminal control method and device Active CN106648040B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510730602.7A CN106648040B (en) 2015-11-02 2015-11-02 Terminal control method and device

Publications (2)

Publication Number Publication Date
CN106648040A CN106648040A (en) 2017-05-10
CN106648040B true CN106648040B (en) 2019-12-13

Family

ID=58809506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510730602.7A Active CN106648040B (en) 2015-11-02 2015-11-02 Terminal control method and device

Country Status (1)

Country Link
CN (1) CN106648040B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108762489B (en) * 2018-05-07 2021-09-21 武汉灏存科技有限公司 Control method based on data glove, system and storage medium
CN108717271A (en) * 2018-05-30 2018-10-30 辽东学院 Man-machine interaction control method, device, system and readable storage medium storing program for executing
CN109766797A (en) * 2018-12-27 2019-05-17 秒针信息技术有限公司 The detection method and device of the access entitlements of scene

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295028A (en) * 2013-05-21 2013-09-11 深圳Tcl新技术有限公司 Gesture operation control method, gesture operation control device and intelligent display terminal
CN103902036A (en) * 2012-12-29 2014-07-02 鸿富锦精密工业(深圳)有限公司 Electronic device and a method for controlling electronic device through gestures
CN104407702A (en) * 2014-11-26 2015-03-11 三星电子(中国)研发中心 Method, device and system for performing actions based on context awareness

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104898473A (en) * 2015-04-01 2015-09-09 小米科技有限责任公司 Method of handling terminal equipment and device

Also Published As

Publication number Publication date
CN106648040A (en) 2017-05-10

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant