CN110573999A - Terminal device control method, terminal device, and computer-readable medium - Google Patents
- Publication number: CN110573999A (application number CN201780090095.2A)
- Authority: CN (China)
- Prior art keywords: terminal device, gesture, state, three-dimensional
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
Abstract
A control method for a terminal device, the terminal device, and a computer-readable storage medium. The method comprises: detecting a three-dimensional gesture, where the three-dimensional gesture is related to a motion track of the terminal device in three-dimensional space (101); detecting a state of the terminal device (102); and controlling the terminal device to execute a corresponding operation according to the detected three-dimensional gesture and the state of the terminal device (103). Control operations can thus be performed quickly, with high operation efficiency.
Description
The present invention relates to the field of electronic technologies, and in particular to a control method for a terminal device, the terminal device, and a computer-readable medium.
With the rapid development of network technology and multimedia technology, terminal devices such as mobile phones are widely used, and they offer more and more functions, such as photographing, a flashlight, screen capture, and video recording.
Currently, people generally start a function of a terminal device through the application icon corresponding to that function. This requires first lighting up the screen, then unlocking it, and finally finding and tapping the icon. For example, to take a picture with the terminal device, a user must in sequence light up the screen, unlock the screen, find the camera application, and start it. This process is cumbersome and time-consuming. Alternatively, some functions can be triggered by pressing two keys on the terminal device simultaneously: for example, pressing and holding the volume-down key and the power key together for several seconds performs a screen capture. Because the two keys must be pressed at the same time, the user often has to repeat the attempt several times before the screen capture succeeds. The probability of operation failure is therefore high, and the operation still takes a long time.
Both existing approaches thus suffer from complex operation and long time consumption.
Disclosure of Invention
Embodiments of the present invention provide a control method for a terminal device, the terminal device, and a computer-readable medium, which are simple to operate, highly accurate, and able to meet users' personalized requirements.
In a first aspect, an embodiment of the present invention provides a method for controlling a terminal device, where the method includes:
detecting a three-dimensional gesture, where the three-dimensional gesture is related to a motion track of the terminal device in three-dimensional space;
detecting a state of the terminal device;
and controlling the terminal device to execute a corresponding operation according to the detected three-dimensional gesture and the state of the terminal device.
A three-dimensional gesture is an operation of moving the terminal device, or of changing its posture, while holding it in the hand. The terminal device can detect the three-dimensional gesture using a gravity sensor, a displacement sensor, a gyroscope, an attitude sensor, and the like. The terminal device can be configured with a correspondence between three-dimensional gestures and operations to be executed; the operation the terminal device needs to execute is then determined from this correspondence and the detected gesture.
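The correspondence described above amounts to a small lookup table. A minimal Python sketch, in which every gesture and operation name is an illustrative assumption (the patent names no specific identifiers):

```python
# Hypothetical gesture-to-operation correspondence configured on the device.
# All gesture and operation names here are illustrative assumptions.
GESTURE_OPERATIONS = {
    "draw_O": "launch_camera",
    "draw_C": "launch_flashlight",
    "shake_left_right": "capture_screen",
}

def operation_for(gesture):
    """Return the operation configured for a detected gesture, or None."""
    return GESTURE_OPERATIONS.get(gesture)
```

A user-defined gesture simply adds one more entry to the table.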
Optionally, before the terminal device is controlled to execute the corresponding operation, a prompt interface is displayed whose prompt information asks whether to execute the operation; if no instruction refusing the operation is detected within a preset time, the operation is executed. The prompt interface may include a confirmation option and a rejection option: the terminal device executes the operation after detecting a tap on the confirmation option, and refuses to execute it after detecting a tap on the rejection option.
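The timeout behavior of the prompt interface can be sketched as follows; `rejected` stands in for polling the prompt's rejection option, and the 3-second default is an illustrative assumption (the patent only says "a preset time"):

```python
import time

def confirm_or_execute(operation, rejected, timeout_s=3.0):
    """Execute `operation` unless the user rejects it within `timeout_s`.

    `rejected` stands in for polling the prompt's rejection option; the
    3-second default is an illustrative assumption.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if rejected():  # user tapped the rejection option
            return "rejected"
        time.sleep(0.01)  # poll the prompt interface
    # no rejection instruction detected within the preset time: execute
    return operation()
```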
In this embodiment of the invention, the terminal device determines the operation to execute according to the detected three-dimensional gesture and its own state; control operations can thus be performed quickly, with high operation efficiency.
In an optional implementation manner, the state of the terminal device includes a first application scenario or a second application scenario;
the controlling the terminal device to execute corresponding operations according to the detected three-dimensional gesture and state of the terminal device comprises:
controlling the terminal device to execute a first operation according to the detected three-dimensional gesture and the first application scenario;
or
controlling the terminal device to execute a second operation according to the detected three-dimensional gesture and the second application scenario;
the first operation being different from the second operation.
The state of the terminal device may be an operating state of the device, such as a screen-off state, a screen-on state, a video playing state, a game running state, or a music playing state; it may also be the state of the environment the device is in, for example the ambient illuminance being below a certain level or the ambient noise being below a certain level; it may also be the time period in which the current time falls. The video playing state is a state in which a video is being played, the game running state one in which a game is running, and the music playing state one in which music is being played. The first application scenario and the second application scenario are different application scenarios. For example, the first application scenario may be the screen-off state and the second the screen-on state. As another example, the first application scenario may be one in which the ambient illuminance is below a certain level, and the second one in which it is not below that level. It can be understood that the same three-dimensional gesture can thus trigger different operations in different application scenarios.
In this embodiment of the invention, the terminal device determines the operation corresponding to the detected three-dimensional gesture according to its own state, which is simple to implement.
In an optional implementation manner, the state of the terminal device includes a first application scenario or a second application scenario;
the controlling the terminal device to execute corresponding operations according to the detected three-dimensional gesture and state of the terminal device comprises:
controlling the terminal device to execute a first operation according to the detected three-dimensional gesture and the first application scenario;
or
controlling the terminal device to execute a second operation according to the detected three-dimensional gesture and the second application scenario;
the first operation being the same as the second operation.
The first application scenario and the second application scenario correspond to the same operation. In this embodiment of the invention, the terminal device executes the same operation upon detecting the same three-dimensional gesture in different application scenarios. For example, in several scenarios, including a video call, playing a mobile game, and watching a video, the terminal device performs a screen capture after detecting the gesture of shaking the phone left and right. That is, the terminal device can perform the screen capture operation in multiple application scenarios: whichever of two or more scenarios the user completes the gesture in, the terminal device performs the same operation.
In this embodiment of the invention, the terminal device executes the same operation after detecting the three-dimensional gesture in multiple application scenarios, which can meet users' need to trigger the operation corresponding to the gesture in different scenarios and improves user experience.
In an optional implementation manner, before controlling the terminal device to execute a corresponding operation according to the detected three-dimensional gesture and state of the terminal device, the method further includes:
the terminal device detecting and storing a correspondence, set by the user, between a three-dimensional gesture and an operation to be executed by the terminal device.
In this embodiment of the invention, the user can set the correspondence between a three-dimensional gesture and the operation the terminal device executes; the operation is simple and can meet users' personalized requirements.
In an optional implementation manner, the state of the terminal device includes being in a first time period or a second time period;
the control of the terminal equipment to execute corresponding operations comprises;
under the condition that the terminal equipment is in the first time period, controlling the terminal equipment to execute a first operation;
or
Under the condition that the terminal equipment is in the second time period, controlling the terminal equipment to execute a second operation;
the first operation is different from the second operation.
The first and second time periods may be non-overlapping. For example, the first time period may be 8:00 to 10:00 and the second 11:00 to 18:00. A three-dimensional gesture can correspond to two or more operations, and the terminal device executes a different one depending on the time period in which the gesture is detected.
In this embodiment of the invention, when a three-dimensional gesture corresponds to at least two operations, the terminal device determines which operation to execute according to the time period; several different functions can thus be realized through one three-dimensional gesture, and the operation is simple.
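The time-period dispatch can be sketched as below, using the example periods from the text (8:00 to 10:00 and 11:00 to 18:00); the operation names are illustrative placeholders:

```python
from datetime import time

# Illustrative periods from the example in the text.
# Operation names are placeholder assumptions.
PERIOD_OPERATIONS = [
    (time(8, 0), time(10, 0), "first_operation"),
    (time(11, 0), time(18, 0), "second_operation"),
]

def operation_for_time(now):
    """Pick the operation for one gesture based on the current time period."""
    for start, end, op in PERIOD_OPERATIONS:
        if start <= now <= end:
            return op
    return None  # gesture detected outside any configured period
```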
In an optional implementation manner, the controlling the terminal device to execute a corresponding operation according to the detected three-dimensional gesture and state of the terminal device comprises:
obtaining a target character corresponding to the motion track of the terminal device;
obtaining the operation corresponding to the target character according to the state of the terminal device;
and controlling the terminal device to execute the operation corresponding to the target character.
The target character may be any of various characters, such as "C", "O", "L", or "+". The operation corresponding to the target character may be capturing the interface displayed by the terminal device; adjusting its screen brightness; adjusting its volume; playing, from the device's music playlist, a song whose title or whose artist's name begins with the target character; or starting or closing a target application on the terminal device. Other operations are also possible; embodiments of the present invention are not limited in this respect.
In this embodiment of the invention, the terminal device determines the operation to execute according to its own motion track; the operation is simple and accurate, and various operations can be performed quickly.
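Under stated assumptions (the characters, state labels, and operations below are all illustrative, not fixed by the patent), the character-plus-state lookup might look like:

```python
# Hypothetical mapping from (target character, device state) to an operation.
# Every key and value here is an illustrative assumption.
CHAR_STATE_OPERATIONS = {
    ("C", "screen_on"): "capture_screen",
    ("C", "music_playing"): "play_song_starting_with_C",
    ("O", "screen_off"): "launch_camera",
}

def operation_for_character(char, state):
    """Return the operation for a recognized character in the current state."""
    return CHAR_STATE_OPERATIONS.get((char, state))
```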
In an optional implementation manner, the three-dimensional gesture is an action in which the amplitude of shaking the terminal device exceeds a first threshold and the frequency of shaking exceeds a second threshold, or an action of turning the terminal device over; the controlling the terminal device to execute a corresponding operation comprises:
capturing the interface displayed by the terminal device;
or adjusting the screen brightness of the terminal device;
or starting or closing a target application on the terminal device.
It can be understood that the three-dimensional gesture above is an action of shaking the terminal device. In actual use the device is often shaken incidentally; constraining the gesture with the first and second thresholds filters out such accidental shaking and avoids false triggers. The action of turning the terminal device over may be an action of turning it through an angle exceeding a target angle.
In this embodiment of the invention, a function can be realized quickly by turning over or shaking the terminal device, and the operation is simple.
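The two-threshold shake test can be sketched as follows; the threshold values and the peak-based amplitude and frequency estimates are assumptions, since the patent fixes no concrete numbers:

```python
def is_shake_gesture(amplitudes, duration_s, amp_threshold=2.0, freq_threshold=3.0):
    """Classify a motion as a deliberate shake gesture.

    `amplitudes` are per-oscillation peak magnitudes over `duration_s`
    seconds; the thresholds are illustrative assumptions. A shake counts
    only if the peak amplitude exceeds amp_threshold AND the oscillation
    frequency (peaks per second) exceeds freq_threshold, filtering out
    incidental handling of the device.
    """
    if not amplitudes or duration_s <= 0:
        return False
    frequency = len(amplitudes) / duration_s  # oscillations per second
    return max(amplitudes) > amp_threshold and frequency > freq_threshold
```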
In an optional implementation manner, the controlling the terminal device to execute a corresponding operation according to the detected three-dimensional gesture and state of the terminal device comprises:
controlling the terminal device to execute a screen capture operation when the terminal device detects a first three-dimensional gesture and is in a screen-on state;
or controlling the terminal device to execute a volume adjustment operation when the terminal device detects a second three-dimensional gesture and is in a music playing state;
or controlling the terminal device to execute a brightness adjustment operation when the terminal device detects a third three-dimensional gesture and is in a video playing state;
or controlling the terminal device to start the flashlight function when the terminal device detects a fourth three-dimensional gesture and the ambient illuminance is less than a first illuminance;
or controlling the terminal device to start the photographing function when the terminal device detects a fifth three-dimensional gesture and the ambient illuminance is greater than a second illuminance.
In this embodiment of the invention, the terminal device determines the operation to execute according to the detected three-dimensional gesture and its own state; control operations can thus be performed quickly, with high operation efficiency.
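The five example rules above amount to a dispatch on gesture plus state. A sketch, with gesture labels, state flags, and illuminance thresholds as illustrative assumptions:

```python
FIRST_LUX, SECOND_LUX = 10.0, 50.0  # illustrative illuminance thresholds

def dispatch(gesture, screen_on=False, playing="none", ambient_lux=None):
    """Map a detected three-dimensional gesture plus device state to an operation.

    Mirrors the five example rules in the text; all names and thresholds
    are illustrative assumptions, not values from the patent.
    """
    if gesture == "gesture_1" and screen_on:
        return "capture_screen"
    if gesture == "gesture_2" and playing == "music":
        return "adjust_volume"
    if gesture == "gesture_3" and playing == "video":
        return "adjust_brightness"
    if gesture == "gesture_4" and ambient_lux is not None and ambient_lux < FIRST_LUX:
        return "turn_on_flashlight"
    if gesture == "gesture_5" and ambient_lux is not None and ambient_lux > SECOND_LUX:
        return "open_camera"
    return None  # no rule matched: do nothing
```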
In an optional implementation manner, the first operation is starting a first application and the second operation is starting a second application, the first state being a screen-on state and the second a screen-off state;
or the first operation is a screen capture operation and the second a brightness adjustment operation, the first state being display of a game interface and the second display of a video interface.
In this embodiment of the invention, with the terminal device in different states, the same three-dimensional gesture causes the terminal device to execute different operations; different functions can thus be realized through one three-dimensional gesture, improving operation efficiency.
In an optional implementation manner, before controlling the terminal device to execute a corresponding operation according to the detected three-dimensional gesture and state of the terminal device, the method further includes:
collecting training data after receiving a three-dimensional gesture collection instruction, the training data being N pieces of action data corresponding to N reference three-dimensional gestures, the N reference three-dimensional gestures corresponding to the three-dimensional gesture;
training on the training data with a neural network algorithm to obtain a recognition model corresponding to the three-dimensional gesture;
and receiving a setting instruction and setting the operation corresponding to the three-dimensional gesture according to the setting instruction;
the detecting a three-dimensional gesture comprising:
determining the three-dimensional gesture according to the recognition model.
Optionally, the terminal device may instead train on the training data with a deep learning algorithm, a machine learning algorithm, or the like to obtain the recognition model corresponding to the three-dimensional gesture.
In this embodiment of the invention, the terminal device collects training data and determines the three-dimensional gesture and the recognition model corresponding to it; on one hand, the recognition model corresponding to the gesture can be established quickly, and on the other, the user can quickly set the gesture and its corresponding operation, with simple operation.
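The patent trains a neural network on the N collected samples; as a self-contained stand-in, the sketch below builds a nearest-centroid model instead, which illustrates the same collect-train-recognize flow without a learning framework:

```python
def train_model(samples):
    """Build a toy recognition model from motion-data samples per gesture.

    Stand-in for the patent's neural network: stores a per-gesture mean
    trace (nearest-centroid). `samples` maps gesture name -> list of
    equal-length feature vectors; names and shapes are illustrative.
    """
    model = {}
    for gesture, traces in samples.items():
        n = len(traces)
        model[gesture] = [sum(vals) / n for vals in zip(*traces)]
    return model

def recognize(model, trace):
    """Return the gesture whose centroid is closest to `trace` (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda g: dist(model[g], trace))
```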
In an optional implementation manner, after the controlling the terminal device to execute the corresponding operation, the method further includes:
updating the recognition model using the action data corresponding to the three-dimensional gesture.
The action data corresponding to the three-dimensional gesture can be combined with the existing training data to obtain new training data; the terminal device then trains on the new training data to obtain a new, i.e. updated, recognition model.
In this embodiment of the invention, the terminal device updates the recognition model with the action data corresponding to the three-dimensional gesture; the model can thus be optimized, and the probability that it correctly recognizes the gesture improves.
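Folding newly confirmed action data into the model can be sketched as an incremental mean update; this replaces the patent's full retraining purely for illustration, and all names and shapes are assumptions:

```python
def update_model(centroids, counts, gesture, new_trace):
    """Fold one newly recognized trace into a per-gesture mean model.

    Incremental-mean stand-in for retraining on old + new data:
    `centroids` maps gesture -> mean trace, `counts` maps gesture ->
    number of samples seen so far. Illustrative shapes and names.
    """
    n = counts[gesture]
    centroids[gesture] = [
        (c * n + x) / (n + 1) for c, x in zip(centroids[gesture], new_trace)
    ]
    counts[gesture] = n + 1
    return centroids, counts
```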
In a second aspect, an embodiment of the present invention provides a terminal device, including:
a first detection unit, configured to detect a three-dimensional gesture, the three-dimensional gesture being related to a motion track of the terminal device in three-dimensional space;
a second detection unit, configured to detect a state of the terminal device;
and a control unit, configured to control the terminal device to execute a corresponding operation according to the detected three-dimensional gesture and state of the terminal device.
In this embodiment of the invention, the terminal device determines the operation to execute according to the detected three-dimensional gesture and its own state; control operations can thus be performed quickly, with high operation efficiency.
In an optional implementation manner, the state of the terminal device includes a first application scenario or a second application scenario;
the control unit is specifically configured to control the terminal device to execute a first operation according to the detected three-dimensional gesture and the first application scenario, or to execute a second operation according to the detected three-dimensional gesture and the second application scenario; the first operation is different from the second operation.
In this embodiment of the invention, the terminal device determines the operation corresponding to the detected three-dimensional gesture according to its own state; false triggering of the three-dimensional gesture can be reduced, and the scheme is simple to implement.
In an optional implementation manner, the state of the terminal device includes a first application scenario or a second application scenario;
the control unit is specifically configured to control the terminal device to execute a first operation according to the detected three-dimensional gesture and the first application scenario, or to execute a second operation according to the detected three-dimensional gesture and the second application scenario; the first operation is the same as the second operation.
In this embodiment of the invention, the terminal device executes the same operation after detecting the three-dimensional gesture in multiple application scenarios, which can meet users' need to trigger the operation corresponding to the gesture in different scenarios and improves user experience.
In an optional implementation manner, the first detection unit is further configured to detect a three-dimensional gesture set by the user; the terminal device further includes:
a storage unit, configured to store the correspondence between the three-dimensional gesture set by the user and the operation executed by the terminal device.
In this embodiment of the invention, the user can set the correspondence between a three-dimensional gesture and the operation the terminal device executes; the operation is simple and can meet users' personalized requirements.
In an optional implementation manner, the state of the terminal device includes being in a first time period or a second time period;
the control unit is specifically configured to control the terminal device to execute a first operation when the terminal device is in the first time period, or to execute a second operation when the terminal device is in the second time period; the first operation is different from the second operation.
In this embodiment of the invention, when a three-dimensional gesture corresponds to at least two operations, the terminal device determines which operation to execute according to the time period; several different functions can thus be realized through one three-dimensional gesture, and the operation is simple.
In an optional implementation manner, the first detection unit is specifically configured to obtain a target character corresponding to the motion track of the terminal device, and to obtain the operation corresponding to the target character according to the state of the terminal device;
the control unit is specifically configured to control the terminal device to execute the operation corresponding to the target character.
In this embodiment of the invention, the terminal device determines the operation to execute according to its own motion track; the operation is simple and accurate, and various operations can be performed quickly.
In an optional implementation manner, the three-dimensional gesture is an action in which the amplitude of shaking the terminal device exceeds a first threshold and the frequency of shaking exceeds a second threshold, or an action of turning the terminal device over;
the control unit is specifically configured to capture the interface displayed by the terminal device;
or is specifically configured to adjust the screen brightness of the terminal device;
or is specifically configured to start or close a target application on the terminal device.
In this embodiment of the invention, a function can be realized quickly by turning over or shaking the terminal device, and the operation is simple.
In an optional implementation manner, the control unit is specifically configured to control the terminal device to execute a screen capture operation when the terminal device detects a first three-dimensional gesture and is in a screen-on state;
or is specifically configured to control the terminal device to execute a volume adjustment operation when the terminal device detects a second three-dimensional gesture and is in a music playing state;
or is specifically configured to control the terminal device to execute a brightness adjustment operation when the terminal device detects a third three-dimensional gesture and is in a video playing state;
or is specifically configured to control the terminal device to start the flashlight function when the terminal device detects a fourth three-dimensional gesture and the ambient illuminance is less than a first illuminance;
or is specifically configured to control the terminal device to start the photographing function when the terminal device detects a fifth three-dimensional gesture and the ambient illuminance is greater than a second illuminance.
In this embodiment of the invention, the terminal device determines the operation to execute according to the detected three-dimensional gesture and its own state; control operations can thus be performed quickly, with high operation efficiency.
In an optional implementation manner, the first operation is starting a first application and the second operation is starting a second application, the first application scenario being a screen-on state and the second a screen-off state;
or the first operation is a screen capture operation and the second a brightness adjustment operation, the first application scenario being display of a game interface and the second display of a video interface.
In this embodiment of the invention, with the terminal device in different states, the same three-dimensional gesture causes the terminal device to execute different operations; different functions can thus be realized through one three-dimensional gesture, improving operation efficiency.
In an optional implementation manner, the terminal device further includes:
a receiving unit, configured to receive a three-dimensional gesture collection instruction;
an acquisition unit, configured to collect training data, the training data being N pieces of action data corresponding to N reference three-dimensional gestures, the N reference three-dimensional gestures corresponding to the three-dimensional gesture;
the receiving unit being further configured to receive a setting instruction and to set the operation corresponding to the three-dimensional gesture according to the setting instruction;
the first detection unit being specifically configured to determine the three-dimensional gesture according to the recognition model.
In this embodiment of the invention, the terminal device collects training data and determines the three-dimensional gesture and the recognition model corresponding to it; on one hand, the recognition model corresponding to the gesture can be established quickly, and on the other, the user can quickly set the gesture and its corresponding operation, with simple operation.
In an optional implementation manner, the terminal device further includes:
and the updating unit is used for updating the recognition model by utilizing the action data corresponding to the three-dimensional gesture.
In the embodiment of the invention, the terminal equipment updates the recognition model according to the action data corresponding to the three-dimensional gesture, so that the recognition model can be optimized, and the probability of correctly recognizing the three-dimensional gesture by the recognition model is improved.
In a third aspect, an embodiment of the present invention provides another terminal device, comprising a processor and a memory connected to each other, where the memory stores a computer program comprising program instructions, and the processor is configured to call the program instructions to execute the method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, in which a computer program is stored, the computer program comprising program instructions, which, when executed by a processor, cause the processor to perform the method of the first aspect.
FIG. 1 is a schematic flowchart of a control method of a terminal device according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a three-dimensional gesture of drawing "O" according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a three-dimensional gesture of drawing "C" according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a three-dimensional gesture of shaking a terminal device according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a three-dimensional gesture setting interface according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a three-dimensional gesture adding interface according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a human-computer interaction provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of setting a three-dimensional gesture of drawing "C" according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of setting a three-dimensional gesture of drawing "C" according to another embodiment of the present invention;
FIG. 10 is a schematic diagram of the operations assigned to the three-dimensional gesture of drawing "C" in different scenarios according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of the operations assigned to the three-dimensional gesture of drawing "C" in different time periods according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of an interface for recognition results according to an embodiment of the present invention;
FIG. 13 is a schematic flowchart of a screen capture method according to an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
FIG. 15 is a schematic structural diagram of a terminal device according to another embodiment of the present invention.
The terminology used in the description of the embodiments of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
The terminal device in this application may be a mobile phone, a tablet computer, a wearable device, a personal digital assistant, or the like. The terminal device can detect acceleration, angular velocity, motion direction, and the like through its sensors to obtain motion state data, i.e., motion data, and from that data determine its motion trajectory, posture changes, and so on. The motion state data may be position data of the terminal device at different time points. For example, Table 1 shows motion data collected by a terminal device for the stereoscopic gesture of drawing an O, where the 3 values in each row represent the spatial position of the terminal device at one time point, that is, a coordinate point in a rectangular spatial coordinate system. It can be understood that different stereoscopic gestures produce different motion data, and the various preset or user-set stereoscopic gestures can be recognized by analyzing the detected motion data.
TABLE 1
| X | Y | Z |
| --- | --- | --- |
| -0.11256 | 0.033091 | 0.002339 |
| -0.115 | -0.06707 | 0.009669 |
| -3.06304 | 0.009879 | 0.263789 |
| -2.57556 | 0.521784 | -0.06606 |
| -0.2164 | 0.012322 | -0.11615 |
| -0.00749 | -0.02431 | 0.012113 |
| -0.13333 | -0.01087 | -0.3605 |
| -0.05025 | -0.27721 | -1.5358 |
| 0.008378 | -0.23323 | -1.95608 |
| 0.101229 | 0.119834 | -0.45335 |
| 0.054803 | 0.0392 | -0.02452 |
| 0.051138 | 0.044087 | -0.05873 |
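The rows of Table 1 can be read as time-ordered (X, Y, Z) positions. Purely as an illustrative sketch (not part of the patent), the samples can be held as tuples and a simple property of the motion trajectory computed from them:

```python
# Illustrative only: a few of the (X, Y, Z) position samples from Table 1,
# each giving the terminal device's spatial position at one time point.
samples = [
    (-0.11256, 0.033091, 0.002339),
    (-0.115, -0.06707, 0.009669),
    (-3.06304, 0.009879, 0.263789),
    (-2.57556, 0.521784, -0.06606),
]

def trajectory_length(points):
    """Approximate length of the motion trajectory by summing the
    straight-line distances between consecutive samples."""
    total = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
    return total
```

Comparing such derived properties across gestures is one simple way to see that different stereoscopic gestures yield different motion data.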
The main inventive principle of this application may include: the user presets the stereoscopic gestures he or she needs; the terminal device establishes a recognition model for each stereoscopic gesture from the collected motion data; and, when a stereoscopic gesture is detected, the operation corresponding to that gesture is executed. In this way, a user can define exclusive stereoscopic gestures and assign an operation to each, meeting the needs of different users. That is to say, the user can set some stereoscopic gestures and quickly trigger the corresponding control operations by performing them, so the operation is simple. It can be understood that operating habits differ from user to user: when different users hold the terminal device and perform the same gesture action, the similarity of the motion data collected by the terminal device is low; when one user holds the terminal device and performs the same gesture action repeatedly, the similarity is high. A recognition model can therefore be established from the collected motion data and used to recognize stereoscopic gestures accurately.
Referring to fig. 1, fig. 1 is a schematic flowchart of a control method of a terminal device according to an embodiment of the present invention. As shown in fig. 1, the method includes:
101. detecting a three-dimensional gesture, wherein the three-dimensional gesture is related to a motion track of the terminal equipment in a three-dimensional space;
the three-dimensional gesture refers to an operation of moving the terminal device by holding the terminal device by hand or an operation of changing the posture of the terminal device. As shown in fig. 2, the user holds the terminal device to perform an action with a motion trajectory of O, i.e., draw an O-dimensional gesture. As shown in fig. 3, the user holds the terminal device and makes an action with a motion trajectory of C, that is, draws a C-dimensional gesture. As shown in fig. 4, the user shakes the terminal device, i.e., shakes the terminal device in a stereoscopic gesture. The terminal equipment can detect the acceleration, the angular velocity, the motion direction and the like of the terminal equipment through sensors such as a gravity sensor, a gyroscope and the like to obtain motion state data; the stereoscopic gesture can be determined according to the obtained motion state data. It can be understood that the terminal device can detect various preset or set stereoscopic gestures of the terminal device. Such as the drawing O stereoscopic gesture, the drawing C stereoscopic gesture, and the panning stereoscopic gesture of shaking the terminal device shown in fig. 2 to 4. Table 1 shows motion data of the O-frame gesture in the three-dimensional space, and the O-frame gesture can be recognized by the data terminal device in table 1.
The terminal device may be preset with a recognition model corresponding to the three-dimensional gesture, and the three-dimensional gesture may be recognized by using the recognition model. For example, the terminal device is preset with a first stereo gesture and a recognition model corresponding to the first stereo gesture, and the terminal device inputs the collected motion data corresponding to the first stereo gesture into the recognition model to obtain the first stereo gesture. For another example, the terminal device is preset with 5 stereo gestures and recognition models corresponding to the 5 stereo gestures, and the terminal device inputs the acquired motion data into the recognition models to obtain the stereo gestures corresponding to the motion data. The user can set the stereoscopic gesture required by the user and the operation corresponding to each stereoscopic gesture, and the specific setting process is described in detail in the following embodiments. The process of detecting the three-dimensional gesture by the terminal device in the embodiment of the present invention may include:
1) the terminal device collects motion data;
The terminal device collects motion data through a target sensor. The target sensor may be at least one of a gyroscope, a gravity sensor, and the like. For example, the terminal device collects 100 groups of motion data per second using a gyroscope; as shown in Table 1, each row represents one group of data.
2) Inputting the motion data into a recognition model;
specifically, the terminal device transmits the motion data to a processor, and the processor recognizes the label corresponding to the motion data through the recognition model. The recognition model is established by the terminal device from the collected training data. The following embodiments describe the method of establishing the recognition model in detail.
3) determining the label corresponding to the motion data by using the recognition model;
In the terminal device, each preset or user-set stereoscopic gesture corresponds to a different label. The terminal device may also be preset with some random (unordered) motion data, all of which correspond to one and the same label.
4) And determining the three-dimensional gesture corresponding to the label.
By performing the above steps, the terminal device can detect stereoscopic gestures quickly and accurately.
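Steps 2 to 4 amount to a label lookup through the model. A toy sketch (the model and label names here are illustrative stand-ins, not the patent's trained model):

```python
def detect_gesture(motion_data, model, label_to_gesture):
    """Steps 2-3: pass the collected motion data to the recognition model,
    which outputs a label; step 4: map the label to a stereoscopic gesture.
    Random (noise) motion shares one label that maps to no gesture."""
    label = model(motion_data)
    return label_to_gesture.get(label)  # None means no recognized gesture

# Stand-in "model" for illustration only: it classifies by sample count,
# whereas the patent's model is trained on the user's motion data.
def toy_model(motion_data):
    return "label-1" if len(motion_data) >= 50 else "noise"

label_to_gesture = {"label-1": "draw-O"}
```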
102. Detecting the state of the terminal equipment;
the state of the terminal device may be an operating state of the terminal device, such as a screen-off state, a screen-on state, a video playing state, a game operating state, a music playing state, and the like; the state of the environment in which the terminal device is located may be, for example, the illuminance of the surrounding environment is lower than a certain illuminance, the noise of the surrounding environment is lower than a certain degree, or the like; but also the time period in which the current time is. The video play state is a state in which a video is played. The game execution state is a state in which the game is executed. The music play state is a state in which music is played. The detecting the state of the terminal device may be detecting an application scene of the terminal device, such as a game running scene, a video playing scene, and the like.
103. And controlling the terminal equipment to execute corresponding operation according to the detected three-dimensional gesture and state of the terminal equipment.
The terminal device may preset a correspondence between stereoscopic gestures and the operations to be executed when those gestures are detected, and obtain the operation corresponding to a detected gesture from that correspondence. For example, the terminal device is preset with a target correspondence table containing first to sixth operations corresponding in sequence to first to sixth stereoscopic gestures; after determining the second stereoscopic gesture from the collected motion data, the terminal device executes the second operation. In the embodiment of the present invention, the terminal device may adjust the operation corresponding to a stereoscopic gesture, or delete the gesture. For example, if a certain stereoscopic gesture corresponds to a first operation, the user may change the operation corresponding to that gesture to a second operation.
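The target correspondence table can be sketched as a simple mapping, with adjustment and deletion modelling the user re-binding or removing a gesture (all names are illustrative):

```python
# Hypothetical target correspondence table: stereoscopic gesture -> operation,
# first to sixth gestures bound in sequence to first to sixth operations.
table = {f"gesture-{i}": f"operation-{i}" for i in range(1, 7)}

def operation_for(gesture):
    """Look up the operation to execute for a detected gesture."""
    return table.get(gesture)

def reassign(gesture, new_operation):
    """Adjust the operation bound to a gesture (e.g. first -> second)."""
    table[gesture] = new_operation

def delete_gesture(gesture):
    """Delete a stereoscopic gesture and its bound operation."""
    table.pop(gesture, None)
```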
When the terminal device detects the same stereoscopic gesture in different states, the operations to be executed may differ.
In an optional implementation manner, the state of the terminal device includes a first application scenario or a second application scenario;
the controlling the terminal device to execute corresponding operations according to the detected three-dimensional gesture and state of the terminal device includes:
controlling the terminal equipment to execute a first operation according to the detected three-dimensional gesture and the first application scene;
or
Controlling the terminal equipment to execute a second operation according to the detected three-dimensional gesture and the second application scene;
the first operation is different from the second operation.
The first application scenario and the second application scenario are different application scenarios. For example, the first application scenario is the screen-off state and the second application scenario is the screen-on state. It can be understood that the same stereoscopic gesture may correspond to different operations in different application scenarios. For example, in the scene of a running game, the terminal device detects the stereoscopic gesture of shaking the terminal device and captures a screenshot; in the scene of playing a video, the terminal device detects the same shaking gesture and adjusts the screen brightness. Optionally, when the terminal device detects the stereoscopic gesture of shaking the device left and right, the screen brightness is increased; when it detects the gesture of shaking the device up and down, the screen brightness is decreased. For another example, in the screen-off state the terminal device detects a stereoscopic gesture whose motion trajectory is an "O" and starts the flashlight application; in the screen-on state it detects the same O-trajectory gesture and starts the camera application.
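The scene-dependent examples above can be sketched as a lookup keyed by both the gesture and the application scene (the keys and operation names are illustrative):

```python
# Hypothetical (gesture, application scene) -> operation table mirroring
# the examples in the text: same gesture, different scene, different operation.
dispatch = {
    ("shake", "game-running"): "capture-screenshot",
    ("shake-left-right", "video-playing"): "brightness-up",
    ("shake-up-down", "video-playing"): "brightness-down",
    ("draw-O", "screen-off"): "start-flashlight",
    ("draw-O", "screen-on"): "start-camera",
}

def operation_for(gesture, scene):
    """Resolve the operation from the detected gesture plus the state."""
    return dispatch.get((gesture, scene))
```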
In the embodiment of the invention, the terminal equipment determines the operation corresponding to the detected three-dimensional gesture according to the state of the terminal equipment, and the implementation is simple.
In an optional implementation manner, the state of the terminal device includes a first application scenario or a second application scenario;
the controlling the terminal device to execute corresponding operations according to the detected three-dimensional gesture and state of the terminal device includes:
controlling the terminal equipment to execute a first operation according to the detected three-dimensional gesture and the first application scene;
or
Controlling the terminal equipment to execute a second operation according to the detected three-dimensional gesture and the second application scene;
the first operation is the same as the second operation.
In the embodiment of the invention, the terminal device may execute the same operation after detecting the same stereoscopic gesture in different application scenarios. For example, in any application scenario, the terminal device starts the memo application after detecting the stereoscopic gesture whose motion trajectory is a J. For another example, the terminal device captures a screenshot after detecting the stereoscopic gesture of shaking the device in a video call scene, a game running scene, a video playing scene, and so on. It can be understood that when the user completes a certain stereoscopic gesture in two or more application scenarios, the terminal device performs the same operation.
In the embodiment of the invention, the terminal equipment executes the same operation after detecting the three-dimensional gesture operation in a plurality of application scenes, so that the requirements of a user on executing the operation corresponding to the three-dimensional gesture in different scenes can be met, and the user experience is improved.
In an optional implementation manner, the state of the terminal device includes being in a first time period or a second time period;
the controlling the terminal device to execute corresponding operations according to the detected stereoscopic gesture and state of the terminal device includes:
under the condition that the terminal equipment is in the first time period, controlling the terminal equipment to execute a first operation;
or
Under the condition that the terminal equipment is in the second time period, controlling the terminal equipment to execute a second operation;
the first operation is different from the second operation.
The first time period and the second time period may be non-overlapping periods of time. For example, the first time period is 8:00 to 10:00, and the second time period is the rest of each day. A stereoscopic gesture may correspond to different operations in different time periods: specifically, the gesture corresponds to the first operation in the first time period and to the second operation in the second time period. In practical application, the user may set the time periods corresponding to each stereoscopic gesture and the operation corresponding to each time period.
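Time-period dispatch can be sketched with ordinary clock times; the periods and operation names below are illustrative:

```python
from datetime import time

# Hypothetical per-gesture schedule: (start, end, operation). Any time not
# covered falls through to a default, modelling "the rest of the day".
schedule = [(time(8, 0), time(10, 0), "first-operation")]
DEFAULT = "second-operation"

def operation_at(now, periods=schedule, default=DEFAULT):
    """Pick the operation for a detected gesture based on the current time."""
    for start, end, op in periods:
        if start <= now < end:
            return op
    return default
```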
In the embodiment of the invention, under the condition that one stereo gesture corresponds to at least two operations, the terminal equipment determines the operation to be executed according to the time period, and can realize a plurality of different functions through one stereo gesture, so that the operation is simple.
In the embodiment of the invention, the terminal equipment determines the operation required to be executed by the terminal equipment according to the detected three-dimensional gesture and the state of the terminal equipment; the control operation can be realized quickly, and the operation efficiency is high.
In an optional implementation manner, before controlling the terminal device to perform a corresponding operation according to the detected stereoscopic gesture and state of the terminal device, the method further includes:
and the terminal equipment detects and stores the corresponding relation between the three-dimensional gesture set by the user and the operation executed by the terminal equipment.
In the embodiment of the invention, the user can set the corresponding relation between the three-dimensional gesture and the operation executed by the terminal equipment, the operation is simple, and the individual requirements of different users are met.
Before executing 101 in fig. 1, the user may set a desired stereo gesture, and the following provides a specific example of setting the stereo gesture, which may include:
1) opening, in the setting interface, the start interface for stereoscopic gestures;
The setting interface is the interface corresponding to the setting icon on the desktop of the terminal device. In practical application, after the user clicks the setting icon, the terminal device displays the setting interface; after the user clicks the start interface corresponding to stereoscopic gestures within the setting interface, the terminal device displays the stereoscopic gesture setting interface.
2) The terminal equipment displays a three-dimensional gesture setting interface;
Fig. 5 exemplarily shows a stereoscopic gesture setting interface. As shown in fig. 5, the stereoscopic gesture setting interface may include a stereoscopic gesture switch control 501, an O-drawing stereoscopic gesture switch control 502, and an add-stereoscopic-gesture interface 503. The stereoscopic gesture switch control 501 is configured to receive the user's operation of turning the stereoscopic gesture function on or off; the O-drawing stereoscopic gesture switch control 502 is configured to receive the user's operation of turning the O-drawing stereoscopic gesture on or off; the add-stereoscopic-gesture interface 503 is configured to receive the user's operation of adding a stereoscopic gesture. The button in the stereoscopic gesture switch control 501 turns the stereoscopic gesture function on and off. If the button is slid to the right, the function is on: when an enabled stereoscopic gesture is detected, the operation corresponding to it is executed. Otherwise the function is off: no detected stereoscopic gesture triggers its corresponding operation. The O-drawing stereoscopic gesture in fig. 5 is a stereoscopic gesture preset by the terminal device or set by the user. After the add-stereoscopic-gesture interface 503 in fig. 5 receives a click operation from the user, the stereoscopic gesture adding interface is displayed. The stereoscopic gesture setting interface displays the preset or already-set stereoscopic gestures, and fig. 5 is only one example. The examples are intended to illustrate embodiments of the invention and should not be construed as limiting.
3) Receiving a three-dimensional gesture adding instruction;
the receiving of the stereoscopic gesture adding instruction may be detecting a click operation of adding a stereoscopic gesture interface in the stereoscopic gesture setting interface. As shown in fig. 5, after the user clicks the add stereoscopic gesture interface 503, the terminal device displays the stereoscopic gesture add interface.
4) Displaying a three-dimensional gesture adding interface, and collecting action data corresponding to the three-dimensional gesture;
FIG. 6 is a diagram illustrating an exemplary stereoscopic gesture add interface. As shown in fig. 6, the dotted line in the figure represents the trajectory of the stereoscopic gesture detected by the terminal device; 601 is a return interface, and after a user clicks the return interface, a three-dimensional gesture setting interface is displayed; 602, a cancel interface is used, and after a user clicks the cancel interface, the terminal device cancels the detected three-dimensional gesture; 603 is an input interface, and after the user clicks the input interface, the terminal device stores the detected stereo gesture and stores the action data corresponding to the stereo gesture. The acquiring of the motion data corresponding to the stereoscopic gesture may be acquiring the motion data of the terminal device by using a sensor such as a gyroscope. For example, in the case that the terminal device displays the stereoscopic gesture adding interface, the user performs a drawing C stereoscopic gesture as shown in fig. 3, the terminal device displays a trajectory of the drawing C stereoscopic gesture, and a dotted line in fig. 6 represents the trajectory of the drawing C stereoscopic gesture. In practical application, after the terminal device displays the stereoscopic gesture adding interface, a user holds the terminal device by hand to execute a certain stereoscopic gesture, and the terminal device detects the stereoscopic gesture and displays the track of the stereoscopic gesture. 
After the user finishes the three-dimensional gesture, if the user clicks a cancel interface, deleting the action data corresponding to the detected three-dimensional gesture, and beginning to detect the three-dimensional gesture of the user again; if the user clicks the input interface, storing the detected three-dimensional gesture and the action data corresponding to the three-dimensional gesture, and beginning to detect the three-dimensional gesture of the user again; and if the user clicks the return interface, returning to the stereoscopic gesture setting interface. The terminal device can train the stored motion data corresponding to each three-dimensional gesture to obtain the recognition model of each three-dimensional gesture, so that the three-dimensional gesture set by the user can be recognized accurately.
5) After detecting the action data input instruction, inputting the collected action data into a training model;
the detecting of the motion data input instruction may be detecting a click operation on an input interface in the stereoscopic gesture adding interface. 603 in fig. 6 is an input interface, and the terminal device detects a click operation on the input interface, that is, detects an action data input instruction. The training model may be a training model established based on a neural network. In practical application, after the terminal device displays the three-dimensional gesture adding interface, a user holds the terminal device by hand to execute a certain three-dimensional gesture, and clicks the input interface after the three-dimensional gesture is completed; the terminal equipment inputs the collected action data into the training model; the terminal device trains the input action data by using the training model to obtain a recognition model corresponding to the three-dimensional gesture. Optionally, after the terminal device displays the three-dimensional gesture adding interface, a timer in the terminal device starts timing and collects action data; and if the user does not finish a certain three-dimensional gesture before the timing duration of the timer reaches the time threshold, the terminal equipment cancels the action data acquired this time and displays a three-dimensional gesture setting interface. The time threshold may be 2 seconds, 3 seconds, 5 seconds, etc.
Optionally, after receiving the motion data input instruction, the terminal device restarts to collect the motion data, and inputs the collected motion data to the training model after receiving the motion data input instruction next time. That is, the user may input a plurality of training data. In practical application, after clicking the input interface, the user can continue to execute the three-dimensional gestures for many times, and the terminal device acquires corresponding action data and inputs the acquired action data into the training model.
6) Training the motion data by using the training model to obtain a recognition model corresponding to the motion data;
the specific training process is described in detail in the following examples.
7) And receiving a return instruction, and displaying the updated three-dimensional gesture setting interface.
The receiving the return instruction may be receiving a click operation on the return interface. Fig. 7 is a schematic diagram illustrating human-computer interaction. As shown in fig. 7, after the user inputs a stereoscopic gesture with a trajectory of C and clicks the return interface 701, the stereoscopic gesture setting interface displayed by the terminal device includes a C-drawing stereoscopic gesture switch control 702, that is, a newly added stereoscopic gesture switch control. Through the mode, the user can set the exclusive three-dimensional gesture of the user, and the personalized requirements of the user are met.
8) Receiving a setting operation aiming at a target three-dimensional gesture, and displaying a setting interface of the target three-dimensional gesture;
the target stereo gesture is a preset or set stereo gesture. The receiving of the setting operation for the target stereo gesture may be receiving a clicking operation for the target stereo gesture. FIG. 8 is a diagram illustrating an exemplary setup drawing C stereo gesture. As shown in fig. 8, each stereo gesture included in the stereo gesture setting interface is a start interface, and when one stereo gesture is clicked, the setting interface corresponding to the stereo gesture can be accessed; after the user clicks the three-dimensional gesture for drawing C, the terminal equipment displays a setting interface for drawing the three-dimensional gesture for drawing C; 801 is drawing a C three-dimensional gesture switch control; 802 is a name setting interface, through which a user can set a name corresponding to a C-drawing stereo gesture, such as "draw C screenshot", "draw C-camera", and the like; 803 is an operation setting interface through which a user can set operations corresponding to the drawing-C stereo gesture, such as screen capture operations, starting of a target application, and the like; the interface 804 is an optimized interface, after the interface is clicked, a user executes C-drawing three-dimensional gesture operation, the terminal device stores detected action data, and an identification model corresponding to the C-drawing three-dimensional gesture is optimized; and 805 is a delete interface, and after clicking the interface, the delete painting C stereo gesture.
Fig. 9 exemplarily shows another setting of the C-drawing stereoscopic gesture. As shown in fig. 9, after the user clicks the C-drawing stereoscopic gesture, the terminal device displays the setting interface for that gesture: 901 is a name setting interface, through which the user can set the name corresponding to the C-drawing stereoscopic gesture; 902 is an operation setting interface, through which the user can set the operation corresponding to the gesture, such as a screenshot operation or starting a target application; 903 is an optimization interface: after clicking it, the user performs the C-drawing stereoscopic gesture, and the terminal device stores the detected motion data and optimizes the recognition model corresponding to the gesture; 904 is a deletion interface: after it is clicked, the C-drawing stereoscopic gesture is deleted; 905 is an operation adding interface: after the user clicks it, an operation setting interface and two state setting bars are added. This example is only one implementation of the embodiment of the present invention; in practical applications, the name, the corresponding operation, and other information of a stereoscopic gesture may be set in other manners.
Fig. 10 is a schematic diagram exemplarily illustrating an operation of setting the drawing C stereo gesture corresponding to different scenes. As shown in fig. 10, after the user clicks the operation addition interface 1001, a first state setting column 1002, a second state setting column 1003, a first operation setting interface 1004, and a second operation setting interface 1005 are displayed in the stereoscopic gesture setting interface displayed by the terminal device; two scenes corresponding to the drawing C stereo gesture can be set through the two state setting bars, namely a first scene and a second scene; two operations corresponding to the drawing C three-dimensional gesture can be set through the two operation setting interfaces, wherein the first scene corresponds to the operation set by the first operation setting interface, and the second scene corresponds to the operation set by the second operation setting interface; after clicking the operation adding interface 1006, a state setting column and its corresponding operation setting interface are added. In a first scene, after detecting a three-dimensional gesture for drawing C, the terminal equipment executes the operation set by the first operation setting interface; and in a second scene, after the terminal equipment detects the C stereo drawing gesture, executing the operation of setting the second operation setting interface. It can be understood that a user can set a stereo gesture to correspond to different operations in different scenes.
Fig. 11 exemplarily shows a schematic diagram of operations of the setup drawing C stereoscopic gesture corresponding to different time periods. As shown in fig. 11, after the user clicks the operation adding interface 1101, a first state setting field 1102, a second state setting field 1103, a first operation setting interface 1104 and a second operation setting interface 1105 are displayed in the stereoscopic gesture setting interface displayed by the terminal device; two time periods corresponding to the drawing of the C stereo gesture can be set through the two state setting bars, namely a first time period and a second time period; two operations corresponding to the drawing C three-dimensional gesture can be set through the two operation setting interfaces, wherein the first time period corresponds to the operation set by the first operation setting interface, and the second time period corresponds to the operation set by the second operation setting interface; after clicking the operation adding interface 1106, adding a state setting column and an operation setting interface corresponding to the state setting column. In a first time period, after the terminal device detects a three-dimensional gesture for drawing C, executing the operation set by the first operation setting interface; and in a second time period, after the terminal equipment detects the C stereo drawing gesture, executing the operation set by the second operation setting interface. It is understood that a user may set a stereo gesture to correspond to different operations at different time periods.
In the embodiment of the invention, the user can set the three-dimensional gesture and the corresponding relation between the three-dimensional gesture and the operation executed by the terminal equipment, the operation is simple, and the individual requirements of the user can be met.
In an optional implementation manner, the controlling, according to the detected stereoscopic gesture and state of the terminal device, the terminal device to perform corresponding operations includes:
acquiring a target character corresponding to the motion track of the terminal equipment;
obtaining the operation corresponding to the target character according to the state of the terminal equipment;
and controlling the terminal equipment to execute the operation corresponding to the target character.
Obtaining the target character corresponding to the motion trajectory of the terminal device may be: determining the motion trajectory of the detected stereoscopic gesture from the collected motion data, and determining the target character corresponding to that trajectory. Different motion trajectories correspond to different characters: the motion trajectory in fig. 2 corresponds to the character "O", and the motion trajectory in fig. 3 corresponds to the character "C". Obtaining the operation corresponding to the target character according to the state of the terminal device may be: when the terminal device is in a first state, obtaining a first operation corresponding to the target character; when it is in a second state, obtaining a second operation corresponding to the target character; the first operation is different from the second operation.
For example, when the terminal device is in a screen-off state and detects a stereoscopic gesture drawing the letter "O" clockwise, it starts a flashlight application; when the terminal device is in a bright screen state and detects the same clockwise "O" gesture, it starts the camera application. It can be understood that the stereoscopic gesture with the motion trajectory "O" corresponds to the operation of starting the flashlight application in the screen-off state, and to the operation of starting the camera application in the bright screen state. For another example, in a music playing scene or a video playing scene, the terminal device increases the volume when a stereoscopic gesture with a motion trajectory of "U" is detected, and decreases the volume when a stereoscopic gesture with a motion trajectory of "D" is detected. For another example, when the terminal device is playing music in a music list and detects a stereoscopic gesture with a motion trajectory of "Z", it switches the played song to a song beginning with "Z" or a song by an artist beginning with "Z". For another example, the terminal device detects a stereoscopic gesture with a motion trajectory of "L" and starts a certain social application such as WeChat; when the terminal device displays the interface of that social application and detects a stereoscopic gesture with a motion trajectory of "L", it closes the social application. The above are only examples provided by the embodiment of the present invention; the embodiment of the present invention does not limit the motion trajectory of the stereoscopic gesture or the operation corresponding to it.
In the embodiment of the invention, the terminal equipment determines the operation to be executed according to the motion track of the terminal equipment, and the operation is simple and the accuracy is high; various operations can be performed quickly.
In an optional implementation manner, the stereoscopic gesture is an action that the amplitude of shaking the terminal device exceeds a first threshold and the frequency of shaking the terminal device exceeds a second threshold; or the three-dimensional gesture is an action of turning over the terminal equipment; the controlling the terminal device to execute corresponding operations includes:
intercepting an interface displayed by the terminal equipment;
or, adjusting the screen brightness of the terminal device;
or starting or closing the target application on the terminal equipment.
It can be understood that the above stereoscopic gesture is an action of shaking the terminal device. The first threshold may be 0.5 cm, 1 cm, 2 cm, 3 cm, etc. The second threshold may be 5 times per second, 3 times per second, once every 3 seconds, etc. In actual use the terminal device is often jostled unintentionally; limiting the stereoscopic gesture by the first threshold and the second threshold prevents the shaking gesture from being triggered by mistake. For example, when a user holds the terminal device and shakes it left and right, the terminal device increases the screen brightness; when the user shakes it up and down, the terminal device decreases the screen brightness. The target application may be a camera application, a payment application, a social application, a short message application, a calendar application, a mailbox application, a reading application, or the like; the embodiment of the present invention is not limited in this respect. For example, a user starts a mailbox application on the terminal device by shaking the terminal device; after the user shakes the terminal device again, the terminal device closes the mailbox application.
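The double-threshold test above (amplitude over the first threshold and frequency over the second) can be sketched as a simple predicate. The function name, units, and default threshold values (2 cm, 3 shakes per second, both drawn from the example ranges in the text) are assumptions for illustration only.

```python
def is_shake_gesture(amplitude_cm, shake_count, window_s,
                     amp_threshold_cm=2.0, freq_threshold_hz=3.0):
    """Return True only when BOTH thresholds are exceeded.

    amplitude_cm: peak displacement of the shake motion
    shake_count:  number of shakes observed in the window
    window_s:     length of the observation window in seconds
    """
    frequency_hz = shake_count / window_s
    # Requiring both conditions avoids false triggers from the
    # everyday jostling mentioned in the text.
    return amplitude_cm > amp_threshold_cm and frequency_hz > freq_threshold_hz
```

A gesture that is large but slow, or fast but tiny, is rejected, which is the stated point of using two thresholds rather than one.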
The action of turning over the terminal device may be an action of turning over the terminal device by an angle exceeding a target angle. The target angle may be 30°, 45°, 60°, 90°, 180°, or the like. For example, when the terminal device is turned from screen-up to screen-down, the turning angle is 180°. For example, the user may take a screenshot by turning over the terminal device. For another example, the user may start an application by turning over the terminal device. For another example, when the terminal device is playing a video and detects the flipping action, it closes the running video playing program. For another example, when the terminal device detects an incoming call and then detects the flipping action, it rejects the incoming call.
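Since the gyroscope mentioned later in the hardware description reports angular velocity, the flip angle can be estimated by integrating gyroscope samples over time and comparing the result against the target angle. This is only a sketch under that assumption; the sample rate, function names, and the 90° default are illustrative.

```python
def rotation_angle_deg(angular_velocity_dps, dt_s):
    # Integrate gyroscope angular-velocity samples (degrees per second)
    # over fixed time steps dt_s to estimate the total rotation angle.
    return abs(sum(w * dt_s for w in angular_velocity_dps))


def is_flip_gesture(angular_velocity_dps, dt_s, target_angle_deg=90.0):
    """A flip counts only when the accumulated rotation exceeds the target angle."""
    return rotation_angle_deg(angular_velocity_dps, dt_s) > target_angle_deg
```

For example, ten samples of 180°/s over 0.1 s steps integrate to 180° of rotation, which exceeds a 90° target and so registers as a flip.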
The intercepting of the interface displayed by the terminal device, the adjusting of the screen brightness of the terminal device, and the starting or closing of the target application on the terminal device are only specific examples of the control operations corresponding to the control instructions; the control instructions in the embodiment of the present invention may implement other control operations.
In the embodiment of the invention, a certain function can be quickly realized through the stereoscopic gesture of turning over or shaking the terminal device, and the operation is simple.
In an optional implementation manner, the first operation is to start a first application, and the second operation is to start a second application; the first application scene is in a bright screen state, and the second application scene is in a dark screen state;
or, the first operation is a screen capture operation, and the second operation is a brightness adjustment operation; the first application scene is a game interface, and the second application scene is a video interface.
The first application and the second application are different applications. The first application may be a reading application, a mailbox application, a calendar application, etc. The second application may be a camera application, an album application, a map application, or the like. The embodiment of the invention does not limit the first application, the second application, the first application scene or the second application scene. In the embodiment of the present invention, one stereoscopic gesture may correspond to three or more operations; the number of operations corresponding to one stereoscopic gesture is not limited. For example, the terminal device detects a first stereoscopic gesture in a screen-off state and starts a calendar application; it detects the same first stereoscopic gesture in a bright screen state and starts a map application. For another example, when a user is playing a game on the terminal device and shakes it, the terminal device captures the current game interface; when the user is playing a video on the terminal device and shakes it, the terminal device adjusts the brightness of the video interface.
In the embodiment of the invention, when the terminal device is in different states, the same stereoscopic gesture can cause the terminal device to execute different operations; different functions can thus be realized through one stereoscopic gesture, which can improve operation efficiency.
In an optional implementation manner, before the controlling the terminal device to execute the corresponding operation according to the detected stereoscopic gesture and state of the terminal device, the method further includes:
after receiving a three-dimensional gesture collection instruction, collecting training data, wherein the training data are N pieces of action data corresponding to N reference three-dimensional gestures, and the N reference three-dimensional gestures correspond to the three-dimensional gestures;
training the training data by adopting a neural network algorithm to obtain a recognition model corresponding to the three-dimensional gesture;
receiving a setting instruction, and setting the operation corresponding to the three-dimensional gesture according to the setting instruction;
the detecting the stereoscopic gesture includes:
and determining the three-dimensional gesture according to the recognition model.
The above N is an integer greater than 1. It can be understood that the training data is the motion data collected by the terminal device while the user, holding the terminal device, performs the stereoscopic gesture N times. For example, when a user wants to set a certain stereoscopic gesture, the user can perform the stereoscopic gesture multiple times while holding the terminal device after inputting a stereoscopic gesture collection instruction; the terminal device collects the corresponding motion data and trains on the collected motion data using a neural network algorithm to obtain the recognition model corresponding to the motion data; the terminal device can then recognize the stereoscopic gesture using the recognition model. The receiving of the stereoscopic gesture collection instruction by the terminal device may be detecting an operation of the user clicking the stereoscopic gesture adding interface. As shown in fig. 5 and fig. 6, after the user clicks the stereoscopic gesture adding interface, the terminal device displays the gesture adding interface. While the terminal device displays the gesture adding interface, it collects the N pieces of motion data corresponding to the N reference stereoscopic gestures performed by the user, obtaining the training data. For example, when the terminal device displays the gesture adding interface shown in fig. 6, the user performs the C-drawing gesture operation, and the terminal device collects the training data corresponding to the C-drawing gesture operation.
A specific example of establishing a recognition model corresponding to a stereo gesture is provided below:
and under the condition that the terminal equipment does not establish a recognition model corresponding to the three-dimensional gesture, the terminal equipment trains the training data to obtain the recognition model corresponding to the three-dimensional gesture.
In the case that the terminal device has already established a recognition model corresponding to a stereoscopic gesture, the training data is recognized using that recognition model to obtain a recognition result, wherein the recognition result indicates the stereoscopic gesture corresponding to the training data; the recognition result, a first interface and a second interface are displayed, wherein the first interface is an interface for confirming the recognition result, and the second interface is an interface for denying the recognition result. If a clicking operation on the first interface is received, the recognition model corresponding to the stereoscopic gesture is optimized using the training data; if a clicking operation on the second interface is received, a label corresponding to the training data is added in the recognition model corresponding to the stereoscopic gesture, the motion data is trained using the training model, and the stereoscopic gesture recognition model is updated. The updated stereoscopic gesture recognition model can recognize the stereoscopic gesture. Fig. 12 exemplarily shows a schematic diagram of a recognition result interface, where 1201 in the figure is the first interface, 1202 is the second interface, and 1203 is the recognition result.
Optionally, in the embodiment of the present invention, the terminal device may train on the training data using other algorithms, such as a deep learning algorithm or another machine learning algorithm, to obtain the recognition model. The process by which the terminal device trains on the training data to obtain the recognition model corresponding to the stereoscopic gesture is as follows:
1) the terminal equipment inputs training data into a training model;
the training model can adopt a three-layer neural network, the number of nodes of an input layer can be 300, the number of nodes of a hidden layer can be 15, 20, 25 and the like, and the number of nodes of an output layer is 3. The training data at least includes 10 groups of data corresponding to the same stereo gesture, and each row in table 1 represents one group of data. Alternatively, the training model may adopt other neural networks, and the embodiment of the present invention is not limited.
2) Training the training data by using the training model;
optionally, 70% of the collected data is used as training data and the remaining 30% as verification data. To improve the accuracy of the training, a cross-training method may be used: a different 70% of the data is selected for training each time, and the precision can be continuously improved through training until the precision of the recognition model exceeds 90%.
3) And stopping training under the condition that the result of the verification data is more than 90%, and obtaining the recognition model corresponding to the training data.
Optionally, the training is restarted if the training time exceeds 1 minute. Optionally, if the result on the verification data is less than or equal to 90%, the recognition model with the highest recognition rate is saved, and the user is prompted either to continue inputting the stereoscopic gesture or to abandon it. Optionally, a plurality of recognition models may be trained, and the recognition model with the highest recognition rate is selected as the final recognition model.
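The training procedure in steps 1) to 3), repeated 70%/30% splits with an accuracy-based stopping rule, can be sketched as follows. This is an illustrative outline only: the function names, the pluggable `train_step`/`evaluate` callbacks, and the `max_rounds` cap (standing in for the 1-minute time limit) are assumptions, not part of the embodiment.

```python
import random


def split_training_data(samples, train_fraction=0.7, seed=0):
    # Randomly partition the collected motion data into a training set
    # and a verification set (70% / 30%, per the scheme in the text).
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]


def cross_train(samples, train_step, evaluate, target_accuracy=0.9,
                max_rounds=10):
    # Each round re-draws the 70% training subset (cross-training);
    # training stops as soon as verification accuracy exceeds the
    # target, or after max_rounds rounds.
    best = 0.0
    for round_no in range(max_rounds):
        train, validate = split_training_data(samples, seed=round_no)
        model = train_step(train)
        accuracy = evaluate(model, validate)
        best = max(best, accuracy)
        if accuracy > target_accuracy:
            return model, accuracy
    return None, best
```

The 3-layer network itself (300 input nodes, 15 to 25 hidden nodes, 3 output nodes) would be supplied through `train_step`; this sketch only captures the split-train-verify loop around it.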
The terminal device can quickly establish the recognition model corresponding to the training data by executing the operation.
In the embodiment of the present invention, the operation corresponding to the stereoscopic gesture may be set in a manner described in the specific example of setting the stereoscopic gesture.
In the embodiment of the invention, terminal equipment acquires training data and determines a three-dimensional gesture and a recognition model corresponding to the training data; on one hand, the recognition model corresponding to the three-dimensional gesture can be quickly established; on the other hand, the user can quickly set the stereo gesture and the operation corresponding to the stereo gesture, and the operation is simple.
In an optional implementation manner, after the controlling the terminal device to perform the corresponding operation according to the detected stereoscopic gesture and state of the terminal device, the method further includes:
and updating the recognition model by utilizing the action data corresponding to the three-dimensional gesture.
The updating of the recognition model using the motion data may be: inputting the motion data together with the existing training data into the training model and training to obtain a new recognition model, where the existing training data is the training data corresponding to the current recognition model. It can be understood that the more training data there is, the higher the accuracy with which the recognition model recognizes the stereoscopic gesture; the recognition model can thus be continuously optimized using the motion data. That is, the probability that the recognition model correctly recognizes the stereoscopic gesture keeps increasing.
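The update described above, retraining on the union of the stored training data and the newly captured motion data, can be sketched in a few lines. The names and the injected `train` callback are hypothetical.

```python
def update_recognition_model(existing_data, new_motion_data, train):
    """Retrain on stored training data plus newly captured motion data.

    train: callable that fits a recognition model on a data list.
    Returns the new model and the enlarged data set to store for
    the next update.
    """
    combined = existing_data + new_motion_data
    return train(combined), combined
```

Keeping the combined set around is what lets each successful recognition grow the training corpus, which is the mechanism behind the "more data, higher accuracy" claim in the text.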
In the embodiment of the invention, the recognition model can be further optimized by utilizing the action data, and the realization is simple.
An embodiment of the present invention provides a screen capture method, as shown in fig. 13, which may include:
1301. detecting a three-dimensional gesture, wherein the three-dimensional gesture is related to a motion track of the terminal equipment in a three-dimensional space;
the specific implementation is the same as 101 in fig. 1. The three-dimensional gesture is a three-dimensional gesture for shaking the terminal device.
1302. Detecting the state of the terminal equipment;
the specific implementation is the same as 102 in fig. 1. The terminal equipment is in a bright screen state.
1303. Controlling the terminal equipment to execute screen capture operation according to the detected three-dimensional gesture and state of the terminal equipment to obtain a screen capture;
the specific implementation is the same as 103 in fig. 1. In the embodiment of the invention, the terminal equipment executes screen capture operation after detecting the three-dimensional gesture of shaking the terminal equipment in a bright screen state, namely, an interface currently displayed on a screen of the terminal equipment is captured.
1304. Displaying prompt information;
the prompt message may be 'turning over the terminal device, and keeping the screenshot'.
1305. Judging whether a three-dimensional gesture for turning the terminal equipment is detected;
if so, 1306 is performed, otherwise 1307 is performed. The determining whether the stereo gesture for flipping the terminal device is detected may be determining whether the stereo gesture for flipping the terminal device is detected within a preset time period. The preset time period may be 3 seconds, 5 seconds, 10 seconds, etc.
1306. Storing the screenshot;
the storing of the screenshot can store the screenshot in an album, a gallery and the like.
1307. And deleting the screenshot.
In practical application, a user can intercept an interface displayed by the terminal equipment by shaking the terminal equipment to obtain a screenshot; if the terminal equipment is turned over within a preset time length, storing the screenshot; otherwise, the screenshot is deleted. The user can also realize screen capture operation through other stereo gestures. For example, when the terminal device plays a video, a user holds the terminal device to draw a semicircle anticlockwise, the terminal device performs screen capture operation to obtain a screenshot, and prompt information of turning over the terminal device and keeping the screenshot is displayed; and the user turns over the terminal equipment and stores the screenshot.
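Steps 1301 to 1307 form a small state machine: a shake captures the screen, and a flip within the preset time length saves the capture, which is otherwise deleted. A minimal sketch of that flow, with hypothetical names and a 5-second default drawn from the example values in the text:

```python
def screenshot_flow(events, preset_timeout_s=5.0):
    """Replay (timestamp_s, gesture) events through the screenshot flow.

    Returns "saved", "deleted", or "no_capture".
    """
    capture_time = None
    for t, gesture in events:
        if gesture == "shake":
            capture_time = t  # screenshot taken; prompt displayed (step 1304)
        elif gesture == "flip" and capture_time is not None:
            if t - capture_time <= preset_timeout_s:
                return "saved"      # step 1306: store the screenshot
            return "deleted"        # flip arrived too late (step 1307)
    # A capture with no flip at all is likewise deleted.
    return "deleted" if capture_time is not None else "no_capture"
```

Driving it with a shake at t=0 and a flip at t=2 yields "saved", while a flip at t=9 (past the 5-second window) yields "deleted".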
In the embodiment of the invention, the screen capturing operation can be quickly completed, and the operation is simple.
Fig. 14 shows a functional block diagram of a terminal device according to an embodiment of the present invention. The functional blocks of the terminal device may implement the inventive arrangements in hardware, software or a combination of hardware and software. Those skilled in the art will appreciate that the functional blocks described in FIG. 14 may be combined or separated into sub-blocks to implement the present inventive scheme. Thus, the above description of the invention may support any possible combination or separation or further definition of the functional blocks described below.
As shown in fig. 14, the terminal device may include:
a first detection unit 1401, configured to detect a stereoscopic gesture, where the stereoscopic gesture is related to a motion trajectory of the terminal device in a stereoscopic space;
a second detecting unit 1402, configured to detect a state of the terminal device;
a control unit 1403, configured to control the terminal device to execute a corresponding operation according to the detected stereoscopic gesture and state of the terminal device.
The specific implementation method is the same as that in fig. 1, and is not described in detail here.
In an optional implementation manner, the state of the terminal device includes a first application scenario or a second application scenario;
the control unit 1403 is specifically configured to control the terminal device to execute a first operation according to the detected stereo gesture and the first application scene; or specifically, the terminal device is controlled to execute a second operation according to the detected stereoscopic gesture and the second application scene; the first operation is different from the second operation.
In an optional implementation manner, the state of the terminal device includes a first application scenario or a second application scenario;
the control unit 1403 is specifically configured to control the terminal device to execute a first operation according to the detected stereo gesture and the first application scene; or specifically, the terminal device is controlled to execute a second operation according to the detected stereoscopic gesture and the second application scene; the first operation is the same as the second operation.
In an optional implementation manner, the first detection unit 1401 is further configured to detect a stereoscopic gesture set by a user; the above terminal device further includes:
the storage unit 1404 is configured to store a corresponding relationship between the stereoscopic gesture set by the user and the operation performed by the terminal device.
In an optional implementation manner, the state of the terminal device includes being in a first time period or a second time period;
the control unit 1403 is specifically configured to control the terminal device to execute a first operation when the terminal device is in the first time period; or specifically configured to control the terminal device to execute a second operation when the terminal device is in the second time period; the first operation is different from the second operation.
In an optional implementation manner, the first detecting unit 1401 is specifically configured to obtain a target character corresponding to a motion trajectory of the terminal device; obtaining the operation corresponding to the target character according to the state of the terminal equipment;
the control unit 1403 is specifically configured to control the terminal device to execute an operation corresponding to the target character.
In an optional implementation manner, the stereoscopic gesture is an action that the amplitude of shaking the terminal device exceeds a first threshold and the frequency of shaking the terminal device exceeds a second threshold; or the three-dimensional gesture is an action of turning over the terminal equipment;
the control unit 1403 is specifically configured to intercept an interface displayed by the terminal device;
or, specifically, the method is used for adjusting the screen brightness of the terminal device;
or, specifically, the method is used to start or close the target application on the terminal device.
In an optional implementation manner, the control unit 1403 is specifically configured to control the terminal device to perform a screen capture operation when the terminal device detects the first stereo gesture and is in a bright screen state; the first three-dimensional gesture corresponds to the screen capturing operation;
or, the control unit is specifically configured to control the terminal device to execute a volume adjustment operation when the terminal device detects the second stereo gesture and is in a music playing state;
or, the control unit is specifically configured to control the terminal device to execute a brightness adjustment operation when the terminal device detects the third stereo gesture and is in a video playing state;
or, the control unit is specifically configured to control the terminal device to start the flashlight function when the terminal device detects the fourth stereo gesture and the illuminance of the environment is smaller than the first illuminance;
or, the control unit is specifically configured to control the terminal device to start the photographing function when the terminal device detects the fifth stereo gesture and the illuminance of the environment where the terminal device is located is greater than the second illuminance.
In an optional implementation manner, the first operation is to start a first application, and the second operation is to start a second application; the first application scene is in a bright screen state, and the second application scene is in a dark screen state;
or, the first operation is a screen capture operation, and the second operation is a brightness adjustment operation; the first application scene is a game interface, and the second application scene is a video interface.
In an optional implementation manner, the terminal device further includes:
a receiving unit 1405, configured to receive a stereoscopic gesture collecting instruction;
an acquisition unit 1406, configured to acquire training data, where the training data are N pieces of motion data corresponding to N reference stereo gestures, and the N reference stereo gestures all correspond to the stereo gesture;
the receiving unit 1405 is further configured to receive a setting instruction, and set an operation corresponding to the stereoscopic gesture according to the setting instruction;
the first detection unit is specifically configured to determine the three-dimensional gesture according to the recognition model.
In an optional implementation manner, the terminal device further includes:
an updating unit 1407 is configured to update the recognition model by using the motion data corresponding to the stereoscopic gesture.
Fig. 15 is a schematic block diagram of a terminal device according to another embodiment of the present invention. As shown in fig. 15, the terminal device in this embodiment may include: one or more processors 1501; one or more input devices 1502, one or more output devices 1503, and a memory 1504. The processor 1501, the input device 1502, the output device 1503, and the memory 1504 are connected by a bus 1505. The memory 1504 is used to store computer programs comprising program instructions, and the processor 1501 is used to execute the program instructions stored by the memory 1504. The processor 1501 is configured to call the above program instructions to execute: detecting a stereoscopic gesture, wherein the stereoscopic gesture is related to a motion trajectory of the terminal device in a stereoscopic space; detecting the state of the terminal device; and controlling the terminal device to execute a corresponding operation according to the detected stereoscopic gesture and state of the terminal device.
It should be understood that in this embodiment, the processor 1501 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 1501 can implement the functions of the control unit 1403, the second detection unit 1402, and the updating unit 1407 shown in fig. 14. Accordingly, the processor 1501 may also implement the other data processing and control functions in the foregoing method embodiments.
The input device 1502 may include a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of a fingerprint), a microphone, a gravity sensor, a gyroscope, etc., and the output device 1503 may include a display (an LCD or the like), a speaker, etc. The gravity sensor is used to detect acceleration, and the gyroscope is used to detect angular velocity. The input device 1502 can implement the functions of the first detection unit 1401, the receiving unit 1405, and the acquisition unit 1406 shown in fig. 14. Specifically, the input device 1502 may receive an instruction sent by a user through the touch pad, and may collect the motion data through the gravity sensor, the gyroscope, or the like.
The memory 1504 may include read-only memory and random access memory, and provides instructions and data to the processor 1501. A portion of the memory 1504 may also include non-volatile random access memory. For example, the memory 1504 may also store device type information. The memory 1504 described above may implement the functionality of the storage unit 1404 shown in fig. 14.
In a specific implementation, the processor 1501, the input device 1502, the output device 1503, and the memory 1504 described in the embodiment of the present invention may execute the implementation described in the control method of the terminal device provided in the embodiment of the present invention, or may execute the implementation of the terminal device described in the embodiment of the present invention, which is not described herein again.
In another embodiment of the present invention, a computer-readable storage medium is provided, the computer-readable storage medium storing a computer program, the computer program comprising program instructions that when executed by a processor implement: acquiring action data, wherein the action data is motion state data of the terminal equipment; detecting a three-dimensional gesture, wherein the three-dimensional gesture is related to a motion track of the terminal equipment in a three-dimensional space; detecting the state of the terminal equipment; and controlling the terminal equipment to execute corresponding operation according to the detected three-dimensional gesture and state of the terminal equipment.
The computer readable storage medium may be an internal storage unit of the device according to any of the embodiments, for example, a hard disk or a memory of the device. The computer readable storage medium may be an external storage device of the apparatus, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided in the apparatus. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the apparatus. The computer-readable storage medium is used for storing the computer program and other programs and data required by the apparatus. The above-described computer-readable storage medium may also be used to temporarily store data that has been output or is to be output.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (24)
- A control method of a terminal device, comprising:detecting a three-dimensional gesture, wherein the three-dimensional gesture is related to a motion track of the terminal device in a three-dimensional space;detecting the state of the terminal equipment;and controlling the terminal equipment to execute corresponding operation according to the detected three-dimensional gesture and state of the terminal equipment.
- The method of claim 1,the state of the terminal equipment comprises a first application scene or a second application scene;the controlling the terminal device to execute corresponding operations according to the detected three-dimensional gesture and state of the terminal device comprises:controlling the terminal equipment to execute a first operation according to the detected three-dimensional gesture and the first application scene;orControlling the terminal equipment to execute a second operation according to the detected stereoscopic gesture and the second application scene;the first operation is different from the second operation.
- The method of claim 1,the state of the terminal equipment comprises a first application scene or a second application scene;the controlling the terminal device to execute corresponding operations according to the detected three-dimensional gesture and state of the terminal device comprises:controlling the terminal equipment to execute a first operation according to the detected three-dimensional gesture and the first application scene;orControlling the terminal equipment to execute a second operation according to the detected stereoscopic gesture and the second application scene;the first operation is the same as the second operation.
- The method according to any one of claims 1 to 3, wherein before the controlling the terminal device to execute a corresponding operation according to the detected stereoscopic gesture and the state of the terminal device, the method further comprises: detecting and storing, by the terminal device, a correspondence between a stereoscopic gesture set by the user and an operation to be executed by the terminal device.
- The method of claim 4, wherein the state of the terminal device comprises being in a first time period or in a second time period; and the controlling the terminal device to execute a corresponding operation according to the detected stereoscopic gesture and the state of the terminal device comprises: controlling the terminal device to execute a first operation when the terminal device is in the first time period; or controlling the terminal device to execute a second operation when the terminal device is in the second time period; wherein the first operation is different from the second operation.
- The method according to claim 4, wherein the controlling the terminal device to execute a corresponding operation according to the detected stereoscopic gesture and the state of the terminal device comprises: obtaining a target character corresponding to the motion trajectory of the terminal device; obtaining, according to the state of the terminal device, the operation corresponding to the target character; and controlling the terminal device to execute the operation corresponding to the target character.
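A hypothetical sketch of the two-stage lookup in claim 6: the motion trajectory is first resolved to a target character (e.g. drawing "C" in the air), and the character together with the device state then selects the operation. The character/operation pairs and state names are illustrative only.

```python
# Illustrative (character, state) -> operation table; a real system would
# obtain the character from a trajectory recognizer, which is out of scope here.
CHARACTER_OPERATIONS = {
    ("C", "screen_off"): "launch_camera",
    ("M", "screen_off"): "launch_music_player",
    ("C", "screen_on"): "open_contacts",
}

def operation_for_trajectory(target_character, state):
    """Map the recognized target character plus device state to an operation."""
    return CHARACTER_OPERATIONS.get((target_character, state))
```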
- The method of claim 4, wherein the stereoscopic gesture is an action of shaking the terminal device with an amplitude exceeding a first threshold and at a frequency exceeding a second threshold, or an action of flipping the terminal device; and the controlling the terminal device to execute a corresponding operation comprises: capturing the interface displayed by the terminal device; or adjusting the screen brightness of the terminal device; or starting or closing a target application on the terminal device.
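The shake criterion in claim 7 requires both an amplitude above a first threshold and a frequency above a second threshold. The following is a minimal sketch of that dual test over accelerometer samples; the threshold values, sampling model, and zero-crossing frequency estimate are all assumptions, not taken from the patent.

```python
def is_shake(samples, rate_hz, amp_threshold=12.0, freq_threshold=2.0):
    """Return True only when both claim-7 conditions hold.

    samples: sequence of accelerometer magnitudes; rate_hz: sampling rate.
    Threshold defaults are illustrative assumptions.
    """
    # Condition 1: motion amplitude exceeds the first threshold.
    amplitude = max(samples) - min(samples)
    # Condition 2: estimate shake frequency by counting crossings of the mean
    # (two crossings per full oscillation).
    mean = sum(samples) / len(samples)
    crossings = sum(
        1 for a, b in zip(samples, samples[1:])
        if (a - mean) * (b - mean) < 0
    )
    duration_s = len(samples) / rate_hz
    freq = crossings / (2 * duration_s)
    return amplitude > amp_threshold and freq > freq_threshold
```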
- The method according to claim 4, wherein the controlling the terminal device to execute a corresponding operation according to the detected stereoscopic gesture and the state of the terminal device comprises: controlling the terminal device to execute a screen-capture operation when the terminal device detects a first stereoscopic gesture and is in a screen-on state; or controlling the terminal device to execute a volume-adjustment operation when the terminal device detects a second stereoscopic gesture and is in a music-playing state; or controlling the terminal device to execute a brightness-adjustment operation when the terminal device detects a third stereoscopic gesture and is in a video-playing state; or controlling the terminal device to start a flashlight function when the terminal device detects a fourth stereoscopic gesture and the illuminance of the environment in which the terminal device is located is less than a first illuminance; or controlling the terminal device to start a photographing function when the terminal device detects a fifth stereoscopic gesture and the illuminance of the environment in which the terminal device is located is greater than a second illuminance.
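The five concrete rules of claim 8 can be encoded as gesture/state-predicate pairs. This sketch is illustrative: the gesture labels, state keys, and the two illuminance thresholds (10 lux and 50 lux) are assumptions, since the claim leaves the actual values open.

```python
def choose_operation(gesture, state):
    """Apply the claim-8 rules; `state` is a dict of device readings.

    Gesture names and illuminance thresholds are hypothetical placeholders.
    """
    if gesture == "gesture1" and state.get("screen") == "on":
        return "screenshot"
    if gesture == "gesture2" and state.get("playing") == "music":
        return "adjust_volume"
    if gesture == "gesture3" and state.get("playing") == "video":
        return "adjust_brightness"
    if gesture == "gesture4" and state.get("lux", 0) < 10:   # "first illuminance"
        return "flashlight_on"
    if gesture == "gesture5" and state.get("lux", 0) > 50:   # "second illuminance"
        return "open_camera"
    return None
```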
- The method of claim 2, wherein the first operation is launching a first application and the second operation is launching a second application, the first application scenario being a screen-on state and the second application scenario being a screen-off state; or the first operation is a screen-capture operation and the second operation is a brightness-adjustment operation, the first application scenario being a game interface and the second application scenario being a video interface.
- The method according to any one of claims 1 to 9, wherein before the controlling the terminal device to execute a corresponding operation according to the detected stereoscopic gesture and the state of the terminal device, the method further comprises: collecting training data after receiving a stereoscopic-gesture collection instruction, wherein the training data are N pieces of action data corresponding to N reference stereoscopic gestures, and the N reference stereoscopic gestures correspond to the stereoscopic gesture; training on the training data with a neural-network algorithm to obtain a recognition model corresponding to the stereoscopic gesture; and receiving a setting instruction and setting, according to the setting instruction, the operation corresponding to the stereoscopic gesture; and wherein the detecting a stereoscopic gesture comprises: determining the stereoscopic gesture according to the recognition model.
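A simplified sketch of the recognition-model lifecycle in claim 10 (and the update step of claim 11): N action-data samples per reference gesture are fitted into a model that later classifies new trajectories. The claim specifies a neural-network algorithm; to stay dependency-free, this stand-in uses a nearest-centroid model over fixed-length feature vectors, which is an assumption, not the patent's method.

```python
class GestureModel:
    """Nearest-centroid stand-in for the claimed neural-network model."""

    def __init__(self):
        self.centroids = {}  # gesture label -> mean feature vector

    def train(self, training_data):
        """training_data: {label: [feature_vector, ...]} with N samples each."""
        for label, samples in training_data.items():
            dim = len(samples[0])
            self.centroids[label] = [
                sum(s[i] for s in samples) / len(samples) for i in range(dim)
            ]

    def recognize(self, features):
        """Return the label whose centroid is closest to the feature vector."""
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(features, c))
        return min(self.centroids, key=lambda lbl: dist(self.centroids[lbl]))

    def update(self, label, features):
        """Online update (claim 11): blend new action data into the centroid."""
        c = self.centroids[label]
        self.centroids[label] = [(a + b) / 2 for a, b in zip(c, features)]
```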
- The method according to claim 10, wherein after the controlling the terminal device to execute a corresponding operation according to the detected stereoscopic gesture and the state of the terminal device, the method further comprises: updating the recognition model with the action data corresponding to the stereoscopic gesture.
- A terminal device, comprising: a first detection unit, configured to detect a stereoscopic gesture, wherein the stereoscopic gesture is related to a motion trajectory of the terminal device in three-dimensional space; a second detection unit, configured to detect a state of the terminal device; and a control unit, configured to control the terminal device to execute a corresponding operation according to the detected stereoscopic gesture and the state of the terminal device.
- The terminal device according to claim 12, wherein the state of the terminal device comprises a first application scenario or a second application scenario; and the control unit is specifically configured to control the terminal device to execute a first operation according to the detected stereoscopic gesture and the first application scenario, or to control the terminal device to execute a second operation according to the detected stereoscopic gesture and the second application scenario; wherein the first operation is different from the second operation.
- The terminal device according to claim 12, wherein the state of the terminal device comprises a first application scenario or a second application scenario; and the control unit is specifically configured to control the terminal device to execute a first operation according to the detected stereoscopic gesture and the first application scenario, or to control the terminal device to execute a second operation according to the detected stereoscopic gesture and the second application scenario; wherein the first operation is the same as the second operation.
- The terminal device according to any one of claims 12 to 14, wherein the first detection unit is further configured to detect a stereoscopic gesture set by the user; and the terminal device further comprises: a storage unit, configured to store a correspondence between the stereoscopic gesture set by the user and an operation to be executed by the terminal device.
- The terminal device of claim 15, wherein the state of the terminal device comprises being in a first time period or in a second time period; and the control unit is specifically configured to control the terminal device to execute a first operation when the terminal device is in the first time period, or to control the terminal device to execute a second operation when the terminal device is in the second time period; wherein the first operation is different from the second operation.
- The terminal device according to claim 15, wherein the first detection unit is specifically configured to obtain a target character corresponding to the motion trajectory of the terminal device, and to obtain, according to the state of the terminal device, the operation corresponding to the target character; and the control unit is specifically configured to control the terminal device to execute the operation corresponding to the target character.
- The terminal device of claim 15, wherein the stereoscopic gesture is an action of shaking the terminal device with an amplitude exceeding a first threshold and at a frequency exceeding a second threshold, or an action of flipping the terminal device; and the control unit is specifically configured to capture the interface displayed by the terminal device, or to adjust the screen brightness of the terminal device, or to start or close a target application on the terminal device.
- The terminal device of claim 15, wherein the control unit is specifically configured to control the terminal device to execute a screen-capture operation when the terminal device detects a first stereoscopic gesture and is in a screen-on state; or to control the terminal device to execute a volume-adjustment operation when the terminal device detects a second stereoscopic gesture and is in a music-playing state; or to control the terminal device to execute a brightness-adjustment operation when the terminal device detects a third stereoscopic gesture and is in a video-playing state; or to control the terminal device to start a flashlight function when the terminal device detects a fourth stereoscopic gesture and the illuminance of the environment in which the terminal device is located is less than a first illuminance; or to control the terminal device to start a photographing function when the terminal device detects a fifth stereoscopic gesture and the illuminance of the environment in which the terminal device is located is greater than a second illuminance.
- The terminal device according to claim 13, wherein the first operation is launching a first application and the second operation is launching a second application, the first application scenario being a screen-on state and the second application scenario being a screen-off state; or the first operation is a screen-capture operation and the second operation is a brightness-adjustment operation, the first application scenario being a game interface and the second application scenario being a video interface.
- The terminal device according to any one of claims 12 to 20, further comprising: a receiving unit, configured to receive a stereoscopic-gesture collection instruction; and an acquisition unit, configured to collect training data, wherein the training data are N pieces of action data corresponding to N reference stereoscopic gestures, and the N reference stereoscopic gestures correspond to the stereoscopic gesture; wherein the receiving unit is further configured to receive a setting instruction and to set, according to the setting instruction, the operation corresponding to the stereoscopic gesture; and the first detection unit is specifically configured to determine the stereoscopic gesture according to the recognition model.
- The terminal device of claim 21, further comprising: an updating unit, configured to update the recognition model with the action data corresponding to the stereoscopic gesture.
- A terminal device, comprising a processor and a memory interconnected with each other, wherein the memory is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the method according to any one of claims 1 to 11.
- A computer-readable storage medium storing a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any one of claims 1 to 11.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2017/118117 WO2019119450A1 (en) | 2017-12-22 | 2017-12-22 | Terminal device control method, terminal device and computer readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110573999A true CN110573999A (en) | 2019-12-13 |
Family
ID=66992936
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201780090095.2A Pending CN110573999A (en) | 2017-12-22 | 2017-12-22 | Terminal device control method, terminal device, and computer-readable medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110573999A (en) |
WO (1) | WO2019119450A1 (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103812996A (en) * | 2012-11-08 | 2014-05-21 | 腾讯科技(深圳)有限公司 | Information prompting method and apparatus, and terminal |
CN104750239A (en) * | 2013-12-30 | 2015-07-01 | 中国移动通信集团公司 | Application method and equipment based on spatial gesture access terminal equipment |
CN105094659A (en) * | 2014-05-19 | 2015-11-25 | 中兴通讯股份有限公司 | Method and terminal for operating applications based on gestures |
CN105635948A (en) * | 2015-12-31 | 2016-06-01 | 上海创功通讯技术有限公司 | Data sending method and data sending module |
CN105843533A (en) * | 2016-03-15 | 2016-08-10 | 乐视网信息技术(北京)股份有限公司 | List calling method and device |
CN105867818A (en) * | 2016-03-30 | 2016-08-17 | 乐视控股(北京)有限公司 | Terminal interaction control device |
US20160248899A1 (en) * | 2014-10-08 | 2016-08-25 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
CN105988693A (en) * | 2015-02-03 | 2016-10-05 | 中兴通讯股份有限公司 | Application control method and device |
CN106161556A (en) * | 2015-04-20 | 2016-11-23 | 中兴通讯股份有限公司 | The control method of a kind of terminal and device |
CN106227350A (en) * | 2016-07-28 | 2016-12-14 | 青岛海信电器股份有限公司 | Method and the smart machine that operation controls is carried out based on gesture |
CN106249990A (en) * | 2016-07-19 | 2016-12-21 | 宇龙计算机通信科技(深圳)有限公司 | The management method of application program, managing device and terminal |
CN106254551A (en) * | 2016-09-30 | 2016-12-21 | 北京珠穆朗玛移动通信有限公司 | The document transmission method of a kind of dual system and mobile terminal |
CN106878543A (en) * | 2016-12-29 | 2017-06-20 | 宇龙计算机通信科技(深圳)有限公司 | A kind of terminal operation management method, device and terminal |
CN106973330A (en) * | 2017-03-20 | 2017-07-21 | 腾讯科技(深圳)有限公司 | A kind of screen live broadcasting method, device and system |
CN107239199A (en) * | 2017-06-29 | 2017-10-10 | 珠海市魅族科技有限公司 | It is a kind of to operate the method responded and relevant apparatus |
Also Published As
Publication number | Publication date |
---|---|
WO2019119450A1 (en) | 2019-06-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210034192A1 (en) | Systems and methods for identifying users of devices and customizing devices to users | |
CN105224195B (en) | Terminal operation method and device | |
CN107239535A (en) | Similar pictures search method and device | |
CN103955275B (en) | Application control method and apparatus | |
CN104092932A (en) | Acoustic control shooting method and device | |
WO2019105237A1 (en) | Image processing method, computer device, and computer-readable storage medium | |
CN106020796A (en) | Interface display method and device | |
CN104991910B (en) | Photograph album creation method and device | |
CN107870999B (en) | Multimedia playing method, device, storage medium and electronic equipment | |
CN109871843A (en) | Character identifying method and device, the device for character recognition | |
CN112052897B (en) | Multimedia data shooting method, device, terminal, server and storage medium | |
CN109446961A (en) | Pose detection method, device, equipment and storage medium | |
CN103955274B (en) | Application control method and apparatus | |
CN105335714B (en) | Photo processing method, device and equipment | |
CN113900577B (en) | Application program control method and device, electronic equipment and storage medium | |
CN107766820A (en) | Image classification method and device | |
CN108108671A (en) | Description of product information acquisition method and device | |
CN106791092A (en) | The searching method and device of contact person | |
CN105867794A (en) | Acquisition method and device of associated information of screen locking wallpaper | |
CN108256071B (en) | Method and device for generating screen recording file, terminal and storage medium | |
CN103914151A (en) | Information display method and device | |
CN105426904B (en) | Photo processing method, device and equipment | |
CN106997356A (en) | The sorting technique and device of picture | |
CN106598445A (en) | Method and device for outputting communication message | |
JP5278912B2 (en) | COMMUNICATION DEVICE, COMMUNICATION METHOD, AND PROGRAM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||