WO2019119450A1 - Control method for a terminal device, terminal device, and computer-readable medium - Google Patents

Control method for a terminal device, terminal device, and computer-readable medium

Info

Publication number
WO2019119450A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal device
gesture
stereo
stereoscopic
state
Prior art date
Application number
PCT/CN2017/118117
Other languages
English (en)
French (fr)
Inventor
郭到鑫
高华江
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN201780090095.2A
Priority to PCT/CN2017/118117
Publication of WO2019119450A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor

Definitions

  • The present invention relates to the field of electronic technologies, and in particular to a terminal device control method, a terminal device, and a computer-readable medium.
  • Terminal devices such as mobile phones have been widely used.
  • Terminal devices offer more and more functions, such as a camera function, a flashlight function, a screen capture function, and a video recording function.
  • The usual way to use a function of a terminal device is to activate it through the corresponding application icon. People often need to light up the screen, unlock it, and finally find and tap the corresponding icon. For example, to use the camera, a user must perform the following operations in sequence: light up the screen, unlock the screen, find the camera application, and start it. This method is complicated to operate and time-consuming.
  • A user can also trigger a function by simultaneously pressing two buttons on the terminal device. For example, pressing and holding the volume-down button and the power button together for a few seconds completes a screen capture. Because two buttons must be pressed at the same time, the user often needs several attempts to complete the operation. This approach therefore has a higher failure rate and takes longer.
  • Both of the above approaches share the drawback that the operation is complicated and time-consuming.
  • The embodiments of the present invention provide a terminal device control method, a terminal device, and a computer-readable medium, which are simple to operate, highly accurate, and able to meet users' personalized requirements.
  • An embodiment of the present invention provides a method for controlling a terminal device, where the method includes: detecting a stereoscopic gesture, the stereoscopic gesture being related to a motion trajectory of the terminal device in three-dimensional space; detecting a state of the terminal device; and controlling the terminal device to perform a corresponding operation according to the detected stereoscopic gesture and state.
  • A stereoscopic gesture is an operation in which a user holding the terminal device moves the device or changes its posture.
  • The terminal device can detect a stereoscopic gesture using a gravity sensor, a displacement sensor, a gyroscope, an attitude sensor, or the like.
  • The terminal device may be configured with a correspondence between stereoscopic gestures and operations to be performed by the terminal device; the operation the terminal device needs to perform is then determined according to this correspondence and the detected stereoscopic gesture.
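The correspondence described above can be sketched as a simple lookup table. This is an illustrative Python sketch, not the patent's implementation; all gesture and operation names are invented:

```python
# Hypothetical gesture-to-operation correspondence table.
# Gesture names and operation identifiers are illustrative only.
GESTURE_ACTIONS = {
    "draw_O": "launch_camera",
    "draw_C": "toggle_flashlight",
    "shake": "take_screenshot",
}

def operation_for_gesture(gesture):
    """Return the operation configured for a detected gesture, or None."""
    return GESTURE_ACTIONS.get(gesture)
```

An unrecognized gesture simply yields no operation, matching the text's idea that only configured gestures trigger behavior.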
  • The prompting interface may include a confirmation option and a rejection option; the terminal device performs the operation after detecting a tap on the confirmation option, and refuses to perform the operation after detecting a tap on the rejection option.
  • The terminal device determines the operation to be performed according to the detected stereoscopic gesture and the state of the terminal device; control operations can thus be implemented quickly, and the operation efficiency is high.
  • The state of the terminal device includes a first application scenario or a second application scenario.
  • The controlling the terminal device to perform a corresponding operation according to the detected stereoscopic gesture and state of the terminal device includes: controlling the terminal device to perform a first operation according to the detected stereoscopic gesture and the first application scenario; or controlling the terminal device to perform a second operation according to the detected stereoscopic gesture and the second application scenario.
  • The first operation is different from the second operation.
  • The state of the terminal device may be an operating state of the device, such as a screen-off state, a bright-screen state, a video playing state, a game running state, or a music playing state; it may be a state of the environment in which the device is located, such as the ambient illumination being below a certain level or the ambient noise being below a certain level; or it may be the time period in which the current time falls.
  • The video playing state is the state in which a video is being played; the game running state is the state in which a game is running; and the music playing state is the state in which music is being played.
  • The first application scenario and the second application scenario are different application scenarios.
  • For example, the first application scenario is a screen-off state and the second application scenario is a bright-screen state; or the first application scenario is one in which the ambient illumination is below a certain level, and the second application scenario is one in which the ambient illumination is not below that level. It can be understood that the same stereoscopic gesture causes the terminal device to perform different operations in different application scenarios.
  • The terminal device determines the operation corresponding to the detected stereoscopic gesture according to the state it is in, which is simple to implement.
  • The state of the terminal device includes a first application scenario or a second application scenario.
  • The controlling the terminal device to perform a corresponding operation according to the detected stereoscopic gesture and state of the terminal device includes: controlling the terminal device to perform a first operation according to the detected stereoscopic gesture and the first application scenario; or controlling the terminal device to perform a second operation according to the detected stereoscopic gesture and the second application scenario.
  • The first operation is the same as the second operation; that is, the first application scenario and the second application scenario correspond to the same operation.
  • The terminal device may perform the same operation after detecting the same stereoscopic gesture in different scenarios. For example, in some application scenarios, including a video call, playing a mobile game, and watching a video, the terminal device captures the screen after detecting the gesture of shaking the phone left and right. That is, the terminal device can perform a screen capture in multiple application scenarios: the user completes the stereoscopic gesture in two or more application scenarios, and the terminal device performs the same operation.
  • By performing the same operation after detecting the stereoscopic gesture in multiple application scenarios, the terminal device meets the user's need to trigger the operation corresponding to the gesture in different scenarios, improving the user experience.
  • Before the controlling the terminal device to perform a corresponding operation according to the detected stereoscopic gesture and state of the terminal device, the method further includes: the terminal device detecting and saving a correspondence between a stereoscopic gesture set by the user and an operation to be performed by the terminal device.
  • The user can thus set the correspondence between stereoscopic gestures and operations; the setting is simple and can meet the user's personalized requirements.
  • The state of the terminal device includes being in a first time period or a second time period.
  • The controlling the terminal device to perform a corresponding operation includes: when the terminal device is in the first time period, controlling the terminal device to perform a first operation; or when the terminal device is in the second time period, controlling the terminal device to perform a second operation.
  • The first operation is different from the second operation.
  • The first time period and the second time period may be non-overlapping. For example, the first time period is 8:00 to 10:00 and the second time period is 11:00 to 18:00.
  • A stereoscopic gesture may correspond to two or more operations, and the terminal device may perform different operations after detecting the gesture in different time periods.
  • The terminal device determines the operation to perform according to the time period in which the stereoscopic gesture occurs, so several different functions can be implemented with one gesture, and the operation is simple.
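The time-period rule above amounts to a small dispatch over non-overlapping intervals. A minimal sketch, using the example periods 8:00 to 10:00 and 11:00 to 18:00 from the text; the operation names are placeholders:

```python
from datetime import time

# Non-overlapping time periods, each bound to its own operation,
# mirroring the 8:00-10:00 / 11:00-18:00 example in the text.
PERIODS = [
    (time(8, 0), time(10, 0), "first_operation"),
    (time(11, 0), time(18, 0), "second_operation"),
]

def operation_for(now):
    """Return the operation bound to the period containing `now`, or None."""
    for start, end, op in PERIODS:
        if start <= now < end:
            return op
    return None
```

Outside any configured period the gesture maps to no operation, which is one reasonable reading of the text; the patent does not say what happens between periods.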
  • The controlling the terminal device to perform a corresponding operation according to the detected stereoscopic gesture and state of the terminal device includes: obtaining a target character corresponding to the motion trajectory of the terminal device, obtaining an operation corresponding to the target character according to the state of the terminal device, and controlling the terminal device to perform that operation.
  • The target character may be any of various characters, such as "C", "O", "L", or "+".
  • The operation corresponding to the target character may be: capturing the interface displayed by the terminal device; adjusting the screen brightness of the terminal device; adjusting the volume of the terminal device; causing the terminal device to play the song in the music playlist whose title begins with the target character, or a song by an artist whose name begins with the target character; or starting or closing a target application on the terminal device. Other operations may also be performed, which are not limited in the embodiments of the present invention.
  • The terminal device determines the operation to perform according to its own motion trajectory; the operation is simple, the accuracy is high, and multiple operations can be performed quickly.
  • The stereoscopic gesture is an action of shaking the terminal device with an amplitude exceeding a first threshold and a frequency exceeding a second threshold, or an action of flipping the terminal device. The controlling the terminal device to perform a corresponding operation includes capturing the interface displayed by the terminal device.
  • Because the shaking gesture is constrained by the first threshold and the second threshold, shaking motions triggered by mistake in everyday use can be filtered out.
  • The action of flipping the terminal device may be an action of rotating the terminal device through more than a target angle.
  • A function can thus be invoked quickly by flipping or shaking the terminal device, and the operation is simple.
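One way to apply the two thresholds is to estimate the shake's amplitude and frequency from a window of accelerometer samples and require both to be exceeded. The patent does not specify how these quantities are estimated; peak deviation and zero-crossing counting below are assumptions for illustration:

```python
def is_shake(samples, rate_hz, amp_threshold, freq_threshold_hz):
    """Classify a shake gesture: both the motion's amplitude and its
    frequency must exceed their thresholds, so gentle or slow accidental
    movements are rejected.

    `samples` is a window of 1-D accelerometer readings (gravity removed),
    sampled at `rate_hz` samples per second.
    """
    mean = sum(samples) / len(samples)
    centered = [s - mean for s in samples]
    # Amplitude estimate: peak deviation from the mean.
    amplitude = max(abs(s) for s in centered)
    # Frequency estimate: sign changes, two per oscillation cycle.
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    duration_s = len(samples) / rate_hz
    freq_hz = crossings / (2 * duration_s)
    return amplitude > amp_threshold and freq_hz > freq_threshold_hz
```

Both conditions are ANDed, matching the text: a large but slow tilt, or a fast but tiny vibration, does not trigger the gesture.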
  • The controlling the terminal device to perform a corresponding operation according to the detected stereoscopic gesture and state of the terminal device includes:
  • when the terminal device detects the first stereoscopic gesture and is in a bright-screen state, controlling the terminal device to perform a screen capture operation;
  • when the terminal device detects the second stereoscopic gesture and is in a music playing state, controlling the terminal device to perform a volume adjustment operation;
  • when the terminal device detects the third stereoscopic gesture and is in a video playing state, controlling the terminal device to perform a brightness adjustment operation;
  • when the terminal device detects the fourth stereoscopic gesture and the ambient illuminance is less than a first illuminance, controlling the terminal device to activate the flashlight function; and
  • when the terminal device detects the fifth stereoscopic gesture and the ambient illuminance is greater than a second illuminance, controlling the terminal device to activate the photographing function.
  • The terminal device determines the operation to perform according to the detected stereoscopic gesture and its state; control operations can be implemented quickly, and the operation efficiency is high.
  • For example, the first operation is to start a first application and the second operation is to start a second application, where the first state is a bright-screen state and the second state is a screen-off state; or the first operation is a screen capture operation and the second operation is a brightness adjustment operation, where the first state is a displayed game interface and the second state is a displayed video interface.
  • When the terminal device is in different states, the same stereoscopic gesture causes it to perform different operations; different functions can thus be implemented with one gesture, improving operation efficiency.
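Combining the gesture with the device state, as described above, can be modeled as a lookup keyed on (gesture, state) pairs, so the same gesture yields different operations in different states. An illustrative sketch with invented names:

```python
# Hypothetical (gesture, device-state) dispatch table: one gesture,
# different operations depending on the state the device is in.
RULES = {
    ("gesture_1", "bright_screen"): "screenshot",
    ("gesture_1", "game_interface"): "adjust_brightness",
    ("gesture_2", "music_playing"): "adjust_volume",
}

def dispatch(gesture, state):
    """Return the operation for this gesture in this state, or None."""
    return RULES.get((gesture, state))
```

A (gesture, state) pair with no entry triggers nothing, which also models the false-trigger filtering the text mentions: a gesture detected in an irrelevant state is ignored.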
  • Before the controlling the terminal device to perform a corresponding operation according to the detected stereoscopic gesture and state of the terminal device, the method further includes:
  • collecting training data, where the training data is N pieces of motion data corresponding to N reference stereoscopic gestures, and the N reference stereoscopic gestures all correspond to the stereoscopic gesture;
  • training on the training data with a neural network algorithm to obtain a recognition model corresponding to the stereoscopic gesture.
  • The detecting a stereoscopic gesture includes: determining the stereoscopic gesture according to the recognition model.
  • The terminal device may instead train on the training data with a deep learning algorithm, a machine learning algorithm, or the like to obtain the recognition model corresponding to the stereoscopic gesture.
  • The terminal device collects the training data and determines the stereoscopic gesture and recognition model corresponding to it; on the one hand, the recognition model can be established quickly, and on the other hand, the user can quickly set the stereoscopic gesture and its corresponding operation, and the setting is simple.
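The patent trains the recognition model with a neural network (or other machine learning) algorithm on N reference recordings of the gesture. As a dependency-free stand-in for that training step, the sketch below fits a nearest-centroid classifier over fixed-length feature vectors; the feature extraction, labels, and data are assumptions, not the patent's method:

```python
def fit_centroids(training_data):
    """Fit one centroid per gesture label.

    training_data: {gesture_label: [feature_vector, ...]}, where each
    feature vector is a fixed-length summary of one motion recording.
    """
    centroids = {}
    for label, vectors in training_data.items():
        dim = len(vectors[0])
        centroids[label] = [
            sum(v[i] for v in vectors) / len(vectors) for i in range(dim)
        ]
    return centroids

def recognize(centroids, vector):
    """Return the label whose centroid is closest to the new recording."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], vector))
```

A real implementation following the text would replace the centroid fit with neural-network training, but the collect / fit / recognize structure is the same.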
  • the method further includes:
  • The motion data corresponding to the stereoscopic gesture can be combined with the existing training data to obtain new training data.
  • The terminal device trains on the new training data to obtain a new, that is, updated, recognition model.
  • By updating the recognition model with the motion data corresponding to the stereoscopic gesture, the model is optimized and the probability that it correctly recognizes the gesture improves.
  • An embodiment of the present invention provides a terminal device, including:
  • a first detecting unit configured to detect a stereoscopic gesture, where the stereoscopic gesture is related to a motion trajectory of the terminal device in three-dimensional space;
  • a second detecting unit configured to detect a state of the terminal device; and
  • a control unit configured to control the terminal device to perform a corresponding operation according to the detected stereoscopic gesture and state of the terminal device.
  • The terminal device determines the operation to perform according to the detected stereoscopic gesture and its state; control operations can be implemented quickly, and the operation efficiency is high.
  • the status of the terminal device includes a first application scenario or a second application scenario
  • The control unit is specifically configured to: control the terminal device to perform a first operation according to the detected stereoscopic gesture and the first application scenario; or control the terminal device to perform a second operation according to the detected stereoscopic gesture and the second application scenario; the first operation is different from the second operation.
  • The terminal device determines the operation corresponding to the detected stereoscopic gesture according to the state it is in; this avoids falsely triggered gestures and is simple to implement.
  • the status of the terminal device includes a first application scenario or a second application scenario
  • The control unit is specifically configured to: control the terminal device to perform a first operation according to the detected stereoscopic gesture and the first application scenario; or control the terminal device to perform a second operation according to the detected stereoscopic gesture and the second application scenario; the first operation is the same as the second operation.
  • After detecting the stereoscopic gesture in multiple application scenarios, the terminal device performs the same operation, which meets the user's need to trigger the corresponding operation in different scenarios and improves the user experience.
  • the first detecting unit is further configured to detect a stereoscopic gesture set by a user; the terminal device further includes:
  • a storage unit configured to save a correspondence between a stereoscopic gesture set by the user and an operation performed by the terminal device.
  • the user can set the correspondence between the stereo gesture and the operation performed by the terminal device, and the operation is simple, and can meet the personalized requirement of the user.
  • The state of the terminal device includes being in a first time period or a second time period.
  • The control unit is specifically configured to: when the terminal device is in the first time period, control the terminal device to perform a first operation; or when the terminal device is in the second time period, control the terminal device to perform a second operation; the first operation is different from the second operation.
  • The terminal device determines the operation to perform according to the time period in which the stereoscopic gesture occurs, so several different functions can be implemented with one gesture, and the operation is simple.
  • the first detecting unit is specifically configured to obtain a target character corresponding to a motion track of the terminal device, and obtain an operation corresponding to the target character according to a state of the terminal device;
  • the control unit is specifically configured to control the terminal device to perform an operation corresponding to the target character.
  • the terminal device determines the operation to be performed according to the motion trajectory of the terminal device, and the operation is simple and the accuracy is high; and the multiple operations can be performed quickly.
  • The stereoscopic gesture is an action of shaking the terminal device with an amplitude exceeding a first threshold and a frequency exceeding a second threshold, or an action of flipping the terminal device.
  • The control unit is specifically configured to capture the interface displayed by the terminal device.
  • A function can thus be invoked quickly by flipping or shaking the terminal device, and the operation is simple.
  • The control unit is specifically configured to: control the terminal device to perform a screen capture operation when the terminal device detects the first stereoscopic gesture and is in a bright-screen state;
  • control the terminal device to perform a volume adjustment operation when the terminal device detects the second stereoscopic gesture and is in a music playing state;
  • control the terminal device to perform a brightness adjustment operation when the terminal device detects the third stereoscopic gesture and is in a video playing state;
  • control the terminal device to activate the flashlight function when the terminal device detects the fourth stereoscopic gesture and the ambient illuminance is less than a first illuminance; and
  • control the terminal device to activate the photographing function when the terminal device detects the fifth stereoscopic gesture and the ambient illuminance is greater than a second illuminance.
  • The terminal device determines the operation to perform according to the detected stereoscopic gesture and its state; control operations can be implemented quickly, and the operation efficiency is high.
  • For example, the first operation is to start a first application and the second operation is to start a second application, where the first application scenario is a bright-screen state and the second application scenario is a screen-off state;
  • the first operation is a screen capture operation
  • the second operation is a brightness adjustment operation
  • the first application scenario is a displayed game interface;
  • the second application scenario is a displayed video interface.
  • When the terminal device is in different states, the same stereoscopic gesture causes it to perform different operations; different functions can thus be implemented with one gesture, improving operation efficiency.
  • the terminal device further includes:
  • a receiving unit configured to receive a stereo gesture collection instruction
  • a collecting unit configured to collect training data, where the training data is N pieces of motion data corresponding to N reference stereoscopic gestures, and the N reference stereoscopic gestures all correspond to the stereoscopic gesture;
  • the receiving unit is further configured to receive a setting instruction, and set an operation corresponding to the stereo gesture according to the setting instruction;
  • the first detecting unit is specifically configured to determine the stereo gesture according to the recognition model.
  • The terminal device collects the training data and determines the stereoscopic gesture and recognition model corresponding to it; on the one hand, the recognition model can be established quickly, and on the other hand, the user can quickly set the stereoscopic gesture and its corresponding operation, and the setting is simple.
  • the terminal device further includes:
  • an updating unit configured to update the recognition model by using motion data corresponding to the stereo gesture.
  • By updating the recognition model with the motion data corresponding to the stereoscopic gesture, the model is optimized and the probability that it correctly recognizes the gesture improves.
  • An embodiment of the present invention provides another terminal device, including a processor and a memory connected to each other, where the memory is used to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the method of the first aspect above.
  • An embodiment of the present invention provides a computer-readable storage medium storing a computer program, where the computer program includes program instructions that, when executed by a processor, cause the processor to perform the method of the first aspect above.
  • FIG. 1 is a schematic flowchart of a method for controlling a terminal device according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a "draw O" stereoscopic gesture according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a "draw C" stereoscopic gesture according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a gesture of shaking the terminal device according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a stereoscopic gesture setting interface according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of a stereoscopic gesture adding interface according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of human-computer interaction according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of setting a "draw C" stereoscopic gesture according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of setting a "draw C" stereoscopic gesture according to another embodiment of the present invention.
  • FIG. 10 is a schematic diagram of setting operations corresponding to a "draw C" stereoscopic gesture in different scenarios according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of setting operations corresponding to a "draw C" stereoscopic gesture in different time periods according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram of a recognition result interface according to an embodiment of the present invention.
  • FIG. 13 is a schematic flowchart of a screen capture method according to an embodiment of the present invention.
  • FIG. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present disclosure.
  • FIG. 15 is a schematic structural diagram of a terminal device according to another embodiment of the present invention.
  • The terminal device in the present application may be a mobile phone, a tablet computer, a wearable device, a personal digital assistant, or the like.
  • The terminal device can detect acceleration, angular velocity, movement direction, and the like with its sensors to obtain motion state data, that is, motion data; from the obtained motion state data, the motion trajectory, posture changes, and so on of the terminal device can be determined.
  • The motion state data may be position data of the terminal device at different time points.
  • Table 1 lists motion data collected by the terminal device for the "draw O" stereoscopic gesture.
  • The three values in each row of Table 1 indicate the spatial position of the terminal device at one time point, that is, a coordinate point in a rectangular spatial coordinate system. It can be understood that different stereoscopic gestures produce different motion data, and the various preset or user-set stereoscopic gestures can be identified by analyzing the detected motion data.
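The row format described for Table 1 (one spatial coordinate per time point) can be represented directly, and a motion trajectory derived from it as the displacement between successive samples. The values below are invented for illustration, since the table itself is not reproduced here:

```python
# Hypothetical position samples in the style described for Table 1:
# each row is the device's (x, y, z) position at one time point.
samples = [
    (0, 0, 0),
    (1, 2, 3),
    (1, 1, 1),
]

def displacements(points):
    """Per-step displacement vectors between successive position samples,
    from which the device's motion trajectory can be traced."""
    return [
        tuple(b - a for a, b in zip(p, q))
        for p, q in zip(points, points[1:])
    ]
```

Real recordings would be much denser; a later passage in the text cites 100 sets of motion data per second.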
  • The main inventive principle of the present application may include: the user presets the stereoscopic gestures he or she needs; the terminal device establishes a recognition model for each stereoscopic gesture from the collected motion data; and when a stereoscopic gesture is detected, the terminal device performs the operation corresponding to it.
  • The user can set dedicated stereoscopic gestures according to his or her own needs, and set the operation corresponding to each gesture, meeting the needs of different users. That is, the user can define some stereoscopic gestures and quickly trigger the corresponding control operations by performing them; the operation is simple.
  • Each user's operating habits differ: when different users hold the terminal device and perform the same gesture, the motion data collected by the device has low similarity, whereas when one user performs the same gesture repeatedly, the collected motion data has high similarity. Therefore, a recognition model can be established from the collected motion data, and stereoscopic gestures can be recognized accurately with it.
  • FIG. 1 is a schematic flowchart of a method for controlling a terminal device according to an embodiment of the present invention. As shown in FIG. 1, the method includes:
  • Detect a stereoscopic gesture, where the stereoscopic gesture is related to a motion trajectory of the terminal device in three-dimensional space.
  • A stereoscopic gesture is an operation in which a user holding the terminal device moves the device or changes its posture.
  • As shown in FIG. 2, the user holding the terminal device performs an action whose motion trajectory is an "O", that is, the "draw O" stereoscopic gesture.
  • As shown in FIG. 3, the user holding the terminal device performs an action whose motion trajectory is a "C", that is, the "draw C" stereoscopic gesture.
  • As shown in FIG. 4, the user shakes the terminal device, that is, the gesture of shaking the terminal device.
  • The terminal device can detect acceleration, angular velocity, movement direction, and the like with a sensor such as a gravity sensor or a gyroscope to obtain motion state data, and the stereoscopic gesture can be determined from the obtained motion state data.
  • The terminal device can detect the various stereoscopic gestures preset on it or set by the user, for example the "draw O" gesture, the "draw C" gesture, and the shake gesture shown in FIG. 2 to FIG. 4.
  • Table 1 shows the motion data of the "draw O" gesture in three-dimensional space; from the data in Table 1, the terminal device can recognize the "draw O" gesture.
  • The terminal device may be preset with recognition models corresponding to stereoscopic gestures, and a stereoscopic gesture may be identified using its recognition model.
  • For example, the terminal device is preset with a first stereoscopic gesture and its recognition model; the terminal device feeds the collected motion data corresponding to the first stereoscopic gesture into the model to obtain the first stereoscopic gesture.
  • As another example, the terminal device is preset with five stereoscopic gestures and their recognition models; the terminal device feeds the collected motion data into the models to obtain the stereoscopic gesture corresponding to the motion data.
  • The user can set the stereoscopic gestures he or she needs and the operations corresponding to them; the specific setting process is described in detail in the following embodiments.
  • In the embodiment of the present invention, the process by which the terminal device detects a stereoscopic gesture may include:
  • The terminal device collects motion data through a target sensor, where the target sensor may be at least one of a gyroscope, a gravity sensor, and the like.
  • For example, the terminal device uses the gyroscope to collect 100 sets of motion data per second; as in Table 1, each row represents one set of data.
  • The terminal device transmits the motion data to the processor, and the processor identifies the tag corresponding to the motion data using the recognition model.
  • The recognition model is one established by the terminal device from the collected training data; its establishment is detailed in a subsequent embodiment.
  • Each preset or user-set stereoscopic gesture corresponds to a different tag.
  • The terminal device may also be preset with data for disordered, non-gesture motions, all of which correspond to one shared tag.
  • the terminal device can quickly and accurately detect the stereo gesture.
  • the state of the terminal device may be an operating state of the terminal device, such as a screen-off state, a bright screen state, a video playing state, a game running state, a music playing state, etc.; it may also be a state of the environment in which the terminal device is located, such as the illumination of the surrounding environment being lower than a certain illumination, or the noise of the surrounding environment being lower than a certain level; it may also be the time period in which the current time falls.
  • the video playback state is the state in which the video is played.
  • the game running state is the state in which the game is running.
  • the music playing state is the state in which music is played.
  • the detecting the status of the terminal device may be detecting an application scenario of the terminal device, such as a game running scenario, a video playing scenario, and the like.
  • the terminal device may be preset with a correspondence between stereoscopic gestures and the operations that the terminal device needs to perform for those stereoscopic gestures; the operation corresponding to a stereo gesture can be obtained according to this correspondence.
  • the terminal device is preset with a target correspondence table, where the target correspondence table includes first to sixth operations corresponding to the first to sixth stereoscopic gestures; after determining the second stereo gesture according to the collected motion data, the terminal device performs the second operation.
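The target correspondence table can be sketched as a simple lookup (the gesture and operation names below are placeholders, not part of the embodiment):

```python
# Illustrative sketch of the target correspondence table described above.
target_correspondence = {
    "first_stereo_gesture":  "first_operation",
    "second_stereo_gesture": "second_operation",
    "third_stereo_gesture":  "third_operation",
    "fourth_stereo_gesture": "fourth_operation",
    "fifth_stereo_gesture":  "fifth_operation",
    "sixth_stereo_gesture":  "sixth_operation",
}

def operation_for(gesture):
    """Look up the operation the terminal device should perform,
    or None when the gesture has no registered operation."""
    return target_correspondence.get(gesture)
```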
  • the terminal device may adjust the operation corresponding to the stereoscopic gesture, and may also delete the stereoscopic gesture. For example, a certain stereo gesture corresponds to the first operation, and the user can adjust the operation corresponding to the stereo gesture to the second operation.
  • the above terminal device detects the same stereoscopic gesture in different states, and the operations to be performed may be different.
  • the status of the terminal device includes a first application scenario or a second application scenario
  • the controlling the terminal device to perform the corresponding operation according to the detected stereoscopic gesture and state of the terminal device includes:
  • the first operation described above is different from the second operation described above.
  • the first application scenario and the second application scenario are different application scenarios.
  • the first application scenario is a screen-off state
  • the second application scenario is a bright screen state.
  • the operation performed by the terminal device for the same stereoscopic gesture differs across different application scenarios.
  • in an application scenario in which a game is run, the terminal device detects the stereoscopic gesture of shaking the terminal device and captures the screen; in an application scenario in which a video is played, the terminal device detects the stereoscopic gesture of shaking the terminal device and adjusts the brightness of the screen.
  • the terminal device detects the stereoscopic gesture of shaking the terminal device left and right, and increases the brightness of the screen.
  • the terminal device detects the stereoscopic gesture of shaking the terminal device up and down, and reduces the brightness of the screen. For example, in the screen-off state, the terminal device detects a stereo gesture whose motion track is "O" and starts the flashlight application; in the bright screen state, the terminal device detects a stereo gesture whose motion track is "O" and launches the camera application.
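A minimal sketch of this state-dependent dispatch, assuming hypothetical state and operation names:

```python
# Assumed mapping from (motion track, device state) to operation,
# matching the screen-off/bright-screen example above.
STATE_BINDINGS = {
    ("O", "screen_off"):    "start_flashlight",
    ("O", "bright_screen"): "start_camera",
}

def dispatch(track, state):
    """Return the operation bound to (motion track, device state),
    or None when nothing is registered for the pair."""
    return STATE_BINDINGS.get((track, state))
```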
  • the terminal device determines the operation corresponding to the detected stereoscopic gesture according to the state in which the terminal device is located, and is simple to implement.
  • the status of the terminal device includes a first application scenario or a second application scenario
  • the controlling the terminal device to perform the corresponding operation according to the detected stereoscopic gesture and state of the terminal device includes:
  • the first operation described above is the same as the second operation described above.
  • the terminal device may perform the same operation after detecting the same stereo gesture.
  • the terminal device detects a stereo gesture with the motion track being J
  • the memo application is started.
  • the terminal device detects the stereoscopic gesture of shaking the terminal device and captures the screen. It can be understood that when the user performs a certain stereo gesture in two or more application scenarios, the terminal device performs the same operation.
  • after detecting the same stereoscopic gesture in multiple application scenarios, the terminal device performs the same operation, which meets the user's need to trigger the operation corresponding to the stereoscopic gesture in different scenarios, thereby improving the user experience.
  • the status of the terminal device is in a first time period or a second time period
  • the controlling the terminal device to perform the corresponding operation according to the detected stereoscopic gesture and state of the terminal device;
  • the first operation described above is different from the second operation described above.
  • the first time period and the second time period may be time periods that do not overlap in time.
  • the first time period is 8:00 to 10:00
  • the second time period is a time period other than 8:00 to 10:00 every day.
  • the above stereo gestures can correspond to different operations in different time periods.
  • the stereoscopic gesture corresponds to the first operation in the first time period; and corresponds to the second operation in the second time period.
  • the user can set each time period corresponding to each stereo gesture and the operation corresponding to each time period.
  • the terminal device determines a required execution operation according to the time period in which the terminal device is located, and can implement a plurality of different functions through one stereo gesture, and the operation is simple.
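The time-period selection above might be sketched as follows, assuming the 8:00–10:00 example boundary and placeholder operation names:

```python
from datetime import time

# The 8:00-10:00 boundary comes from the example above;
# the operation names are placeholders.
FIRST_PERIOD = (time(8, 0), time(10, 0))

def operation_for_time(now):
    """Return the operation bound to the stereo gesture for the
    time period that `now` falls in."""
    start, end = FIRST_PERIOD
    if start <= now < end:
        return "first_operation"   # the operation set for 8:00-10:00
    return "second_operation"      # any time outside 8:00-10:00
```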
  • the terminal device determines an operation required to be performed by the terminal device according to the detected stereoscopic gesture and the state of the terminal device; the control operation can be quickly implemented, and the operation efficiency is high.
  • before the controlling of the terminal device to perform a corresponding operation according to the detected stereoscopic gesture and state of the terminal device, the method further includes:
  • the terminal device detects and saves a correspondence between a stereoscopic gesture set by the user and an operation performed by the terminal device.
  • the user can set the correspondence between the stereo gesture and the operation performed by the terminal device, and the operation is simple, and meets the personalized requirements of different users.
  • Before 101 in FIG. 1 is performed, the user can set the stereo gestures he or she requires.
  • the following provides a specific example of setting the stereo gesture, which may include:
  • the setting interface is the interface corresponding to the setting icon on the desktop of the terminal device.
  • the terminal device displays the setting interface.
  • the terminal device displays a stereo gesture setting interface.
  • the terminal device displays a stereo gesture setting interface
  • FIG. 5 exemplarily shows a schematic diagram of a stereo gesture setting interface.
  • the stereo gesture setting interface may include a stereo gesture switch control 501, an O stereo gesture switch control 502, and an add stereo gesture interface 503; the stereo gesture switch control 501 is configured to receive an operation, input by the user, of turning the stereo gesture function on or off.
  • the O stereo gesture switch control 502 is configured to receive an operation, input by the user, of turning the O stereo gesture on or off
  • the add stereo gesture interface 503 is configured to receive an operation, input by the user, of adding a stereo gesture.
  • the buttons in the stereo gesture switch control 501 are buttons that control the activation and deactivation of the stereo gesture function.
  • when the button in the stereo gesture switch control 501 is slid to the right, the stereo gesture function is activated; that is, after an enabled stereo gesture is detected, the operation corresponding to that stereo gesture is performed. Otherwise, the stereo gesture function is turned off; that is, no corresponding operation is performed even when a stereo gesture is detected.
  • the draw-O stereo gesture in FIG. 5 is a stereo gesture preset by the terminal device or set by the user.
  • the add stereo gesture interface 503 in FIG. 5 displays a stereo gesture adding interface after receiving a click operation from the user.
  • the stereo gesture setting interface displays the preset or set stereo gestures
  • FIG. 5 is only an example. The examples are merely illustrative of the embodiments of the invention and should not be construed as limiting.
  • the receiving of the stereoscopic gesture adding instruction may be detecting a click operation on the add stereo gesture interface in the stereoscopic gesture setting interface. As shown in FIG. 5, after the user clicks the add stereo gesture interface 503, the terminal device displays a stereo gesture adding interface.
  • FIG. 6 exemplarily shows a schematic diagram of a stereo gesture adding interface.
  • the dotted line in the figure represents the trajectory of the stereo gesture detected by the terminal device; 601 is a return interface, and after the user clicks the return interface, the stereo gesture setting interface is displayed; 602 is a cancel interface, and after the user clicks the cancel interface, the terminal device cancels the detected stereoscopic gesture; 603 is an input interface, and after the user clicks the input interface, the terminal device stores the detected stereoscopic gesture and the motion data corresponding to the stereo gesture.
  • the motion data corresponding to the acquired stereoscopic gesture may be motion data collected by a sensor such as a gyroscope.
  • the terminal device displays the stereoscopic gesture adding interface
  • the user performs the C stereo gesture shown in FIG. 3, and the terminal device displays the trajectory of the C stereo gesture; the dotted line in FIG. 6 indicates the trajectory of the draw-C stereo gesture. In a practical application, after the terminal device displays the stereoscopic gesture adding interface, the user performs a stereoscopic gesture while holding the terminal device, and the terminal device detects the stereoscopic gesture and displays its trajectory. After the user completes the stereo gesture: if the user clicks the cancel interface, the motion data corresponding to the detected stereo gesture is deleted, and the user's stereo gesture is re-detected; if the user clicks the input interface, the detected stereo gesture and the motion data corresponding to the stereo gesture are stored; if the user clicks the return interface, the stereo gesture setting interface is returned to.
  • the terminal device can train the stored motion data corresponding to each stereo gesture to obtain a recognition model of each stereo gesture, so as to accurately recognize the stereo gesture set by the user.
  • the detecting the motion data input command may be a click operation for detecting an input interface in the stereoscopic gesture adding interface.
  • 603 in FIG. 6 is an input interface, and the terminal device detects a click operation on the input interface, that is, an action data input command is detected.
  • the above training model may be a training model based on a neural network. In an actual application, after the terminal device displays the stereo gesture adding interface, the user holds the terminal device to perform a stereoscopic gesture and, after completing the stereo gesture, clicks the input interface; the terminal device inputs the motion data collected this time into the training model, and uses the training model to train the input motion data to obtain a recognition model corresponding to the stereo gesture.
  • the timer in the terminal device starts timing and motion data is collected; if the user does not complete a stereo gesture before the duration timed by the timer reaches the time threshold, the terminal device discards the collected motion data and displays the stereo gesture setting interface.
  • the above time threshold may be 2 seconds, 3 seconds, 5 seconds, or the like.
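The timeout rule above can be sketched as follows (the 3-second value is one of the thresholds listed; the function and parameter names are assumptions):

```python
# Sketch of the timeout rule: if the gesture is not completed before the
# timer reaches the threshold, the collected data is discarded.
TIME_THRESHOLD_S = 3.0  # one of the example values: 2, 3 or 5 seconds

def finish_capture(samples, elapsed_s, completed):
    """Return the captured motion data, or None when the capture
    timed out before the user completed the stereo gesture."""
    if not completed and elapsed_s >= TIME_THRESHOLD_S:
        return None          # discard and return to the setting interface
    return samples
```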
  • the terminal device restarts collecting motion data, and inputs the collected motion data into the training model after receiving a motion data input command. That is, the user can input training data multiple times.
  • the terminal device collects corresponding motion data, and inputs the collected motion data into the training model.
  • the receiving return command may be a click operation on the return interface.
  • FIG. 7 exemplarily shows a schematic diagram of human-computer interaction. As shown in FIG. 7 , after the user inputs a stereo gesture whose track is C and clicks the return interface 701 , the stereo gesture setting interface displayed by the terminal device includes a C stereo gesture switch control 702 , that is, a newly added stereo gesture switch control. In this way, users can set their own stereoscopic gestures to meet their individual needs.
  • the above-mentioned target stereoscopic gesture is a preset or set stereoscopic gesture.
  • the setting operation of receiving the target stereoscopic gesture may be to receive a click operation for the target stereoscopic gesture.
  • FIG. 8 exemplarily shows a schematic diagram of setting the C stereo gesture. As shown in FIG. 8 , each stereoscopic gesture included in the stereo gesture setting interface is an entry interface; clicking a stereo gesture enters the setting interface corresponding to that stereo gesture. After the user clicks the draw-C stereo gesture, the terminal device displays the setting interface of the draw-C stereo gesture, which includes a C stereo gesture switch control.
  • 802 is a name setting interface, through which the user can set the name corresponding to the C stereo gesture, such as "draw C - screenshot", "draw C - camera", etc.
  • 803 is an operation setting interface, through which the user can set the operation corresponding to the C stereo gesture, such as a screen capture operation, starting a target application, etc.
  • FIG. 9 exemplarily shows another schematic diagram of setting a C stereoscopic gesture.
  • the terminal device displays the setting interface of the draw-C stereo gesture;
  • 901 is a name setting interface, through which the user can set the name corresponding to the C stereo gesture;
  • 902 is an operation setting interface, through which the user can set the operation corresponding to the C stereo gesture, such as a screen capture operation, starting a target application, etc.;
  • 903 is an optimization interface; after the user clicks this interface and performs the C stereo gesture, the terminal device stores the detected motion data.
  • FIG. 10 exemplarily shows a schematic diagram of an operation of setting a C stereoscopic gesture corresponding to different scenes.
  • when the user clicks the operation adding interface 1001, a first state setting column 1002, a second state setting column 1003, a first operation setting interface 1004, and a second operation setting interface 1005 are displayed in the stereo gesture setting interface displayed by the terminal device.
  • the two scenes corresponding to the C stereoscopic gesture, that is, the first scene and the second scene, can be set through the two state setting columns.
  • the two operations corresponding to the C stereo gesture can be set through the two operation setting interfaces.
  • the first scene corresponds to the operation set through the first operation setting interface
  • the second scene corresponds to the operation set through the second operation setting interface
  • when the user clicks the operation adding interface 1006, a further state setting column and its corresponding operation setting interface are added.
  • in the first scene, after the terminal device detects the C stereoscopic gesture, it performs the operation set through the first operation setting interface; in the second scene, after the terminal device detects the C stereoscopic gesture, it performs the operation set through the second operation setting interface. It can be understood that the user can set one stereo gesture to correspond to different operations in different scenarios.
  • FIG. 11 exemplarily shows a schematic diagram of an operation of setting a C stereoscopic gesture corresponding to different time periods.
  • a first state setting column 1102 , a second state setting column 1103 , a first operation setting interface 1104 , and a second operation setting interface 1105 are displayed in the stereo gesture setting interface displayed by the terminal device.
  • the two time periods corresponding to the C stereoscopic gesture, that is, the first time period and the second time period, can be set through the two state setting columns; the two operations corresponding to the C stereo gesture can be set through the two operation setting interfaces.
  • the first time period corresponds to the operation set through the first operation setting interface
  • the second time period corresponds to the operation set through the second operation setting interface
  • in the first time period, after the terminal device detects the C stereoscopic gesture, it performs the operation set through the first operation setting interface; in the second time period, after the terminal device detects the C stereoscopic gesture, it performs the operation set through the second operation setting interface. It can be understood that the user can set one stereo gesture to correspond to different operations in different time periods.
  • the user can set a stereoscopic gesture and a corresponding relationship between the stereoscopic gesture and the operation performed by the terminal device, and the operation is simple, and can meet the personalized requirement of the user.
  • controlling the terminal device to perform the corresponding operation according to the detected stereoscopic gesture and state of the terminal device includes:
  • the determining of the target character corresponding to the motion track of the terminal device may be: first determining the motion track of the detected stereo gesture according to the collected motion data, and then determining the target character corresponding to the motion track. Different motion trajectories correspond to different characters.
  • the motion track in FIG. 2 corresponds to the character "O”
  • the motion track in FIG. 3 corresponds to the character "C”.
  • the obtaining of the corresponding operation according to the target character and the state of the terminal device may be: when the terminal device is in the first state, obtaining a first operation corresponding to the target character; when the terminal device is in the second state, obtaining a second operation corresponding to the target character; the first operation is different from the second operation.
  • in the screen-off state, the terminal device detects a stereoscopic gesture drawing the letter "O" clockwise and activates the flashlight application; in the bright screen state, the terminal device detects a stereoscopic gesture drawing the letter "O" clockwise and launches the camera application. It can be understood that the stereo gesture whose motion track is "O" corresponds to the operation of starting the flashlight application in the screen-off state, and to the operation of starting the camera application in the bright screen state.
  • when detecting a stereo gesture whose motion track is "U", the terminal device turns up the volume; when detecting a stereo gesture whose motion track is "D", the terminal device turns down the volume.
  • the terminal device detects a stereo gesture whose motion track is "Z", and switches the song being played to a song whose title begins with "Z" or a song by an artist whose name begins with "Z".
  • the terminal device detects a stereo gesture with a motion track of "L”, and initiates a social application such as WeChat.
  • the terminal device displays an interface of a certain social application
  • the terminal device detects a stereo gesture with the motion track being “L”, and closes the social application.
  • the foregoing is only an example provided by the embodiment of the present invention.
  • the embodiment of the present invention does not limit the motion track of the stereo gesture and the operation corresponding to the stereo gesture.
  • the terminal device determines the operation to be performed according to the motion trajectory of the terminal device, and the operation is simple and the accuracy is high; and the multiple operations can be performed quickly.
  • the stereoscopic gesture is an action of shaking the terminal device with a shaking amplitude exceeding a first threshold and a shaking frequency exceeding a second threshold; or the stereo gesture is an action of flipping the terminal device.
  • the above stereo gesture is an action of shaking the terminal device.
  • the first threshold may be 0.5 cm, 1 cm, 2 cm, 3 cm, or the like.
  • the second threshold may be 5 times per second, 3 times per second, once every 3 seconds, and the like.
  • the stereoscopic gesture is constrained by the first threshold and the second threshold, so that false triggering of the stereo gesture of shaking the terminal device can be prevented. For example, when the user, holding the terminal device, shakes it left and right, the terminal device increases the brightness of the screen; when the user shakes it up and down, the terminal device turns down the brightness of the screen.
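An illustrative shake test under the two thresholds named above (the input format, the sign-reversal heuristic, and the concrete threshold values are assumptions for the example, drawn from the listed options):

```python
# Amplitude must exceed a first threshold (e.g. 1 cm) and frequency a
# second threshold (e.g. 3 direction reversals per second).
FIRST_THRESHOLD_CM = 1.0
SECOND_THRESHOLD_HZ = 3.0

def is_shake(displacements_cm, duration_s):
    """True when the displacement samples look like a deliberate shake:
    large enough amplitude AND frequent enough direction reversals."""
    amplitude = max(abs(d) for d in displacements_cm)
    # Count sign reversals between consecutive samples as a rough
    # proxy for shaking frequency.
    reversals = sum(
        1 for a, b in zip(displacements_cm, displacements_cm[1:])
        if a * b < 0
    )
    frequency = reversals / duration_s
    return amplitude > FIRST_THRESHOLD_CM and frequency > SECOND_THRESHOLD_HZ
```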
  • the target application may be a camera application, a payment application, a social application, a short message application, a calendar application, a mailbox application, a reading application, etc., which are not limited in the embodiment of the present invention.
  • the user starts the mailbox application on the terminal device by shaking the terminal device; after shaking the terminal device again, the terminal device closes the mailbox application.
  • the above operation of flipping the terminal device may be an operation of flipping the terminal device by an angle exceeding a target angle.
  • the above target angles may be 30°, 45°, 60°, 90°, 180°, and the like.
  • for example, the angle of the flip is 90°.
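A minimal sketch of the flip test, assuming the orientation angle is already available from the sensor and using the 90° target angle from the example:

```python
TARGET_ANGLE_DEG = 90.0  # one of the listed target angles

def is_flip(start_angle_deg, end_angle_deg):
    """True when the device's orientation changed by at least the
    target angle between the start and end of the motion."""
    return abs(end_angle_deg - start_angle_deg) >= TARGET_ANGLE_DEG
```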
  • the user can take a screen shot by shaking the terminal device.
  • a user can launch an application by flipping the terminal device.
  • the terminal device detects the flipping action and closes the running video playing program.
  • the terminal device detects the flipping action and rejects the incoming call.
  • the above-described capturing of the interface displayed by the terminal device, adjusting the screen brightness of the terminal device, and starting the target application on the terminal device are only specific examples of the control operations corresponding to control commands; in the embodiment of the present invention, control commands can also implement other control operations.
  • a certain function can be quickly implemented by flipping or shaking the stereoscopic gesture of the terminal device, and the operation is simple.
  • the first operation is to start the first application
  • the second operation is to start the second application
  • the first application scenario is a bright screen state
  • the second application scenario is a screen-off state
  • the first operation is a screen capture operation
  • the second operation is a brightness adjustment operation
  • the first application scenario is a display game interface
  • the second application scenario is a display video interface.
  • the first application and the second application described above are different applications.
  • the first application described above may be a reading application, a mailbox application, a calendar application, or the like.
  • the second application described above may be a camera application, a photo album application, a map application, or the like.
  • the embodiment of the present invention does not limit the first application, the second application, the first application scenario, and the second application scenario.
  • one stereoscopic gesture may correspond to three or more operations, and the number of operations corresponding to one stereo gesture is not limited in the embodiment of the present invention.
  • the terminal device detects the first stereoscopic gesture in the screen-off state and activates the calendar application; it detects the first stereoscopic gesture in the bright screen state and starts the map application.
  • when the user uses the terminal device to play a game and shakes the terminal device, the terminal device captures the current game interface; when the user uses the terminal device to play a video and shakes the terminal device, the terminal device adjusts the brightness of the video interface.
  • when the terminal device is in different states, the operations performed by the terminal device for the same stereoscopic gesture performed by the user are different; different functions can be implemented through one stereo gesture, thereby improving operation efficiency.
  • before the controlling of the terminal device to perform a corresponding operation according to the detected stereoscopic gesture and state of the terminal device, the method further includes:
  • the training data is collected, where the training data is N pieces of motion data corresponding to N reference stereo gestures, and the N reference stereo gestures all correspond to the above stereo gesture;
  • the training data is trained by using a neural network algorithm to obtain a recognition model corresponding to the above stereo gesture
  • the above detecting stereo gestures includes:
  • the stereoscopic gesture is determined according to the above recognition model.
  • the training data is the motion data collected by the terminal device while the user, holding the terminal device, performs the above stereo gesture.
  • after inputting a stereo gesture acquisition instruction, the user can complete the stereo gesture while holding the terminal device; the terminal device collects the corresponding motion data and trains the collected motion data using a neural network algorithm to obtain a recognition model corresponding to the motion data; the terminal device can identify the stereo gesture by using the recognition model.
  • the receiving, by the terminal device, of the stereo gesture collection instruction may be detecting an operation of the user clicking the add stereo gesture interface, as shown in FIG. 5 and FIG. 6.
  • the terminal device displays a gesture adding interface.
  • the terminal device collects N pieces of motion data corresponding to the reference stereoscopic gesture performed N times, and obtains the training data. For example, when the terminal device displays the gesture adding interface shown in FIG. 6, the user performs the draw-C gesture operation, and the terminal device collects the training data corresponding to the draw-C gesture operation.
  • the terminal device trains the training data to obtain a recognition model corresponding to the stereo gesture.
  • the training data is identified by using the recognition model corresponding to the stereoscopic gesture, and the recognition result is obtained.
  • the recognition result indicates the stereoscopic gesture corresponding to the training data; and the recognition result is displayed.
  • FIG. 12 exemplarily shows a schematic diagram of a recognition result interface, in which 1201 is a first interface, 1202 is a second interface, and 1203 is a recognition result.
  • the terminal device may perform training on the training data by using other algorithms such as a deep learning algorithm and a machine learning algorithm to obtain the foregoing recognition model.
  • the process by which the above terminal device trains the training data to obtain the recognition model corresponding to the stereoscopic gesture is as follows:
  • the above terminal device inputs training data into the training model
  • the above training model can adopt a three-layer neural network.
  • the number of nodes in the input layer can be 300, the number of hidden layer nodes can be 15, 20, 25, etc., and the number of nodes in the output layer is 3.
  • the training data includes at least 10 sets of data corresponding to the same stereo gesture, and each row in Table 1 represents a set of data.
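A forward pass for the described three-layer network might be sketched as follows (300 input nodes, 15 hidden nodes, 3 output nodes per the text; the random weights, activation choice, and label indices are illustrative assumptions — a real model would be trained on the collected motion data):

```python
import numpy as np

# Placeholder weights for a 300-15-3 network. In the embodiment these
# would be learned from the training data, not drawn at random.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(300, 15)), np.zeros(15)
W2, b2 = rng.normal(size=(15, 3)), np.zeros(3)

def predict_label(motion_vector):
    """Map one 300-element motion vector to a gesture label index 0-2."""
    hidden = np.tanh(motion_vector @ W1 + b1)   # hidden layer
    scores = hidden @ W2 + b2                   # output layer
    return int(np.argmax(scores))               # most likely tag

label = predict_label(rng.normal(size=300))
```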
  • the foregoing training model may use other neural networks, which are not limited in the embodiment of the present invention.
  • 70% of the training data is used for training, and the other 30% is used as verification data.
  • a cross-training method can be used: 70% of the training data is selected for training each time; the accuracy can be continuously improved through training until the accuracy of the recognition model reaches 90% or more.
  • if the training duration exceeds 1 minute, the training is restarted.
  • the recognition model with the highest recognition rate is saved, and the user is prompted either to continue inputting the stereo gesture or to discard the stereo gesture.
  • multiple recognition models can be trained, and the recognition model with the highest recognition rate is selected as the final recognition model.
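The 70/30 cross-training and best-model selection described above can be sketched as follows (the `train`/`evaluate` callables, the round cap, and all names are assumptions standing in for the real neural-network routines):

```python
import random

TARGET_ACCURACY = 0.90  # the 90% threshold from the text
MAX_ROUNDS = 20         # assumed cap so the loop always terminates

def cross_train(data, train, evaluate):
    """Repeatedly train on a random 70% split, validate on the other
    30%, and keep the model with the highest recognition rate."""
    best_model, best_acc = None, 0.0
    for _ in range(MAX_ROUNDS):
        random.shuffle(data)
        split = int(len(data) * 0.7)
        model = train(data[:split])          # 70% for training
        acc = evaluate(model, data[split:])  # 30% for verification
        if acc > best_acc:
            best_model, best_acc = model, acc
        if best_acc >= TARGET_ACCURACY:      # stop once 90% is reached
            break
    return best_model, best_acc
```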
  • the terminal device can quickly establish an identification model corresponding to the training data.
  • the operation corresponding to the stereoscopic gesture may be set in the manner of the foregoing specific example of setting a stereo gesture.
  • the terminal device collects the training data and determines the stereo gesture and the recognition model corresponding to the training data; on the one hand, the recognition model corresponding to the stereo gesture can be quickly established; on the other hand, the user can quickly set the stereo gesture and the operation corresponding to the stereo gesture, and the operation is simple.
  • the method further includes:
  • the recognition model is updated by the motion data corresponding to the stereo gesture described above.
  • the updating of the recognition model by using the motion data may be: inputting the motion data together with the existing training data into the training model for training to obtain a new recognition model, where the existing training data is the training data corresponding to the recognition model. It can be understood that the more training data there is, the higher the accuracy with which the recognition model recognizes the stereo gesture.
  • the above recognition model can be continuously optimized by using the above action data. That is to say, the probability that the recognition model correctly recognizes the stereo gesture will become higher and higher.
  • the recognition model can be further optimized by using the action data, and the implementation is simple.
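The merge-and-retrain update described above can be sketched as follows. This is a minimal illustration, not the patent's actual model: the per-gesture "model" here is just a mean feature vector, and all names and sample values are assumptions.

```python
# Minimal sketch of the model-update idea: the motion data of a newly
# recognized gesture is merged with the existing training data and the
# recognition model is rebuilt from the enlarged set. The per-gesture
# centroid model is an illustrative stand-in for the neural network.

def build_model(training_data):
    """training_data: {gesture_name: [feature_vector, ...]} -> centroid per gesture."""
    model = {}
    for gesture, samples in training_data.items():
        dims = len(samples[0])
        model[gesture] = [sum(s[d] for s in samples) / len(samples)
                          for d in range(dims)]
    return model

def update_model(training_data, gesture, new_sample):
    """Merge the motion data of a recognized gesture into the training
    set and retrain, yielding the updated recognition model."""
    training_data.setdefault(gesture, []).append(new_sample)
    return build_model(training_data)

data = {"draw_O": [[0.1, 0.2], [0.3, 0.2]]}
model = build_model(data)                      # centroid of the two samples
model = update_model(data, "draw_O", [0.2, 0.8])  # third sample shifts it
```

As the text notes, the more samples accumulate in `training_data`, the more representative the rebuilt model becomes.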
  • An embodiment of the present invention provides a screen capture method, as shown in FIG. 13, which may include:
  • detecting a stereo gesture, where the stereo gesture is related to the motion trajectory of the terminal device in stereo space;
  • the specific implementation is the same as step 101 in FIG. 1.
  • the stereo gesture here is a gesture of shaking the terminal device.
  • the specific implementation is the same as step 102 in FIG. 1.
  • the terminal device is in a bright-screen state.
  • control the terminal device to perform a screen capture operation according to the detected stereo gesture and state of the terminal device, and obtain a screenshot.
  • after detecting the stereo gesture of shaking the terminal device in the bright-screen state, the terminal device performs a screen capture operation, that is, captures the interface currently displayed on its screen.
  • the prompt information may be "Flip the terminal device to keep the screenshot".
  • determining whether the stereo gesture of flipping the terminal device is detected may mean determining whether it is detected within a preset duration.
  • the preset duration may be 3 seconds, 5 seconds, 10 seconds, or the like.
  • storing the screenshot may mean saving it to an album, a gallery, or the like.
  • the user can capture the interface displayed by the terminal device by shaking it, obtaining a screenshot; if the terminal device is flipped within the preset duration, the screenshot is stored; otherwise, it is deleted.
  • users can also trigger the screen capture operation with other stereo gestures. For example, while the terminal device is playing a video, the user holds it and draws a semicircle counterclockwise; the terminal device performs a screen capture operation, obtains a screenshot, and displays the prompt "Flip the terminal device to keep the screenshot"; the user flips the terminal device and the screenshot is stored.
  • the screen capture operation can be completed quickly, and the operation is simple.
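The shake-then-flip flow above can be sketched as a small event loop. This is an illustrative simulation under stated assumptions: gesture events are modeled as timestamped tuples, the 5-second window is one of the durations the text suggests, and all names are hypothetical.

```python
# Sketch of the shake-to-screenshot flow: a shake in the bright-screen
# state captures the screen; the screenshot is kept only if a flip
# gesture arrives within a preset duration, and is deleted otherwise.

PRESET_DURATION = 5.0  # seconds; the text suggests 3, 5, or 10 s

def screenshot_flow(events, bright_screen=True):
    """events: list of (timestamp, gesture) tuples in time order.
    Returns the capture timestamps of the screenshots that were kept."""
    stored = []
    pending = None  # timestamp of a screenshot awaiting a confirming flip
    for t, gesture in events:
        if pending is not None and t - pending > PRESET_DURATION:
            pending = None  # no flip in time: screenshot deleted
        if gesture == "shake" and bright_screen:
            pending = t     # capture now, wait for the confirmation flip
        elif gesture == "flip" and pending is not None:
            if t - pending <= PRESET_DURATION:
                stored.append(pending)
            pending = None
    return stored

kept = screenshot_flow([(0.0, "shake"), (2.0, "flip")])  # flip in time
lost = screenshot_flow([(0.0, "shake"), (9.0, "flip")])  # flip too late
```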
  • FIG. 14 is a functional block diagram of a terminal device according to an embodiment of the present invention.
  • the functional blocks of the terminal device may implement the inventive solution in hardware, software, or a combination of the two.
  • those skilled in the art should understand that the functional blocks depicted in FIG. 14 can be combined or separated into several sub-blocks to implement the inventive solution. Accordingly, the above description may support any possible combination, separation, or further definition of the functional modules described below.
  • the terminal device may include:
  • a first detecting unit 1401, configured to detect a stereoscopic gesture, where the stereoscopic gesture is related to a motion track of the terminal device in a stereo space;
  • a second detecting unit 1402, configured to detect the state of the terminal device;
  • the control unit 1403 is configured to control the terminal device to perform a corresponding operation according to the detected stereo gesture and state of the terminal device.
  • the state of the terminal device includes a first application scenario or a second application scenario;
  • the control unit 1403 is specifically configured to control the terminal device to perform a first operation according to the detected stereo gesture and the first application scenario, or to control the terminal device to perform a second operation according to the detected stereo gesture and the second application scenario; the first operation is different from the second operation.
  • the state of the terminal device includes a first application scenario or a second application scenario;
  • the control unit 1403 is specifically configured to control the terminal device to perform a first operation according to the detected stereo gesture and the first application scenario, or to control the terminal device to perform a second operation according to the detected stereo gesture and the second application scenario; the first operation is the same as the second operation.
  • the first detecting unit 1401 is further configured to detect a stereo gesture set by a user; the terminal device further includes:
  • the storage unit 1404 is configured to save a correspondence between a stereoscopic gesture set by the user and an operation performed by the terminal device.
  • the state of the terminal device includes being in a first time period or a second time period;
  • the control unit 1403 is specifically configured to control the terminal device to perform a first operation when the terminal device is in the first time period, or to control the terminal device to perform a second operation when it is in the second time period; the first operation is different from the second operation.
  • the foregoing first detecting unit 1401 is specifically configured to obtain a target character corresponding to a motion track of the terminal device, and obtain an operation corresponding to the target character according to the state of the terminal device;
  • the control unit 1403 is specifically configured to control the terminal device to perform an operation corresponding to the target character.
  • the stereo gesture is an action in which the amplitude of shaking the terminal device exceeds a first threshold and the frequency of shaking the terminal device exceeds a second threshold; or the stereo gesture is an action of flipping the terminal device;
  • the control unit 1403 is specifically configured to intercept an interface displayed by the terminal device.
  • the control unit 1403 is specifically configured to: when the terminal device detects a first stereo gesture and is in a bright-screen state, control the terminal device to perform a screen capture operation, the first stereo gesture corresponding to the screen capture operation;
  • or, when the terminal device detects a second stereo gesture and is in a music playing state, control the terminal device to perform a volume adjustment operation;
  • or, when the terminal device detects a third stereo gesture and is in a video playing state, control the terminal device to perform a brightness adjustment operation;
  • or, when the terminal device detects a fourth stereo gesture and the ambient illuminance is less than a first illuminance, control the terminal device to activate the flashlight function;
  • or, when the terminal device detects a fifth stereo gesture and the ambient illuminance is greater than a second illuminance, control the terminal device to activate the photographing function.
  • the first operation is to start the first application
  • the second operation is to start the second application
  • the first application scenario is a bright screen state
  • the second application scenario is a screen-off state
  • the first operation is a screen capture operation
  • the second operation is a brightness adjustment operation
  • the first application scenario is a display game interface
  • the second application scenario is a display video interface.
  • the foregoing terminal device further includes:
  • the receiving unit 1405 is configured to receive a stereo gesture collection instruction.
  • the collecting unit 1406 is configured to collect training data, where the training data is N motion data corresponding to the N reference stereoscopic gestures, and the N reference stereoscopic gestures all correspond to the stereoscopic gesture;
  • the receiving unit 1405 is further configured to receive a setting instruction, and set an operation corresponding to the stereo gesture according to the setting instruction;
  • the first detecting unit is specifically configured to determine the stereoscopic gesture according to the foregoing recognition model.
  • the foregoing terminal device further includes:
  • the updating unit 1407 is configured to update the recognition model by using motion data corresponding to the stereo gesture.
  • FIG. 15 is a schematic block diagram of a terminal device according to another embodiment of the present invention.
  • the terminal device in this embodiment may include: one or more processors 1501; one or more input devices 1502, one or more output devices 1503, and a memory 1504.
  • the above-described processor 1501, input device 1502, output device 1503, and memory 1504 are connected by a bus 1505.
  • the memory 1504 is configured to store a computer program; the computer program includes program instructions, and the processor 1501 is configured to execute the program instructions stored in the memory 1504.
  • the processor 1501 is configured to invoke the program instructions to: detect a stereo gesture, where the stereo gesture is related to the motion trajectory of the terminal device in stereo space; detect the state of the terminal device; and control the terminal device to perform a corresponding operation according to the detected stereo gesture and state of the terminal device.
  • the processor 1501 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general purpose processor may be a microprocessor or the processor or any conventional processor or the like.
  • the processor 1501 can implement the functions of the control unit 1403, the second detecting unit 1402, and the updating unit 1407 shown in FIG. 14.
  • the processor 1501 can also implement other data processing functions and control functions in the foregoing method embodiments.
  • the input device 1502 may include a touch panel, a fingerprint sensor (for collecting fingerprint information of the user and direction information of the fingerprint), a microphone, a gravity sensor, a gyroscope, and the like, and the output device 1503 may include a display (LCD or the like), a speaker, and the like.
  • the above gravity sensor is used to detect acceleration
  • the above gyroscope is used to detect angular velocity.
  • the input device 1502 can implement the functions of the first detecting unit 1401, the receiving unit 1405, and the collecting unit 1406 shown in FIG. 14. Specifically, the input device 1502 can receive instructions sent by the user through the touch panel, and acquire motion data through the gravity sensor, the gyroscope, and the like.
  • the memory 1504 can include read-only memory and random access memory, and provides instructions and data to the processor 1501. A portion of the memory 1504 may also include non-volatile random access memory. For example, the memory 1504 can also store device-type information. The memory 1504 can implement the functions of the storage unit 1404 shown in FIG. 14.
  • the processor 1501, the input device 1502, the output device 1503, and the memory 1504 described in the embodiments of the present invention may implement the implementation manners described in the control method of the terminal device provided by the embodiments of the present invention, and may also implement the implementation manners of the terminal device described in the embodiments of the present invention; details are not described herein again.
  • a computer-readable storage medium stores a computer program; the computer program includes program instructions, and when the program instructions are executed by a processor, the following is performed: collecting action data, where the action data is motion-state data of the terminal device; detecting a stereo gesture, where the stereo gesture is related to the motion trajectory of the terminal device in stereo space; detecting the state of the terminal device; and controlling the terminal device to perform a corresponding operation according to the detected stereo gesture and state of the terminal device.
  • the computer-readable storage medium may be an internal storage unit of the device of any of the foregoing embodiments, such as a hard disk or memory of the device.
  • the computer-readable storage medium may also be an external storage device of the device, such as a plug-in hard disk, a smart memory card (SMC), a secure digital (SD) card, or a flash card equipped on the device.
  • the above computer readable storage medium may also include both an internal storage unit of the above device and an external storage device.
  • the computer readable storage medium described above is for storing the above computer program and other programs and data required by the above apparatus.
  • the computer readable storage medium described above can also be used to temporarily store data that has been output or is about to be output.


Abstract

A control method for a terminal device, a terminal device, and a computer-readable storage medium. The method includes: detecting a stereo gesture, where the stereo gesture is related to the motion trajectory of the terminal device in stereo space (101); detecting the state of the terminal device (102); and controlling the terminal device to perform a corresponding operation according to the detected stereo gesture and state of the terminal device (103). Control operations can be carried out quickly, with high operating efficiency.

Description

Control Method for a Terminal Device, Terminal Device, and Computer-Readable Medium

Technical Field
The present invention relates to the field of electronic technologies, and in particular to a control method for a terminal device, a terminal device, and a computer-readable medium.
Background
With the rapid development of network and multimedia technologies, terminal devices such as mobile phones have come into wide use. Terminal devices offer more and more functions, such as photographing, flashlight, screen capture, and video recording.
At present, the usual way to use a function of a terminal device is to start it through the application icon corresponding to that function. In this way, the user often has to light up the screen first, then unlock it, and finally find and tap the corresponding icon. For example, a user who wants the photographing function of a terminal device needs to perform the following operations in sequence: light up the screen, unlock the screen, find the camera application, and start the camera application. This is complicated and time-consuming. Alternatively, a function of the terminal device can be invoked by pressing two of its buttons at the same time. For example, pressing and holding the volume-down button and the power button simultaneously for a few seconds completes a screen capture. Because the user must press two buttons at once, the user may well need to repeat the press several times before the screen capture succeeds, so this approach has a high failure rate and takes a long time.
The drawback of the above technical solutions is that the operations are complicated and time-consuming.
Summary of the Invention
Embodiments of the present invention provide a control method for a terminal device, a terminal device, and a computer-readable medium that are simple to operate, highly accurate, and able to meet users' personalized needs.
In a first aspect, an embodiment of the present invention provides a control method for a terminal device, the method including:
detecting a stereo gesture, where the stereo gesture is related to the motion trajectory of the terminal device in stereo space;
detecting the state of the terminal device;
controlling the terminal device to perform a corresponding operation according to the detected stereo gesture and state of the terminal device.
A stereo gesture is an operation of moving the terminal device while holding it, or of changing the attitude of the terminal device. The terminal device may detect stereo gestures using a gravity sensor, a displacement sensor, a gyroscope, an attitude sensor, or the like. The terminal device may be configured with a correspondence between stereo gestures and the operations it performs; the operation to be performed can be determined from this correspondence and the detected stereo gesture.
Optionally, before the terminal device is controlled to perform the corresponding operation, a prompt interface is displayed, whose prompt information asks whether to perform the operation; if no instruction refusing the operation is detected within a predetermined duration, the operation is performed. The prompt interface may contain a confirm option and a reject option; the terminal device performs the operation after detecting a tap on the confirm option, and refuses to perform it after detecting a tap on the reject option.
In the embodiments of the present invention, the terminal device determines the operation it needs to perform according to the detected stereo gesture and the state of the terminal device; control operations can thus be carried out quickly, with high operating efficiency.
In an optional implementation, the state of the terminal device includes a first application scenario or a second application scenario;
controlling the terminal device to perform a corresponding operation according to the detected stereo gesture and state of the terminal device includes:
controlling the terminal device to perform a first operation according to the detected stereo gesture and the first application scenario;
controlling the terminal device to perform a second operation according to the detected stereo gesture and the second application scenario;
the first operation being different from the second operation.
The state of the terminal device may be its running state, such as a screen-off state, a bright-screen state, a video playing state, a game running state, or a music playing state; it may also be the state of the environment the terminal device is in, for example the ambient illuminance being below a certain illuminance or the ambient noise being below a certain level; it may also be the time period the current moment falls in. The video playing state is the state of playing a video; the game running state is the state of running a game; the music playing state is the state of playing music. The first application scenario and the second application scenario are different application scenarios. For example, the first application scenario is a screen-off state and the second is a bright-screen state. As another example, the first application scenario is one in which the ambient illuminance is below a certain illuminance, and the second is one in which it is not. It can be understood that for the same stereo gesture, the terminal device performs different operations in different application scenarios.
In the embodiments of the present invention, the terminal device determines the operation corresponding to the detected stereo gesture according to its current state, which is simple to implement.
In an optional implementation, the state of the terminal device includes a first application scenario or a second application scenario;
controlling the terminal device to perform a corresponding operation according to the detected stereo gesture and state of the terminal device includes:
controlling the terminal device to perform a first operation according to the detected stereo gesture and the first application scenario;
controlling the terminal device to perform a second operation according to the detected stereo gesture and the second application scenario;
the first operation being the same as the second operation.
The first application scenario and the second application scenario correspond to the same operation. In the embodiments of the present invention, in different application scenarios, the terminal device may perform the same operation after detecting the same stereo gesture. For example, in some application scenarios, including video calling, playing a mobile game, and watching a video, the terminal device performs a screen capture operation after detecting the stereo gesture of shaking the phone left and right. That is, the terminal device can perform the screen capture operation in multiple application scenarios. It can be understood that when the user completes the stereo gesture in two or more application scenarios, the terminal device performs the same operation.
In the embodiments of the present invention, the terminal device performs the same operation after detecting a stereo gesture in multiple application scenarios, which meets the user's need to trigger the corresponding operation in different scenarios and improves the user experience.
In an optional implementation, before controlling the terminal device to perform the corresponding operation according to the detected stereo gesture and state, the method further includes:
the terminal device detecting and saving a correspondence, set by the user, between stereo gestures and operations performed by the terminal device.
In the embodiments of the present invention, the user can set the correspondence between stereo gestures and the operations performed by the terminal device; this is simple to do and can meet users' personalized needs.
In an optional implementation, the state of the terminal device includes being in a first time period or a second time period;
controlling the terminal device to perform a corresponding operation includes:
controlling the terminal device to perform a first operation when the terminal device is in the first time period;
controlling the terminal device to perform a second operation when the terminal device is in the second time period;
the first operation being different from the second operation.
The first time period and the second time period may be non-overlapping time periods, for example 8:00-10:00 and 11:00-18:00. A stereo gesture may correspond to two or more operations, and the terminal device may perform different operations after detecting the gesture in different time periods.
In the embodiments of the present invention, when a stereo gesture corresponds to at least two operations, the terminal device determines the operation to perform according to the time period it is in, so that one stereo gesture can realize several different functions with a simple operation.
In an optional implementation, controlling the terminal device to perform a corresponding operation according to the detected stereo gesture and state includes:
obtaining a target character corresponding to the motion trajectory of the terminal device;
obtaining the operation corresponding to the target character according to the state of the terminal device;
controlling the terminal device to perform the operation corresponding to the target character.
The target character may be any of various characters such as "C", "O", "L", or "+". The operation corresponding to the target character may be capturing the interface displayed by the terminal device; or adjusting the screen brightness of the terminal device; or adjusting its volume; or playing, from the terminal device's music playlist, songs whose titles start with the target character or songs by artists whose names start with it; or starting or closing a target application on the terminal device; or some other operation, which the embodiments of the present invention do not limit.
In the embodiments of the present invention, the terminal device determines the operation to perform according to its motion trajectory; this is simple and accurate, and many operations can be performed quickly.
In an optional implementation, the stereo gesture is an action in which the amplitude of shaking the terminal device exceeds a first threshold and the frequency of shaking exceeds a second threshold; or the stereo gesture is an action of flipping the terminal device; and controlling the terminal device to perform a corresponding operation includes:
capturing the interface displayed by the terminal device;
or adjusting the screen brightness of the terminal device;
or starting or closing a target application on the terminal device.
It can be understood that the stereo gesture above is an action of shaking the terminal device. Since terminal devices are often shaken incidentally in actual use, constraining the gesture with the first and second thresholds makes it possible to screen out falsely triggered shaking actions. The action of flipping the terminal device may be an action that flips the terminal device by more than a target angle.
In the embodiments of the present invention, a function can be triggered quickly by flipping or shaking the terminal device, which is simple to operate.
In an optional implementation, controlling the terminal device to perform a corresponding operation according to the detected stereo gesture and state includes:
when the terminal device detects a first stereo gesture and is in a bright-screen state, controlling the terminal device to perform a screen capture operation;
or, when the terminal device detects a second stereo gesture and is in a music playing state, controlling the terminal device to perform a volume adjustment operation;
or, when the terminal device detects a third stereo gesture and is in a video playing state, controlling the terminal device to perform a brightness adjustment operation;
or, when the terminal device detects a fourth stereo gesture and the ambient illuminance is less than a first illuminance, controlling the terminal device to activate the flashlight function;
or, when the terminal device detects a fifth stereo gesture and the ambient illuminance is greater than a second illuminance, controlling the terminal device to activate the photographing function.
In the embodiments of the present invention, the terminal device determines the operation to perform according to the detected stereo gesture and its state; control operations can thus be carried out quickly, with high operating efficiency.
In an optional implementation, the first operation is starting a first application and the second operation is starting a second application, where the first state is a bright-screen state and the second state is a screen-off state;
or, the first operation is a screen capture operation and the second operation is a brightness adjustment operation, where the first state is displaying a game interface and the second state is displaying a video interface.
In the embodiments of the present invention, the operation performed when the user completes the same stereo gesture differs with the state of the terminal device, so one stereo gesture can realize different functions, improving operating efficiency.
In an optional implementation, before controlling the terminal device to perform the corresponding operation according to the detected stereo gesture and state, the method further includes:
after receiving a stereo-gesture collection instruction, collecting training data, where the training data is N pieces of motion data corresponding to N reference stereo gestures, and the N reference stereo gestures all correspond to the stereo gesture;
training the training data with a neural network algorithm to obtain the recognition model corresponding to the stereo gesture;
receiving a setting instruction, and setting the operation corresponding to the stereo gesture according to the setting instruction;
detecting the stereo gesture then includes:
determining the stereo gesture according to the recognition model.
Optionally, the terminal device may train the training data with a deep learning algorithm, a machine learning algorithm, or the like to obtain the recognition model corresponding to the stereo gesture.
In the embodiments of the present invention, the terminal device collects training data and determines the stereo gesture and the recognition model corresponding to the training data; on the one hand, the recognition model corresponding to the stereo gesture can be established quickly; on the other hand, the user can quickly set a stereo gesture and its corresponding operation, which is simple to do.
In an optional implementation, after controlling the terminal device to perform the corresponding operation, the method further includes:
updating the recognition model with the motion data corresponding to the stereo gesture.
The motion data corresponding to the stereo gesture may be merged with the existing training data to obtain new training data. The terminal device trains on the new training data to obtain a new, that is, updated, recognition model.
In the embodiments of the present invention, the terminal device updates the recognition model with the motion data corresponding to the stereo gesture, which optimizes the recognition model and raises the probability that it correctly recognizes the stereo gesture.
In a second aspect, an embodiment of the present invention provides a terminal device, including:
a first detecting unit, configured to detect a stereo gesture, where the stereo gesture is related to the motion trajectory of the terminal device in stereo space;
a second detecting unit, configured to detect the state of the terminal device;
a control unit, configured to control the terminal device to perform a corresponding operation according to the detected stereo gesture and state of the terminal device.
In the embodiments of the present invention, the terminal device determines the operation to perform according to the detected stereo gesture and its state; control operations can thus be carried out quickly, with high operating efficiency.
In an optional implementation, the state of the terminal device includes a first application scenario or a second application scenario;
the control unit is specifically configured to control the terminal device to perform a first operation according to the detected stereo gesture and the first application scenario, or to control the terminal device to perform a second operation according to the detected stereo gesture and the second application scenario; the first operation is different from the second operation.
In the embodiments of the present invention, the terminal device determines the operation corresponding to the detected stereo gesture according to its state; this can solve the problem of falsely triggered stereo gestures and is simple to implement.
In an optional implementation, the state of the terminal device includes a first application scenario or a second application scenario;
the control unit is specifically configured to control the terminal device to perform a first operation according to the detected stereo gesture and the first application scenario, or to control the terminal device to perform a second operation according to the detected stereo gesture and the second application scenario; the first operation is the same as the second operation.
In the embodiments of the present invention, the terminal device performs the same operation when it detects the stereo gesture in multiple application scenarios, which meets the user's need to trigger the corresponding operation in different scenarios and improves the user experience.
In an optional implementation, the first detecting unit is further configured to detect a stereo gesture set by the user; and the terminal device further includes:
a storage unit, configured to save the correspondence between the stereo gestures set by the user and the operations performed by the terminal device.
In the embodiments of the present invention, the user can set the correspondence between stereo gestures and the operations performed by the terminal device; this is simple to do and can meet users' personalized needs.
In an optional implementation, the state of the terminal device includes being in a first time period or a second time period;
the control unit is specifically configured to control the terminal device to perform a first operation when the terminal device is in the first time period, or to control the terminal device to perform a second operation when it is in the second time period; the first operation is different from the second operation.
In the embodiments of the present invention, when a stereo gesture corresponds to at least two operations, the terminal device determines the operation to perform according to the time period it is in, so that one stereo gesture can realize several different functions with a simple operation.
In an optional implementation, the first detecting unit is specifically configured to obtain a target character corresponding to the motion trajectory of the terminal device, and to obtain the operation corresponding to the target character according to the state of the terminal device;
the control unit is specifically configured to control the terminal device to perform the operation corresponding to the target character.
In the embodiments of the present invention, the terminal device determines the operation to perform according to its motion trajectory; this is simple and accurate, and many operations can be performed quickly.
In an optional implementation, the stereo gesture is an action in which the amplitude of shaking the terminal device exceeds a first threshold and the frequency of shaking exceeds a second threshold; or the stereo gesture is an action of flipping the terminal device;
the control unit is specifically configured to capture the interface displayed by the terminal device;
or specifically configured to adjust the screen brightness of the terminal device;
or specifically configured to start or close a target application on the terminal device.
In the embodiments of the present invention, a function can be triggered quickly by flipping or shaking the terminal device, which is simple to operate.
In an optional implementation, the control unit is specifically configured to control the terminal device to perform a screen capture operation when the terminal device detects a first stereo gesture and is in a bright-screen state;
or specifically configured to control the terminal device to perform a volume adjustment operation when the terminal device detects a second stereo gesture and is in a music playing state;
or specifically configured to control the terminal device to perform a brightness adjustment operation when the terminal device detects a third stereo gesture and is in a video playing state;
or specifically configured to control the terminal device to activate the flashlight function when the terminal device detects a fourth stereo gesture and the ambient illuminance is less than a first illuminance;
or specifically configured to control the terminal device to activate the photographing function when the terminal device detects a fifth stereo gesture and the ambient illuminance is greater than a second illuminance.
In the embodiments of the present invention, the terminal device determines the operation to perform according to the detected stereo gesture and its state; control operations can thus be carried out quickly, with high operating efficiency.
In an optional implementation, the first operation is starting a first application and the second operation is starting a second application, where the first application scenario is a bright-screen state and the second application scenario is a screen-off state;
or, the first operation is a screen capture operation and the second operation is a brightness adjustment operation, where the first application scenario is displaying a game interface and the second application scenario is displaying a video interface.
In the embodiments of the present invention, the operation performed when the user completes the same stereo gesture differs with the state of the terminal device, so one stereo gesture can realize different functions, improving operating efficiency.
In an optional implementation, the terminal device further includes:
a receiving unit, configured to receive a stereo-gesture collection instruction;
a collecting unit, configured to collect training data, where the training data is N pieces of motion data corresponding to N reference stereo gestures, and the N reference stereo gestures all correspond to the stereo gesture;
the receiving unit is further configured to receive a setting instruction and set the operation corresponding to the stereo gesture according to the setting instruction;
the first detecting unit is specifically configured to determine the stereo gesture according to the recognition model.
In the embodiments of the present invention, the terminal device collects training data and determines the stereo gesture and the recognition model corresponding to the training data; on the one hand, the recognition model corresponding to the stereo gesture can be established quickly; on the other hand, the user can quickly set a stereo gesture and its corresponding operation, which is simple to do.
In an optional implementation, the terminal device further includes:
an updating unit, configured to update the recognition model with the motion data corresponding to the stereo gesture.
In the embodiments of the present invention, the terminal device updates the recognition model with the motion data corresponding to the stereo gesture, which optimizes the recognition model and raises the probability that it correctly recognizes the stereo gesture.
In a third aspect, an embodiment of the present invention provides another terminal device, including a processor and a memory connected to each other, where the memory is configured to store a computer program, the computer program includes program instructions, and the processor is configured to invoke the program instructions to perform the method of the first aspect above.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program, where the computer program includes program instructions that, when executed by a processor, cause the processor to perform the method of the first aspect above.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a control method for a terminal device according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a draw-O stereo gesture according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a draw-C stereo gesture according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a stereo gesture of shaking a terminal device according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a stereo-gesture setting interface according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a stereo-gesture adding interface according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of human-computer interaction according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of setting the draw-C stereo gesture according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of setting the draw-C stereo gesture according to another embodiment of the present invention;
FIG. 10 is a schematic diagram of setting the operations corresponding to the draw-C stereo gesture in different scenarios according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of setting the operations corresponding to the draw-C stereo gesture in different time periods according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a recognition-result interface according to an embodiment of the present invention;
FIG. 13 is a schematic flowchart of a screen capture method according to an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
FIG. 15 is a schematic structural diagram of a terminal device according to another embodiment of the present invention.
Detailed Description
The terms used in the embodiments of the present invention are intended only to explain specific embodiments of the present invention, not to limit it.
The terminal device in this application may be a mobile phone, a tablet computer, a wearable device, a personal digital assistant, or the like. The terminal device can detect acceleration, angular velocity, direction of motion, and so on through sensors to obtain motion-state data, that is, action data; from this data the motion trajectory, attitude change, and so on of the terminal device can be determined. The motion-state data may be the position data of the terminal device at different points in time. For example, Table 1 shows motion data collected by a terminal device for the draw-O stereo gesture; the three values in each row of Table 1 represent the spatial position of the terminal device at one point in time, that is, a coordinate point in a spatial rectangular coordinate system. It can be understood that different stereo gestures correspond to different motion data, and the various set or preset stereo gestures can be recognized by analyzing the detected motion data.
Table 1
X Y Z
-0.11256 0.033091 0.002339
-0.115 -0.06707 0.009669
-3.06304 0.009879 0.263789
-2.57556 0.521784 -0.06606
-0.2164 0.012322 -0.11615
-0.00749 -0.02431 0.012113
-0.13333 -0.01087 -0.3605
-0.05025 -0.27721 -1.5358
0.008378 -0.23323 -1.95608
0.101229 0.119834 -0.45335
0.054803 0.0392 -0.02452
0.051138 0.044087 -0.05873
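To make the role of such rows concrete, here is a minimal sketch of turning raw (X, Y, Z) samples like those in Table 1 into a fixed-length vector a recognizer could consume. The padding/truncation scheme and the vector length are illustrative assumptions, not part of the patent.

```python
# A few (X, Y, Z) motion samples copied from Table 1 (draw-O gesture),
# flattened into a fixed-length input vector. Zero-padding/truncation
# keeps every gesture's input the same size.

TABLE_1 = [
    (-0.11256, 0.033091, 0.002339),
    (-0.115,  -0.06707,  0.009669),
    (-3.06304, 0.009879, 0.263789),
    (-2.57556, 0.521784, -0.06606),
]

def to_feature_vector(samples, length=12):
    """Flatten (x, y, z) rows into one vector, zero-padded or truncated
    to a fixed length."""
    flat = [v for row in samples for v in row]
    flat = flat[:length]
    return flat + [0.0] * (length - len(flat))

vec = to_feature_vector(TABLE_1)  # 4 rows x 3 axes -> 12 values
```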
The main inventive principle of this application may include: the user sets the stereo gestures he or she needs in advance; the terminal device builds a recognition model for those stereo gestures from the collected action data; and when such a stereo gesture is detected, the operation corresponding to it is performed. In this way, users can set their own exclusive stereo gestures and the operation corresponding to each one, meeting the needs of different users. That is, a user can set some stereo gestures and trigger the corresponding control operations quickly by performing them, which is simple. It can be understood that every user has different operating habits: when different users hold a terminal device and perform the same gesture action, the similarity of the collected action data is low, whereas when one user performs the same gesture action repeatedly, the similarity is high. It follows that a recognition model built from the collected action data can recognize stereo gestures accurately.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a control method for a terminal device according to an embodiment of the present invention. As shown in FIG. 1, the method includes:
101. Detect a stereo gesture, where the stereo gesture is related to the motion trajectory of the terminal device in stereo space.
A stereo gesture is an operation of moving the terminal device while holding it, or of changing its attitude. As shown in FIG. 2, the user holds the terminal device and traces an O-shaped trajectory, that is, the draw-O stereo gesture. As shown in FIG. 3, the user holds the terminal device and traces a C-shaped trajectory, that is, the draw-C stereo gesture. As shown in FIG. 4, the user shakes the terminal device, that is, the shake stereo gesture. The terminal device can detect its acceleration, angular velocity, direction of motion, and so on through sensors such as a gravity sensor and a gyroscope to obtain motion-state data, from which the stereo gesture can be determined. It can be understood that the terminal device can detect its various preset or already-set stereo gestures, for example the draw-O, draw-C, and shake gestures shown in FIG. 2 to FIG. 4. Table 1 shows the motion data of the draw-O stereo gesture in stereo space; from the data in Table 1 the terminal device can recognize the draw-O stereo gesture.
The terminal device may be preset with a recognition model corresponding to the stereo gesture, with which the stereo gesture can be recognized. For example, a terminal device preset with a first stereo gesture and its recognition model inputs the collected action data of that gesture into the model and obtains the first stereo gesture. As another example, a terminal device preset with five stereo gestures and a recognition model for them inputs the collected action data into the model and obtains the stereo gesture corresponding to that data. The user can set the stereo gestures he or she needs and the operation corresponding to each one; the setting process is described in detail in later embodiments. In the embodiments of the present invention, the flow in which the terminal device detects a stereo gesture may include:
1) The terminal device collects action data.
The terminal device collects action data through a target sensor, which may be at least one of a gyroscope, a gravity sensor, and the like. For example, the terminal device uses a gyroscope to collect 100 groups of action data per second; as shown in Table 1, each row represents one group of data.
2) Input the action data into the recognition model.
Specifically, the terminal device passes the action data to the processor, and the processor recognizes the label corresponding to the action data through the recognition model. The recognition model is one the terminal device has built from collected training data; later embodiments describe how it is built in detail.
3) Use the recognition model to determine the label corresponding to the action data.
In the terminal device, each preset or already-set stereo gesture corresponds to a different label. The terminal device may also be preset with data of miscellaneous motion operations, all of which correspond to one and the same label.
4) Determine the stereo gesture corresponding to the label.
By performing the above steps, the terminal device can detect stereo gestures quickly and accurately.
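The four detection steps above can be sketched as a tiny pipeline. This is an illustrative sketch: the model is a stub that returns a label, and the label values and gesture names are assumptions.

```python
# Steps 1-4 of the detection flow: collect motion data, run the
# recognition model, obtain a label, map the label to a gesture.
# Miscellaneous (noise) motions share a single label, as described.

LABEL_TO_GESTURE = {
    0: "noise",     # all miscellaneous motions map to one label
    1: "draw_O",
    2: "draw_C",
    3: "shake",
}

def recognize(model, motion_data):
    """Run the model on collected motion data and return the gesture for
    the predicted label (None for the noise label)."""
    label = model(motion_data)             # steps 2 and 3
    gesture = LABEL_TO_GESTURE.get(label)  # step 4
    return None if gesture == "noise" else gesture

stub_model = lambda data: 2 if data else 0      # stand-in classifier
gesture = recognize(stub_model, [(0.1, 0.2, 0.3)])
```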
102. Detect the state of the terminal device.
The state of the terminal device may be its running state, such as a screen-off state, a bright-screen state, a video playing state, a game running state, or a music playing state; it may also be the state of the environment the terminal device is in, for example the ambient illuminance being below a certain illuminance or the ambient noise being below a certain level; it may also be the time period the current moment falls in. The video playing state is the state of playing a video; the game running state is the state of running a game; the music playing state is the state of playing music. Detecting the state of the terminal device may be detecting its application scenario, such as a game running scenario or a video playing scenario.
103. Control the terminal device to perform a corresponding operation according to the detected stereo gesture and state of the terminal device.
The terminal device may be preset with a correspondence between stereo gestures and the operations to be performed when they are detected; the operation corresponding to a stereo gesture can be obtained from this correspondence. For example, a terminal device is preset with a target correspondence table containing a first through a sixth stereo gesture corresponding in turn to a first through a sixth operation; after determining from the collected action data that the second stereo gesture occurred, the terminal device performs the second operation. In the embodiments of the present invention, the terminal device can change the operation corresponding to a stereo gesture and can also delete a stereo gesture. For example, if a stereo gesture corresponds to a first operation, the user can change its corresponding operation to a second operation.
When the terminal device detects the same stereo gesture in different states, the operations to be performed may differ.
In an optional implementation, the state of the terminal device includes a first application scenario or a second application scenario;
controlling the terminal device to perform a corresponding operation according to the detected stereo gesture and state includes:
controlling the terminal device to perform a first operation according to the detected stereo gesture and the first application scenario;
controlling the terminal device to perform a second operation according to the detected stereo gesture and the second application scenario;
the first operation being different from the second operation.
The first and second application scenarios are different application scenarios; for example, the first is a screen-off state and the second a bright-screen state. It can be understood that for the same stereo gesture, the terminal device performs different operations in different application scenarios. For example, in the application scenario of running a game, the terminal device detects the shake stereo gesture and captures the screen; in the application scenario of playing a video, it detects the shake stereo gesture and adjusts the screen brightness. Optionally, the terminal device raises the screen brightness when it detects the gesture of shaking it left and right, and lowers the brightness when it detects the gesture of shaking it up and down. As another example, in the screen-off state the terminal device detects a stereo gesture whose trajectory is "O" and starts the flashlight application; in the bright-screen state it detects the same "O" gesture and starts the camera application.
In the embodiments of the present invention, the terminal device determines the operation corresponding to the detected stereo gesture according to its current state, which is simple to implement.
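The state-dependent mapping described above can be sketched as a lookup table keyed by (gesture, state). The table entries follow the draw-O and shake examples in the text; the operation names are illustrative.

```python
# State-dependent dispatch: the same stereo gesture maps to different
# operations depending on the terminal state (draw-O: screen off ->
# flashlight, screen on -> camera; shake: game -> screenshot, video ->
# brightness adjustment).

GESTURE_STATE_OPS = {
    ("draw_O", "screen_off"): "start_flashlight",
    ("draw_O", "screen_on"):  "start_camera",
    ("shake",  "game"):       "screenshot",
    ("shake",  "video"):      "adjust_brightness",
}

def operation_for(gesture, state):
    """Look up the operation for a (gesture, state) pair; None if the
    gesture is not bound in this state."""
    return GESTURE_STATE_OPS.get((gesture, state))

op = operation_for("draw_O", "screen_off")
```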
In an optional implementation, the state of the terminal device includes a first application scenario or a second application scenario;
controlling the terminal device to perform a corresponding operation according to the detected stereo gesture and state includes:
controlling the terminal device to perform a first operation according to the detected stereo gesture and the first application scenario;
controlling the terminal device to perform a second operation according to the detected stereo gesture and the second application scenario;
the first operation being the same as the second operation.
In the embodiments of the present invention, in different application scenarios the terminal device may perform the same operation after detecting the same stereo gesture. For example, in any application scenario, the terminal device starts the memo application after detecting a stereo gesture whose trajectory is "J". As another example, in scenarios such as video calling, game running, and video playing, the terminal device captures the screen after detecting the shake stereo gesture. It can be understood that when the user completes a stereo gesture in two or more application scenarios, the terminal device performs the same operation.
In the embodiments of the present invention, the terminal device performs the same operation after detecting a stereo gesture in multiple application scenarios, which meets the user's need to trigger the corresponding operation in different scenarios and improves the user experience.
In an optional implementation, the state of the terminal device includes being in a first time period or a second time period;
controlling the terminal device to perform a corresponding operation according to the detected stereo gesture and state includes:
controlling the terminal device to perform a first operation when the terminal device is in the first time period;
controlling the terminal device to perform a second operation when the terminal device is in the second time period;
the first operation being different from the second operation.
The first and second time periods may be non-overlapping, for example 8:00-10:00 and the rest of each day. A stereo gesture may correspond to different operations in different time periods: specifically, it corresponds to the first operation in the first time period and to the second operation in the second. In practice, the user can set the time periods of each stereo gesture and the operation corresponding to each time period.
In the embodiments of the present invention, when one stereo gesture corresponds to at least two operations, the terminal device determines the operation to perform according to the time period it is in, so that one stereo gesture can realize several different functions with a simple operation.
In the embodiments of the present invention, the terminal device determines the operation it needs to perform according to the detected stereo gesture and the state of the terminal device; control operations can thus be carried out quickly, with high operating efficiency.
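The time-period rule above can be sketched as follows. The 8:00-10:00 window comes from the text's example; the operation names and the minutes-since-midnight encoding are illustrative assumptions.

```python
# One gesture bound to different operations in non-overlapping time
# windows: 8:00-10:00 triggers the first operation, any other time the
# second.

PERIOD_OPS = [
    ((8 * 60, 10 * 60), "operation_1"),  # 8:00-10:00 -> first operation
]
DEFAULT_OP = "operation_2"               # any other time -> second operation

def operation_at(minute_of_day):
    """Return the operation bound to the time period containing the
    given minute of the day."""
    for (start, end), op in PERIOD_OPS:
        if start <= minute_of_day < end:
            return op
    return DEFAULT_OP

morning = operation_at(9 * 60)    # inside the 8:00-10:00 window
evening = operation_at(15 * 60)   # outside it
```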
In an optional implementation, before controlling the terminal device to perform the corresponding operation according to the detected stereo gesture and state, the method further includes:
the terminal device detecting and saving the correspondence, set by the user, between stereo gestures and the operations performed by the terminal device.
In the embodiments of the present invention, the user can set the correspondence between stereo gestures and the operations performed by the terminal device; this is simple to do and meets the personalized needs of different users.
Before step 101 in FIG. 1 is performed, the user can set the stereo gestures he or she needs. A specific example of setting a stereo gesture follows, which may include:
1) Start the stereo-gesture entry in the settings interface.
The settings interface is the one corresponding to the settings icon on the terminal device's desktop. In practice, after the user taps the settings icon, the terminal device displays the settings interface; after the user taps the stereo-gesture entry in that interface, the terminal device displays the stereo-gesture setting interface.
2) The terminal device displays the stereo-gesture setting interface.
FIG. 5 exemplarily shows the stereo-gesture setting interface. As shown in FIG. 5, it may include a stereo-gesture switch control 501, a draw-O stereo-gesture switch control 502, and an add-stereo-gesture interface 503. The stereo-gesture switch control 501 receives the user's operation of enabling or disabling stereo gestures; the draw-O switch control 502 receives the operation of enabling or disabling the draw-O gesture; and the add-stereo-gesture interface 503 receives the operation of adding a stereo gesture. The button in the switch control 501 enables and disables the stereo-gesture function: if it is slid to the right, the function is enabled, that is, after an enabled stereo gesture is detected, the operation corresponding to it is performed; otherwise the function is disabled and no detected stereo gesture triggers its corresponding operation. The draw-O gesture in FIG. 5 is a gesture preset by the terminal device or already set by the user. After the add-stereo-gesture interface 503 receives the user's tap, the gesture adding interface is displayed. The stereo-gesture interface displays preset or already-set stereo gestures; FIG. 5 is only an example, used merely to explain the embodiments of the present invention and not to limit them.
3) Receive a stereo-gesture adding instruction.
Receiving the stereo-gesture adding instruction may be detecting a tap on the add-stereo-gesture interface in the stereo-gesture setting interface. As shown in FIG. 5, after the user taps the add-stereo-gesture interface 503, the terminal device displays the gesture adding interface.
4) Display the gesture adding interface, and collect the action data corresponding to the stereo gesture.
FIG. 6 exemplarily shows a gesture adding interface. As shown in FIG. 6, the dotted line represents the trajectory of the stereo gesture detected by the terminal device; 601 is a back interface, and after the user taps it the stereo-gesture setting interface is displayed; 602 is a cancel interface, and after the user taps it the terminal device discards the detected stereo gesture; 603 is an input interface, and after the user taps it the terminal device stores the detected stereo gesture together with its corresponding action data. Collecting the action data corresponding to the stereo gesture may be done with sensors such as a gyroscope. For example, while the terminal device displays the gesture adding interface, the user performs the draw-C gesture of FIG. 3 and the terminal device displays its trajectory; the dotted line in FIG. 6 represents that trajectory. In practice, after the terminal device displays the gesture adding interface, the user performs a stereo gesture while holding the device, and the device detects the gesture and displays its trajectory. After the user completes the gesture: if the user taps the cancel interface, the action data of the detected gesture is deleted and detection of the user's stereo gesture restarts; if the user taps the input interface, the detected gesture and its action data are stored and detection restarts; if the user taps the back interface, the stereo-gesture setting interface is shown again. The terminal device can train the stored action data of each stereo gesture to obtain a recognition model for each one, so that the gestures set by the user can be recognized accurately.
5) After an action-data input instruction is detected, input the collected action data into the training model.
Detecting the action-data input instruction may be detecting a tap on the input interface of the gesture adding interface; 603 in FIG. 6 is the input interface, and the terminal device detecting a tap on it amounts to detecting the action-data input instruction. The training model may be one built on a neural network. In practice, after the terminal device displays the gesture adding interface, the user performs a stereo gesture while holding the device and taps the input interface on completing it; the device feeds the action data collected this time into the training model and trains on it with the training model to obtain the recognition model for that gesture. Optionally, once the gesture adding interface is displayed, a timer in the terminal device starts and action data is collected; if the user has not completed a stereo gesture before the timer reaches a time threshold, the device discards the data collected this time and displays the stereo-gesture setting interface. The time threshold may be 2 seconds, 3 seconds, 5 seconds, and so on.
Optionally, after receiving the action-data input instruction, the terminal device starts collecting action data afresh and feeds the newly collected data into the training model the next time an input instruction is received. That is, the user can input multiple pieces of training data: in practice, after tapping the input interface, the user can go on to perform the stereo gesture several more times, and the terminal device collects the corresponding action data and feeds it into the training model.
6) Train the action data with the training model to obtain the recognition model corresponding to the action data.
The specific training process is described in detail in later embodiments.
7) Receive a back instruction and display the updated stereo-gesture setting interface.
Receiving the back instruction may be receiving a tap on the back interface. FIG. 7 exemplarily shows human-computer interaction. As shown in FIG. 7, after the user inputs a stereo gesture with trajectory C and taps the back interface 701, the stereo-gesture setting interface displayed by the terminal device contains a draw-C switch control 702, that is, the newly added stereo-gesture switch control. In this way, users can set their own exclusive stereo gestures, meeting their personalized needs.
8) Receive a setting operation for a target stereo gesture and display the setting interface of that gesture.
The target stereo gesture is a preset or already-set stereo gesture, and receiving the setting operation for it may be receiving a tap on it. FIG. 8 exemplarily shows setting the draw-C stereo gesture. As shown in FIG. 8, each stereo gesture in the stereo-gesture setting interface is an entry; tapping a gesture opens its setting interface. After the user taps the draw-C gesture, the terminal device displays the draw-C setting interface: 801 is the draw-C switch control; 802 is a name-setting interface through which the user can set the name of the draw-C gesture, such as "draw-C screenshot" or "draw-C--camera"; 803 is an operation-setting interface through which the user can set the operation corresponding to the gesture, such as a screen capture operation or starting a target application; 804 is an optimize interface, and after tapping it the user performs the draw-C gesture while the terminal device stores the detected action data and optimizes the draw-C recognition model; 805 is a delete interface, and tapping it deletes the draw-C gesture.
FIG. 9 exemplarily shows another way of setting the draw-C stereo gesture. As shown in FIG. 9, after the user taps the draw-C gesture, the terminal device displays its setting interface: 901 is a name-setting interface for the gesture's name; 902 is an operation-setting interface for the gesture's operation, such as a screen capture operation or starting a target application; 903 is an optimize interface, and after tapping it the user performs the draw-C gesture while the device stores the detected action data and optimizes the draw-C recognition model; 904 is a delete interface, and tapping it deletes the draw-C gesture; 905 is an add-operation interface, and after the user taps it the terminal device adds one operation-setting interface and two state-setting fields. The examples are only one implementation of the embodiments of the present invention; in practice the name, corresponding operation, and other information of a stereo gesture can be set in other ways.
FIG. 10 exemplarily shows setting the operations corresponding to the draw-C gesture in different scenarios. As shown in FIG. 10, after the user taps the add-operation interface 1001, the stereo-gesture setting interface displayed by the terminal device shows a first state-setting field 1002, a second state-setting field 1003, a first operation-setting interface 1004, and a second operation-setting interface 1005. With the two state-setting fields the user can set the two scenarios of the draw-C gesture, namely a first scenario and a second scenario; with the two operation-setting interfaces the user can set its two operations, the first scenario corresponding to the operation set in the first operation-setting interface and the second scenario to the operation set in the second. Tapping the add-operation interface 1006 adds another state-setting field and its corresponding operation-setting interface. In the first scenario, after detecting the draw-C gesture the terminal device performs the operation set in the first operation-setting interface; in the second scenario, the operation set in the second. It can be understood that the user can make one stereo gesture correspond to different operations in different scenarios.
FIG. 11 exemplarily shows setting the operations corresponding to the draw-C gesture in different time periods. As shown in FIG. 11, after the user taps the add-operation interface 1101, the displayed stereo-gesture setting interface shows a first state-setting field 1102, a second state-setting field 1103, a first operation-setting interface 1104, and a second operation-setting interface 1105. With the two state-setting fields the user can set the two time periods of the draw-C gesture, namely a first time period and a second time period; with the two operation-setting interfaces its two operations, the first time period corresponding to the operation set in the first interface and the second time period to the operation set in the second. Tapping the add-operation interface 1106 adds another state-setting field and its corresponding operation-setting interface. In the first time period, after detecting the draw-C gesture the terminal device performs the operation set in the first operation-setting interface; in the second time period, the operation set in the second. It can be understood that the user can make one stereo gesture correspond to different operations in different time periods.
In the embodiments of the present invention, the user can set stereo gestures and the correspondence between stereo gestures and the operations performed by the terminal device; this is simple to do and can meet users' personalized needs.
In an optional implementation, controlling the terminal device to perform a corresponding operation according to the detected stereo gesture and state includes:
obtaining a target character corresponding to the motion trajectory of the terminal device;
obtaining the operation corresponding to the target character according to the state of the terminal device;
controlling the terminal device to perform the operation corresponding to the target character.
Obtaining the target character corresponding to the motion trajectory of the terminal device may consist of first determining the motion trajectory of the detected stereo gesture from the collected action data and then determining the character corresponding to that trajectory. Different motion trajectories correspond to different characters: the trajectory in FIG. 2 corresponds to the character "O", and the trajectory in FIG. 3 to the character "C". Obtaining the operation corresponding to the target character according to the state of the terminal device may mean obtaining a first operation corresponding to the target character when the terminal device is in a first state and a second operation when it is in a second state, the first operation being different from the second.
For example, in the screen-off state the terminal device detects a clockwise "O"-shaped stereo gesture and starts the flashlight application; in the bright-screen state it detects the same clockwise "O" gesture and starts the camera application. It can be understood that the "O"-trajectory gesture corresponds to starting the flashlight application in the screen-off state and to starting the camera application in the bright-screen state. As another example, in a music or video playing scenario, the terminal device raises the volume when it detects a stereo gesture whose trajectory is "U" and lowers it when it detects one whose trajectory is "D". As another example, while playing music from a playlist, the terminal device detects a "Z"-trajectory gesture and switches playback to songs starting with "Z" or songs by artists whose names start with "Z". As another example, the terminal device detects an "L"-trajectory gesture and starts a social application such as WeChat; or, while displaying the interface of a social application, it detects an "L"-trajectory gesture and closes that application. These are only examples; the embodiments of the present invention do not limit the motion trajectories of stereo gestures or the operations corresponding to them.
In the embodiments of the present invention, the terminal device determines the operation to perform according to its motion trajectory; this is simple and accurate, and many operations can be performed quickly.
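The character-to-operation examples above can be sketched as a small dispatcher for a playback state. The playlist contents, volume bounds, and handler name are illustrative assumptions; only the "U"/"D"/letter behavior comes from the text.

```python
# Trajectory-character dispatch in a playback state: "U" raises the
# volume, "D" lowers it, and any other letter jumps to songs whose
# titles start with that letter.

PLAYLIST = ["Amber", "Blue Sky", "Zebra Song", "Zero Gravity"]

def handle_character(char, volume):
    """Return (new_volume, songs_to_play) for a recognized character."""
    if char == "U":
        return min(volume + 1, 10), None
    if char == "D":
        return max(volume - 1, 0), None
    # any other letter: play songs starting with that letter
    songs = [s for s in PLAYLIST if s.upper().startswith(char)]
    return volume, songs

vol, _ = handle_character("U", 5)   # volume raised by one step
_, hits = handle_character("Z", 5)  # songs starting with "Z"
```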
In an optional implementation, the stereo gesture is an action in which the amplitude of shaking the terminal device exceeds a first threshold and the frequency of shaking exceeds a second threshold; or the stereo gesture is an action of flipping the terminal device; and controlling the terminal device to perform a corresponding operation includes:
capturing the interface displayed by the terminal device;
or adjusting the screen brightness of the terminal device;
or starting or closing a target application on the terminal device.
It can be understood that the stereo gesture above is an action of shaking the terminal device. The first threshold may be 0.5 cm, 1 cm, 2 cm, 3 cm, and so on; the second threshold may be 5 times per second, 3 times per second, once every 3 seconds, and so on. Since terminal devices are often shaken incidentally in actual use, constraining the stereo gesture with the first and second thresholds avoids falsely triggering the shake stereo gesture. For example, when the user shakes the terminal device left and right, it raises the screen brightness; when the user shakes it up and down, it lowers the screen brightness. The target application may be a camera, payment, social, SMS, calendar, email, or reading application, among others, which the embodiments of the present invention do not limit. For example, the user starts the email application on the terminal device by shaking it, and the device closes the email application when it is shaken again.
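The double-threshold shake filter can be sketched as follows. This is an illustrative model under stated assumptions: a "shake" is reduced to displacement samples along one axis, direction reversals stand in for frequency, and the threshold values (1 cm, 3 per second) are taken from the examples in the text.

```python
# A motion counts as the shake gesture only if its amplitude exceeds a
# first threshold AND its frequency exceeds a second threshold, which
# rejects incidental jiggles.

AMPLITUDE_CM = 1.0   # first threshold
FREQ_PER_SEC = 3.0   # second threshold

def is_shake(samples, duration_s):
    """samples: displacement values (cm) over duration_s seconds."""
    amplitude = max(samples) - min(samples)
    # count direction reversals as a crude frequency estimate
    reversals = sum(
        1 for i in range(1, len(samples) - 1)
        if (samples[i] - samples[i - 1]) * (samples[i + 1] - samples[i]) < 0
    )
    return amplitude > AMPLITUDE_CM and reversals / duration_s > FREQ_PER_SEC

big_fast = [0, 2, -2, 2, -2, 2, -2, 0]    # wide, rapid swings: a shake
tiny = [0, 0.1, -0.1, 0.1, 0, 0.05]       # small jiggle: filtered out
```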
The action of flipping the terminal device may be an action that flips it by more than a target angle, which may be 30°, 45°, 60°, 90°, 180°, and so on. For example, when a terminal device with its screen facing up is turned to face down, the flip angle is 90°. For example, the user can take a screenshot by shaking the terminal device; as another example, the user can start an application by flipping it. As another example, while playing a video, the terminal device detects a flip action and closes the running video player; or, on detecting an incoming call, it detects a flip action and rejects the call.
Capturing the interface displayed by the terminal device, adjusting its screen brightness, and starting a target application on it are only specific examples of the control operations corresponding to the control instructions; in the embodiments of the present invention the control instructions can realize other control operations.
In the embodiments of the present invention, a function can be triggered quickly by the stereo gesture of flipping or shaking the terminal device, which is simple to operate.
In an optional implementation, the first operation is starting a first application and the second operation is starting a second application, where the first application scenario is a bright-screen state and the second application scenario is a screen-off state;
or, the first operation is a screen capture operation and the second operation is a brightness adjustment operation, where the first application scenario is displaying a game interface and the second application scenario is displaying a video interface.
The first application and the second application are different applications. The first application may be a reading, email, or calendar application; the second may be a camera, photo album, or map application. The embodiments of the present invention do not limit the first application, the second application, the first application scenario, or the second application scenario. In the embodiments of the present invention, one stereo gesture may correspond to three or more operations; the number of operations corresponding to one stereo gesture is not limited. For example, the terminal device detects a first stereo gesture in the screen-off state and starts the calendar application; it detects the same gesture in the bright-screen state and starts the map application. As another example, when the user shakes the terminal device while playing a game on it, the device captures the current game interface; when the user shakes it while playing a video, the device adjusts the brightness of the video interface.
In the embodiments of the present invention, the operation performed when the user completes the same stereo gesture differs with the state of the terminal device, so one stereo gesture can realize different functions, improving operating efficiency.
在一种可选的实现方式中,上述根据检测的上述终端设备的立体手势和状态,控制上述终端设备执行相应的操作之前,上述方法还包括:
在接收到立体手势采集指令后,采集训练数据,上述训练数据为N个参考立体手势对应的N个动作数据,上述N个参考立体手势均对应上述立体手势;
采用神经网络算法对上述训练数据进行训练,得到上述立体手势对应的识别模型;
接收设置指令,依据上述设置指令设置上述立体手势对应的操作;
上述检测立体手势包括:
依据上述识别模型确定上述立体手势。
上述N为大于1的整数。可以理解,上述训练数据为上述终端设备采集的用户手持上述终端设备完成N次上述立体手势的动作数据。举例来说,用户想要设置某个立体手势,该用户可以在输入立体手势采集指令后,手持终端设备完成多次该立体手势;该终端设备采集相应的动作数据,并利用神经网络算法对采集到的动作数据进行训练,得到该动作数据对应的识别模型;该终端设备可以利用该识别模型识别该立体手势。上述终端设备接收到上述立体手势采集指令可以是检测到用户点击上述立体手势添加接口的操作。如图5和图6所示,用户点击立体手势添加接口后,终端设备显示手势添加界面。上述终端设备在显示上述手势添加界面的情况下,采集用户执行N次参考立体手势对应的N个动作数据,得到上述训练数据。举例来说,终端设备在显示图6所示的手势添加界面时,用户执行画 C手势操作,该终端设备采集该画C手势操作对应的训练数据。
下面提供一种建立立体手势对应的识别模型的具体举例:
在上述终端设备未建立立体手势对应的识别模型的情况下,上述终端设备对上述训练数据进行训练,得到上述立体手势对应的识别模型。
在上述终端设备已建立立体手势对应的识别模型的情况下,利用上述立体手势对应的识别模型识别上述训练数据,得到识别结果,上述识别结果指示上述训练数据对应的立体手势;显示上述识别结果、第一接口和第二接口,上述第一接口为确认上述识别结果的接口,上述第二接口为否认上述识别结果的结果;若接收到对上述第一接口的点击操作,则利用上述训练数据优化上述立体手势对应的识别模型;若接收到对上述第二接口的点击操作,则在上述立体手势对应的识别模型中增加上述训练数据对应的标签,利用上述训练模型对上述动作数据进行训练,更新上述立体手势识别模型。更新后的上述立体手势识别模型可以识别出上述立体手势。图12示例性的示出了识别结果界面的示意图,图中的1201为第一接口,1202为第二接口,1203为识别结果。
可选的，本发明实施例中，终端设备可以采用深度学习算法、机器学习算法等其他算法对训练数据进行训练，得到上述识别模型。上述终端设备对上述训练数据进行训练，得到上述立体手势对应的识别模型的过程如下：
1)上述终端设备将训练数据输入到训练模型;
上述训练模型可以采用三层神经网络,输入层的节点个数可以是300,隐藏层节点个数可以是15、20、25等,输出层节点个数为3。上述训练数据至少包含同一个立体手势对应的10组数据,表1中的每一行表示一组数据。可选的,上述训练模型可以采用其他的神经网络,本发明实施例不作限定。
2)利用上述训练模型对上述训练数据进行训练;
可选的,上述训练数据中的70%的数据作为训练数据,另外30%的数据作为验证数据。为了提高训练的精度,可以使用交叉训练的方法:每一次选取70%的训练数据进行训练;通过训练可以不断提高精度,直到识别模型的精度达到90%以上。
3)在验证数据的结果大于90%的情况下,停止训练,得到上述训练数据对应的识别模型。
可选的,当训练时间超过1分钟之后重新进行训练。可选的,如果验证数据的结果小于或等于90%,保存识别率最大的识别模型,同时反馈给用户继续输入立体手势,或者放弃该立体手势。可选的,可以训练多个识别模型,选取识别率最高的识别模型作为最终的识别模型。
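上述步骤1）至3）中“每轮随机选取70%数据训练、其余30%验证、精度达到90%以上即停止、否则保留识别率最高的模型”的交叉训练控制流程，可以示意如下（Python 草图；具体的神经网络训练与评估由 train_fn、eval_fn 回调提供，此处仅示意控制逻辑，参数名与轮数上限均为假设）：

```python
import random


def train_until_accurate(samples, train_fn, eval_fn,
                         train_ratio=0.7, target_acc=0.90, max_rounds=10):
    """交叉训练控制流程示意:
    每轮随机选取 train_ratio (70%) 的数据训练, 其余 (30%) 作为验证数据;
    验证精度超过 target_acc (90%) 即停止; 否则保留识别率最高的模型。
    train_fn(train_set) -> 识别模型; eval_fn(model, val_set) -> 精度(0~1)。"""
    best_model, best_acc = None, -1.0
    for _ in range(max_rounds):
        shuffled = samples[:]
        random.shuffle(shuffled)                       # 每轮重新随机划分
        split = int(len(shuffled) * train_ratio)
        train_set, val_set = shuffled[:split], shuffled[split:]
        model = train_fn(train_set)
        acc = eval_fn(model, val_set)
        if acc > best_acc:                             # 保存识别率最大的模型
            best_model, best_acc = model, acc
        if best_acc > target_acc:                      # 精度达标, 停止训练
            break
    return best_model, best_acc
```

若达到轮数上限仍未达标，调用方可依照上文所述，反馈用户继续输入立体手势或放弃该立体手势。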
上述终端设备通过执行上述操作,可以快速建立上述训练数据对应的识别模型。
本发明实施例中,可以参考前述设置立体手势的具体举例中的方式设置上述立体手势对应的操作。
本发明实施例中,终端设备采集训练数据,并确定上述训练数据对应的立体手势以及识别模型;一方面可以快速地建立上述立体手势对应的识别模型;另一方面用户可以快速地设置立体手势以及立体手势对应的操作,操作简单。
在一种可选的实现方式中，上述根据检测的上述终端设备的立体手势和状态，控制上述终端设备执行相应的操作之后，上述方法还包括：
利用上述立体手势对应的动作数据更新上述识别模型。
上述利用上述动作数据更新上述识别模型可以是将上述动作数据和已有的训练数据输入训练模型进行训练,得到新的识别模型;上述已有的训练数据为上述识别模型对应的训练数据。可以理解,训练数据越多得到的识别模型识别立体手势的准确率越高。利用上述动作数据可以不断优化上述识别模型。也就是说,识别模型正确识别出立体手势的概率会越来越高。
本发明实施例中,利用动作数据可以进一步优化识别模型,实现简单。
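上述“将动作数据和已有训练数据一并输入训练模型重新训练”的更新方式，可以示意为（Python 草图；train_fn 代表具体的训练实现，由前述神经网络算法提供，此处仅为假设的回调）：

```python
def update_model(existing_training_data, new_motion_data, train_fn):
    """利用本次检测到的立体手势对应的动作数据更新识别模型:
    将新的动作数据与已有训练数据合并后重新训练, 得到新的识别模型。
    train_fn(data) -> 识别模型, 由具体的神经网络训练实现提供。"""
    combined = existing_training_data + new_motion_data
    new_model = train_fn(combined)
    # 合并后的数据作为下一次更新的"已有训练数据", 训练数据因此不断累积
    return new_model, combined
```

每次识别成功后都以累积的数据重新训练，训练数据越多，识别模型识别立体手势的准确率越高。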
本发明实施例提供了一种截屏方法,如图13所示,可包括:
1301、检测立体手势,上述立体手势与终端设备在立体空间的运动轨迹相关;
具体实现方式与图1中的101相同。上述立体手势为摇动上述终端设备的立体手势。
1302、检测终端设备的状态;
具体实现方式与图1中的102相同。上述终端设备处于亮屏状态。
1303、根据检测的上述终端设备的立体手势和状态,控制上述终端设备执行截屏操作,得到截图;
具体实现方式与图1中的103相同。本发明实施例中,终端设备在亮屏状态下,检测到摇动该终端设备的立体手势后,执行截屏操作,即截取该终端设备屏幕当前显示的界面。
1304、显示提示信息;
上述提示信息可以是“翻转终端设备,保留截图”。
1305、判断是否检测到翻转上述终端设备的立体手势;
若是,执行1306,若否,执行1307。上述判断是否检测到翻转上述终端设备的立体手势可以是判断在预设时长内是否检测到翻转上述终端设备的立体手势。上述预设时长可以是3秒、5秒、10秒等。
1306、存储上述截图;
上述存储上述截图可以将上述截图存储到相册、图库等。
1307、删除上述截图。
在实际应用中,用户可以通过摇动终端设备截取该终端设备显示的界面,得到截图;若在预设时长内翻转该终端设备,则存储截图;否则,删除该截图。用户还可以通过其他立体手势实现截屏操作。举例来说,终端设备在播放视频的情况下,用户手持该终端设备逆时针画个半圆,该终端设备执行截屏操作,得到截图,并显示“翻转终端设备,保留截图”的提示信息;该用户翻转该终端设备,存储该截图。
本发明实施例中,可以快速地完成截屏操作,操作简单。
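上述1301至1307的截屏流程（截屏、显示提示信息、在预设时长内检测翻转手势、存储或删除截图）可以示意如下（Python 草图；detect_flip、take_screenshot、save、delete 为假设的回调，预设时长与轮询间隔取值亦为示例）：

```python
import time


def screenshot_flow(detect_flip, take_screenshot, save, delete,
                    timeout_s=5.0, poll_s=0.1):
    """截屏流程示意: 执行截屏操作得到截图并显示提示信息;
    若在预设时长 timeout_s 内检测到翻转立体手势则存储截图, 否则删除截图。
    detect_flip() -> 是否检测到翻转上述终端设备的立体手势。"""
    shot = take_screenshot()                 # 1303: 执行截屏操作, 得到截图
    print("翻转终端设备，保留截图")           # 1304: 显示提示信息
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:       # 1305: 预设时长内轮询翻转手势
        if detect_flip():
            save(shot)                       # 1306: 存储截图(相册、图库等)
            return True
        time.sleep(poll_s)
    delete(shot)                             # 1307: 删除截图
    return False
```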
图14示出了本发明实施例提供的一种终端设备的功能框图。终端设备的功能块可由硬件、软件或硬件与软件的组合来实施，以实现本发明方案。所属领域的技术人员应理解，图14中所描述的功能块可经组合或分离为若干子块以实施本发明方案。因此，本发明中上面描述的内容可支持对下述功能模块的任何可能的组合或分离或进一步定义。
如图14所示,终端设备可包括:
第一检测单元1401，用于检测立体手势，上述立体手势与终端设备在立体空间的运动轨迹相关；
第二检测单元1402,用于检测上述终端设备的状态;
控制单元1403,用于根据检测的上述终端设备的立体手势和状态,控制上述终端设备执行相应的操作。
具体实现方法与图1中的方法相同,这里不作详述。
在一种可选的实现方式中,上述终端设备的状态包括第一应用场景或第二应用场景;
上述控制单元1403,具体用于根据检测的上述立体手势和上述第一应用场景,控制上述终端设备执行第一操作;或具体用于根据检测的上述立体手势和上述第二应用场景,控制上述终端设备执行第二操作;上述第一操作不同于上述第二操作。
在一种可选的实现方式中,上述终端设备的状态包括第一应用场景或第二应用场景;
上述控制单元1403,具体用于根据检测的上述立体手势和上述第一应用场景,控制上述终端设备执行第一操作;或具体用于根据检测的上述立体手势和上述第二应用场景,控制上述终端设备执行第二操作;上述第一操作与上述第二操作相同。
在一种可选的实现方式中，上述第一检测单元1401，还用于检测用户设置的立体手势；上述终端设备还包括：
存储单元1404,用于保存用户设置的立体手势与上述终端设备执行的操作的对应关系。
在一种可选的实现方式中,上述终端设备的状态包括处于第一时间段或第二时间段;
上述控制单元1403,具体用于在上述终端设备处于上述第一时间段的情况下,控制上述终端设备执行第一操作;或具体用于在上述终端设备处于上述第二时间段的情况下,控制上述终端设备执行第二操作;上述第一操作不同于上述第二操作。
在一种可选的实现方式中,上述第一检测单元1401,具体用于获得上述终端设备的运动轨迹对应的目标字符;根据上述终端设备的状态获得上述目标字符对应的操作;
上述控制单元1403,具体用于控制上述终端设备执行上述目标字符对应的操作。
在一种可选的实现方式中,上述立体手势为摇动上述终端设备的幅度超过第一阈值且摇动上述终端设备的频率超过第二阈值的动作;或者,上述立体手势为翻转上述终端设备的动作;
上述控制单元1403,具体用于截取上述终端设备显示的界面;
或者,具体用于调节上述终端设备的屏幕亮度;
或者,具体用于启动或关闭上述终端设备上的目标应用。
在一种可选的实现方式中,上述控制单元1403,具体用于在上述终端设备检测到第一立体手势且处于亮屏状态下,控制上述终端设备执行截屏操作;上述第一立体手势对应上述截屏操作;
或者,具体用于在上述终端设备检测到第二立体手势且处于音乐播放状态下,控制上述终端设备执行音量调节操作;
或者,具体用于在上述终端设备检测到第三立体手势且处于视频播放状态下,控制上述终端设备执行亮度调节操作;
或者,具体用于在上述终端设备检测到第四立体手势且所处环境的照度小于第一照度的情况下,控制上述终端设备启动手电筒功能;
或者,具体用于在上述终端设备检测到第五立体手势且所处环境的照度大于第二照度的情况下,控制上述终端设备启动拍照功能。
在一种可选的实现方式中,上述第一操作为启动第一应用,第二操作为启动第二应用;上述第一应用场景为亮屏状态,上述第二应用场景为息屏状态;
或者,上述第一操作为截屏操作,上述第二操作为亮度调节操作;上述第一应用场景为显示游戏界面,上述第二应用场景为显示视频界面。
在一种可选的实现方式中,上述终端设备还包括:
接收单元1405,用于接收立体手势采集指令;
采集单元1406,用于采集训练数据,上述训练数据为N个参考立体手势对应的N个动作数据,上述N个参考立体手势均对应上述立体手势;
上述接收单元1405,还用于接收设置指令,依据上述设置指令设置上述立体手势对应的操作;
上述第一检测单元,具体用于依据上述识别模型确定上述立体手势。
在一种可选的实现方式中,上述终端设备还包括:
更新单元1407,用于利用上述立体手势对应的动作数据更新上述识别模型。
参见图15，是本发明另一实施例提供的一种终端设备的示意框图。如图15所示，本实施例中的终端设备可以包括：一个或多个处理器1501；一个或多个输入设备1502，一个或多个输出设备1503和存储器1504。上述处理器1501、输入设备1502、输出设备1503和存储器1504通过总线1505连接。存储器1504用于存储计算机程序，上述计算机程序包括程序指令，处理器1501用于执行存储器1504存储的程序指令。其中，处理器1501被配置用于调用上述程序指令执行：检测立体手势，上述立体手势与终端设备在立体空间的运动轨迹相关；检测上述终端设备的状态；根据检测的上述终端设备的立体手势和状态，控制上述终端设备执行相应的操作。
应当理解,在本发明实施例中,所称处理器1501可以是中央处理单元(Central Processing Unit,CPU),该处理器还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。上述处理器1501可以实现如图14所示的控制单元1403、第二检测单元1402以及更新单元1407的功能。相应地,上述处理器1501也可以实现前述方法实施例中其他的数据处理功能以及控制功能。
输入设备1502可以包括触控板、指纹采集传感器（用于采集用户的指纹信息和指纹的方向信息）、麦克风、重力传感器、陀螺仪等，输出设备1503可以包括显示器（LCD等）、扬声器等。上述重力传感器用于检测加速度，上述陀螺仪用于检测角速度。上述输入设备1502可以实现如图14所示的第一检测单元1401、接收单元1405以及采集单元1406的功能。具体的，上述输入设备1502可通过触控板接收用户发送的指令；通过重力传感器、陀螺仪等获取运动数据。
该存储器1504可以包括只读存储器和随机存取存储器，并向处理器1501提供指令和数据。存储器1504的一部分还可以包括非易失性随机存取存储器。例如，存储器1504还可以存储设备类型的信息。上述存储器1504可以实现如图14所示的存储单元1404的功能。
具体实现中,本发明实施例中所描述的处理器1501、输入设备1502、输出设备1503以及存储器1504可执行本发明实施例提供的终端设备的控制方法所描述的实现方式,也可执行本发明实施例所描述的终端设备的实现方式,在此不再赘述。
在本发明的另一实施例中提供一种计算机可读存储介质,上述计算机可读存储介质存储有计算机程序,上述计算机程序包括程序指令,上述程序指令被处理器执行时实现:采集动作数据,上述动作数据为上述终端设备的运动状态数据;检测立体手势,上述立体手势与终端设备在立体空间的运动轨迹相关;检测上述终端设备的状态;根据检测的上述终端设备的立体手势和状态,控制上述终端设备执行相应的操作。
上述计算机可读存储介质可以是前述任一实施例上述的装置的内部存储单元,例如装置的硬盘或内存。上述计算机可读存储介质也可以是上述装置的外部存储设备,例如上述装置上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,上述计算机可读存储介质还可以既包括上述装置的内部存储单元也包括外部存储设备。上述计算机可读存储介质用于存储上述计算机程序以及上述装置所需的其他程序和数据。上述计算机可读存储介质还可以用于暂时地存储已经输出或者将要输出的数据。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到各种等效的修改或替换,这些修改或替换都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以权利要求的保护范围为准。

Claims (24)

  1. 一种终端设备的控制方法,其特征在于,包括:
    检测立体手势,所述立体手势与终端设备在立体空间的运动轨迹相关;
    检测所述终端设备的状态;
    根据检测的所述终端设备的立体手势和状态,控制所述终端设备执行相应的操作。
  2. 根据权利要求1所述的方法,其特征在于,
    所述终端设备的状态包括第一应用场景或第二应用场景;
    所述根据检测的所述终端设备的立体手势和状态,控制所述终端设备执行相应的操作包括:
    根据检测的所述立体手势和所述第一应用场景,控制所述终端设备执行第一操作;
    根据检测的所述立体手势和所述第二应用场景,控制所述终端设备执行第二操作;
    所述第一操作不同于所述第二操作。
  3. 根据权利要求1所述的方法,其特征在于,
    所述终端设备的状态包括第一应用场景或第二应用场景;
    所述根据检测的所述终端设备的立体手势和状态,控制所述终端设备执行相应的操作包括:
    根据检测的所述立体手势和所述第一应用场景,控制所述终端设备执行第一操作;
    根据检测的所述立体手势和所述第二应用场景,控制所述终端设备执行第二操作;
    所述第一操作与所述第二操作相同。
  4. 根据权利要求1至3任意一项所述的方法,其特征在于,在根据检测的所述终端设备的立体手势和状态,控制所述终端设备执行相应的操作之前,所述方法还包括:
    所述终端设备检测并保存用户设置的立体手势与所述终端设备执行的操作的对应关系。
  5. 根据权利要求4所述的方法,其特征在于,
    所述终端设备的状态包括处于第一时间段或第二时间段;
    所述根据检测的所述终端设备的立体手势和状态，控制所述终端设备执行相应的操作包括：
    在所述终端设备处于所述第一时间段的情况下,控制所述终端设备执行第一操作;
    在所述终端设备处于所述第二时间段的情况下,控制所述终端设备执行第二操作;
    所述第一操作不同于所述第二操作。
  6. 根据权利要求4所述的方法,其特征在于,所述根据检测的所述终端设备的立体手势和状态,控制所述终端设备执行相应的操作包括:
    获得所述终端设备的运动轨迹对应的目标字符;
    根据所述终端设备的状态获得所述目标字符对应的操作;
    控制所述终端设备执行所述目标字符对应的操作。
  7. 根据权利要求4所述的方法,其特征在于,所述立体手势为摇动所述终端设备的幅度超过第一阈值且摇动所述终端设备的频率超过第二阈值的动作;或者,所述立体手势为翻转所述终端设备的动作;所述控制所述终端设备执行相应的操作包括:
    截取所述终端设备显示的界面;
    或者,调节所述终端设备的屏幕亮度;
    或者,启动或关闭所述终端设备上的目标应用。
  8. 根据权利要求4所述的方法,其特征在于,所述根据检测的所述终端设备的立体手势和状态,控制所述终端设备执行相应的操作包括:
    在所述终端设备检测到第一立体手势且处于亮屏状态下,控制所述终端设备执行截屏操作;
    或者,在所述终端设备检测到第二立体手势且处于音乐播放状态下,控制所述终端设备执行音量调节操作;
    或者,在所述终端设备检测到第三立体手势且处于视频播放状态下,控制所述终端设备执行亮度调节操作;
    或者,在所述终端设备检测到第四立体手势且所处环境的照度小于第一照度的情况下,控制所述终端设备启动手电筒功能;
    或者,在所述终端设备检测到第五立体手势且所处环境的照度大于第二照度的情况下,控制所述终端设备启动拍照功能。
  9. 根据权利要求2所述的方法,其特征在于,所述第一操作为启动第一应用,第二操作为启动第二应用;所述第一应用场景为亮屏状态,所述第二应用场景为息屏状态;
    或者,所述第一操作为截屏操作,所述第二操作为亮度调节操作;所述第一应用场景为显示游戏界面,所述第二应用场景为显示视频界面。
  10. 根据权利要求1-9任意一项所述的方法,其特征在于,所述根据检测的所述终端设备的立体手势和状态,控制所述终端设备执行相应的操作之前,所述方法还包括:
    在接收到立体手势采集指令后,采集训练数据,所述训练数据为N个参考立体手势对应的N个动作数据,所述N个参考立体手势均对应所述立体手势;
    采用神经网络算法对所述训练数据进行训练,得到所述立体手势对应的识别模型;
    接收设置指令,依据所述设置指令设置所述立体手势对应的操作;
    所述检测立体手势包括:
    依据所述识别模型确定所述立体手势。
  11. 根据权利要求10所述的方法,其特征在于,所述根据检测的所述终端设备的立体手势和状态,控制所述终端设备执行相应的操作之后,所述方法还包括:
    利用所述立体手势对应的动作数据更新所述识别模型。
  12. 一种终端设备,其特征在于,包括:
    第一检测单元,用于检测立体手势,所述立体手势与终端设备在立体空间的运动轨迹相关;
    第二检测单元,用于检测所述终端设备的状态;
    控制单元,用于根据检测的所述终端设备的立体手势和状态,控制所述终端设备执行相应的操作。
  13. 根据权利要求12所述的终端设备,其特征在于,所述终端设备的状态包括第一应用场景或第二应用场景;
    所述控制单元,具体用于根据检测的所述立体手势和所述第一应用场景,控制所述终端设备执行第一操作;或具体用于根据检测的所述立体手势和所述第二应用场景,控制所述终端设备执行第二操作;所述第一操作不同于所述第二操作。
  14. 根据权利要求12所述的终端设备,其特征在于,所述终端设备的状态包括第一应用场景或第二应用场景;
    所述控制单元,具体用于根据检测的所述立体手势和所述第一应用场景,控制所述终端设备执行第一操作;或具体用于根据检测的所述立体手势和所述第二应用场景,控制所述终端设备执行第二操作;所述第一操作与所述第二操作相同。
  15. 根据权利要求12至14任意一项所述的终端设备,其特征在于,所述第一检测单元,还用于检测用户设置的立体手势;所述终端设备还包括:
    存储单元,用于保存用户设置的立体手势与所述终端设备执行的操作的对应关系。
  16. 根据权利要求15所述的终端设备,其特征在于,所述终端设备的状态包括处于第一时间段或第二时间段;
    所述控制单元,具体用于在所述终端设备处于所述第一时间段的情况下,控制所述终端设备执行第一操作;或具体用于在所述终端设备处于所述第二时间段的情况下,控制所述终端设备执行第二操作;所述第一操作不同于所述第二操作。
  17. 根据权利要求15所述的终端设备,其特征在于,所述第一检测单元,具体用于获得所述终端设备的运动轨迹对应的目标字符;根据所述终端设备的状态获得所述目标字符对应的操作;
    所述控制单元,具体用于控制所述终端设备执行所述目标字符对应的操作。
  18. 根据权利要求15所述的终端设备,其特征在于,所述立体手势为摇动所述终端设备的幅度超过第一阈值且摇动所述终端设备的频率超过第二阈值的动作;或者,所述立体手势为翻转所述终端设备的动作;
    所述控制单元,具体用于截取所述终端设备显示的界面;
    或者,具体用于调节所述终端设备的屏幕亮度;
    或者,具体用于启动或关闭所述终端设备上的目标应用。
  19. 根据权利要求15所述的终端设备,其特征在于,
    所述控制单元,具体用于在所述终端设备检测到第一立体手势且处于亮屏状态下,控制所述终端设备执行截屏操作;
    或者,具体用于在所述终端设备检测到第二立体手势且处于音乐播放状态下,控制所述终端设备执行音量调节操作;
    或者,具体用于在所述终端设备检测到第三立体手势且处于视频播放状态下,控制所述终端设备执行亮度调节操作;
    或者，具体用于在所述终端设备检测到第四立体手势且所处环境的照度小于第一照度的情况下，控制所述终端设备启动手电筒功能；
    或者,具体用于在所述终端设备检测到第五立体手势且所处环境的照度大于第二照度的情况下,控制所述终端设备启动拍照功能。
  20. 根据权利要求13所述的终端设备,其特征在于,所述第一操作为启动第一应用,第二操作为启动第二应用;所述第一应用场景为亮屏状态,所述第二应用场景为息屏状态;
    或者,所述第一操作为截屏操作,所述第二操作为亮度调节操作;所述第一应用场景为显示游戏界面,所述第二应用场景为显示视频界面。
  21. 根据权利要求12至20任意一项所述的终端设备,其特征在于,所述终端设备还包括:
    接收单元,用于接收立体手势采集指令;
    采集单元,用于采集训练数据,所述训练数据为N个参考立体手势对应的N个动作数据,所述N个参考立体手势均对应所述立体手势;
    所述接收单元,还用于接收设置指令,依据所述设置指令设置所述立体手势对应的操作;
    所述第一检测单元,具体用于依据所述识别模型确定所述立体手势。
  22. 根据权利要求21所述的终端设备，其特征在于，所述终端设备还包括：
    更新单元,用于利用所述立体手势对应的动作数据更新所述识别模型。
  23. 一种终端设备,其特征在于,包括处理器和存储器,所述处理器和存储器相互连接,其中,所述存储器用于存储计算机程序,所述计算机程序包括程序指令,所述处理器被配置用于调用所述程序指令,执行如权利要求1-11任一项所述的方法。
  24. 一种计算机可读存储介质，其特征在于，所述计算机可读存储介质存储有计算机程序，所述计算机程序包括程序指令，所述程序指令当被处理器执行时使所述处理器执行如权利要求1-11任一项所述的方法。