WO2021000683A1 - Gesture recognition method, device, and computer-readable storage medium - Google Patents


Info

Publication number
WO2021000683A1
Authority
WO
WIPO (PCT)
Prior art keywords
gesture, event, full screen, state
Prior art date
Application number
PCT/CN2020/093807
Other languages
English (en)
French (fr)
Inventor
栾岚
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Priority to US17/624,058 (published as US20220357842A1)
Priority to EP20834882.1A (published as EP3985485A4)
Priority to KR1020217034210 (published as KR20210139428A)
Publication of WO2021000683A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces

Definitions

  • the embodiments of the present invention relate to, but are not limited to, terminal technology, in particular to a gesture recognition method, device, and computer-readable storage medium.
  • The operation of buttons in smart terminals has undergone a transition from physical buttons to software virtual buttons, which maximize the use of the screen.
  • The latest trend is to use the entire screen as the user's operating space, completely abandoning on-screen buttons: instead of a button prompt at a fixed position, the user's gesture is recognized directly to respond to the operation.
  • In this way, software can make full use of the phone screen, display its own interface to the greatest extent, and greatly enhance the user experience.
  • One is implemented directly at the system level: it has higher processing efficiency, but its interaction with the desktop is poor and the disconnection from it is severe. The other is implemented on the application side, as part of the application itself.
  • The application-side scheme is more flexible than the system-level scheme: it can freely implement animations and interacts more strongly with the user, but it is out of touch with the system, and once the application is in an inactive state it cannot recognize gestures.
  • The embodiments of the present invention provide a gesture recognition method and device, which aim to solve, at least to a certain extent, one of the technical problems in the related technology.
  • The embodiment of the present invention provides a gesture recognition method, including: at the system level, capturing a gesture event to obtain parameter information of the gesture event; and, through the desktop application, when it is determined from the parameter information that the gesture event satisfies the full-screen gesture recognition requirements, recognizing the full-screen gesture according to the parameter information of the gesture event and switching state according to the recognized full-screen gesture.
  • The embodiment of the present invention provides a gesture recognition device, which includes: an event capture module, used at the system level to capture gesture events and obtain their parameter information; a gesture recognition module, implemented through the desktop application, which recognizes the full-screen gesture according to the parameter information when the gesture event meets the full-screen gesture recognition requirements; and a state switching module, implemented through the desktop application, which switches state according to the recognized full-screen gesture.
  • An embodiment of the present invention provides a gesture recognition device, which includes a processor and a computer-readable storage medium.
  • The computer-readable storage medium stores instructions which, when executed by the processor, implement any one of the foregoing gesture recognition methods.
  • the embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of any of the aforementioned gesture recognition methods are implemented.
  • FIG. 1 is a flowchart of a gesture recognition method proposed by an embodiment of the present invention
  • Fig. 2 is an exemplary flowchart of a gesture recognition process according to an embodiment of the present invention
  • FIG. 3 is an exemplary flow chart of the process of gesture recognition and state switching by desktop gesture consumers according to an embodiment of the present invention
  • FIG. 4 is an exemplary flow chart of the process of gesture recognition and state switching by non-desktop gesture consumers according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of the structural composition of a gesture recognition device provided by another embodiment of the present invention.
  • Fig. 6 is an exemplary schematic diagram of a gesture recognition device according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of the hardware structure of a gesture recognition device according to an embodiment of the present invention.
  • the current mainstream full-screen gesture recognition solutions mainly include the following two.
  • an embodiment of the present invention provides a gesture recognition method, including:
  • Step 100 Realize at the system level: capture the gesture event to obtain parameter information of the gesture event.
  • A high-level window (i.e., a gesture recognition hot zone) is used at the system level to capture gesture events.
  • the solution can recognize full-screen gestures of desktop applications and full-screen gestures of non-desktop applications, and will not be unable to recognize full-screen gestures because the foreground application is a non-desktop application.
  • the gesture event may be a touch event or a non-touch event, which is not limited in the embodiment of the present invention.
  • the parameter information of the gesture event is contained in the MotionEvent object.
  • The parameter information of the gesture event includes all the parameter information of the user operating the touch screen, including the position, the occurrence of the event, the event type, the finger information of the touch, and so on. These parameters are important for gesture recognition.
  • The position refers to the positions of all touch points of the gesture event.
  • The occurrence of an event refers to whether an event has occurred.
  • The event type refers to whether the event that occurred is a down event, a move event, or an up event.
  • The finger information of the touch refers to the number of fingers with which the touch occurred, and so on.
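  • The parameters above can be collected into a simple record. A minimal Python sketch follows; the field names and action codes are illustrative assumptions, not the actual Android MotionEvent API:

```python
from dataclasses import dataclass
from typing import Tuple

# Action codes mirroring the down / up / move event types named in the text.
ACTION_DOWN, ACTION_UP, ACTION_MOVE = 0, 1, 2

@dataclass
class GestureEvent:
    position: Tuple[float, float]  # coordinates of the touch point
    timestamp_ms: int              # when the event occurred
    action: int                    # event type: down, up, or move
    pointer_count: int             # number of fingers touching

# Example: a press event near the bottom of a 1080x2400 portrait screen.
down = GestureEvent(position=(540.0, 2380.0), timestamp_ms=0,
                    action=ACTION_DOWN, pointer_count=1)
```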
  • Step 101 Implement through a desktop application: when it is determined that the gesture event meets the full-screen gesture recognition requirement according to the parameter information of the gesture event, the full-screen gesture recognition is performed according to the parameter information of the gesture event, and the state is switched according to the recognized full-screen gesture.
  • Full-screen gestures refer to actions originally defined by the system layer, mainly: switching to the desktop (Home), displaying recently used applications (Recent), and returning to the previous interface (Back).
  • The full-screen gesture recognition requirements include: the number of touching fingers is 1, and the movement direction of the gesture event is a preset direction (up, down, left, right, and so on).
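  • The requirement check above (single finger, movement along a preset direction) might be sketched as follows; the dominant-axis test for direction is an assumption for illustration:

```python
def meets_fullscreen_requirements(pointer_count, dx, dy, preset_direction="up"):
    """Return True when one finger is down and the movement follows the
    preset direction. Screen coordinates: y grows downward, so an upward
    swipe has negative dy. The dominant-axis comparison is an assumption."""
    if pointer_count != 1:
        return False
    if preset_direction == "up":
        return dy < 0 and abs(dy) >= abs(dx)
    if preset_direction == "down":
        return dy > 0 and abs(dy) >= abs(dx)
    if preset_direction == "left":
        return dx < 0 and abs(dx) >= abs(dy)
    if preset_direction == "right":
        return dx > 0 and abs(dx) >= abs(dy)
    return False
```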
  • performing full-screen gesture recognition according to the parameter information of the gesture event includes:
  • When it is detected that the gesture event needs to be consumed, and the gesture event includes a press event, a move event, and a lift event, determine the movement distance and/or movement speed of the lift event relative to the press event according to the parameter information of the lift event;
  • The full-screen gesture is then recognized according to the movement distance and/or movement speed of the lift event relative to the press event.
  • Whether the gesture event needs to be consumed is detected according to the following conditions:
  • The current state is in normal mode (non-editing mode), and the press event occurred in the gesture recognition hot zone.
  • the gesture recognition hot zone refers to a predefined area, which may be any area on the screen.
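  • The consumption conditions above can be sketched as a simple predicate; the rectangle representation and parameter names are illustrative assumptions:

```python
def should_consume(mode, press_pos, hot_zone):
    """Consume the gesture only when the desktop is in normal (non-editing)
    mode and the press event landed inside the gesture recognition hot zone,
    given here as a (left, top, right, bottom) rectangle."""
    x, y = press_pos
    left, top, right, bottom = hot_zone
    return mode == "normal" and left <= x <= right and top <= y <= bottom
```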
  • When the foreground application is a non-desktop application at the time the gesture event is captured, and the gesture event is a press event, the method further includes: switching the current state to the state of entering the most recent application.
  • Recognizing the full-screen gesture according to the movement distance and/or movement speed of the lift event relative to the press event includes at least one of the following:
  • when the movement distance is greater than a first preset distance threshold, determining that the gesture is a first gesture, such as a pull-up gesture;
  • when the movement speed is greater than a preset speed threshold, determining that the gesture is a second gesture, such as a quick pull-up gesture;
  • otherwise, the process is ended or the relevant parameters of gesture recognition are reinitialized, such as the current state of the mobile terminal, the first preset distance threshold, the second preset distance threshold, and the preset speed threshold.
  • Switching state according to the recognized gesture includes at least one of the following:
  • when the gesture is the first gesture, such as a pull-up gesture, determining that the current state needs to be switched to the state of entering the most recent application, and executing the motion effect of switching from the current state to that state;
  • when the gesture is the second gesture, such as a quick pull-up gesture, determining that the current state needs to be switched to the desktop;
  • executing the motion effect of switching from the current state to the desktop, and moving the current page of the desktop to the designated homepage.
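  • The classification and state-switching rules above can be sketched as two small functions; the string labels and threshold names are illustrative assumptions:

```python
def recognize_fullscreen_gesture(distance, speed, t_slide, t_speed):
    """Classify a completed swipe by the lift event's movement relative to
    the press event: distance beyond the first preset threshold -> first
    gesture (pull-up); otherwise speed beyond the preset speed threshold ->
    second gesture (quick pull-up); otherwise no full-screen gesture."""
    if distance > t_slide:
        return "first"
    if speed > t_speed:
        return "second"
    return None

def target_state(gesture):
    """Map the recognized gesture to its target state: the first gesture
    enters the most recent application (Srecent), the second switches to
    the desktop (Shome)."""
    return {"first": "Srecent", "second": "Shome"}.get(gesture)
```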
  • the first preset distance threshold is related to the display size of the current screen (that is, the number of pixels contained in each dp) and the display direction of the screen.
  • the position coordinate Pdown of the press event needs to be recorded.
  • The method further includes: reinitializing the relevant parameters of gesture recognition, such as the current state of the mobile terminal, the first preset distance threshold, the second preset distance threshold, and the preset speed threshold.
  • the method further includes: synchronizing the current state.
  • Performing full-screen gesture recognition according to the parameter information of the gesture event further includes:
  • when the gesture event includes only press and move events and the sum of the movement distances of the most recent gesture events is less than a second preset distance threshold, determining that the gesture is a third gesture, such as a pause gesture.
  • the state switching according to the recognized gesture includes:
  • When the gesture is the third gesture, such as a pause gesture, it is determined that the current state needs to be switched to the state of entering the most recent application, and the motion effect of switching from the current state to that state is executed.
  • the performing full-screen gesture recognition according to the parameter information of the gesture event further includes:
  • the state switching according to the recognized full-screen gesture includes:
  • When the full-screen gesture is the second gesture, it is determined that the current state needs to be switched to the desktop, the motion effect of switching from the current state to the desktop is executed, and the current page of the desktop is moved to the designated homepage.
  • the method further includes: resetting all gesture recognition states.
  • Before capturing the gesture event to obtain the parameter information of the gesture event, the method further includes: initializing the entire system.
  • initializing the entire system includes:
  • Obtaining the current navigation bar type from the system, and determining whether the current navigation bar type is the full-screen gesture type.
  • The current state of the mobile terminal includes: the screen size of the mobile terminal, the current orientation (horizontal or vertical screen) state of the mobile terminal, and the gesture recognition hot zone parameters of the mobile terminal.
  • The orientation state of the mobile terminal is either the horizontal (landscape) screen state or the vertical (portrait) screen state.
  • The gesture recognition hot zone of the mobile terminal refers to a rectangular area defined in association with the screen size and the orientation state of the mobile terminal.
  • said initializing the entire system further includes:
  • Register a monitoring object, monitor through it whether the navigation bar type is switched, and, when a switch is detected, re-execute the step of obtaining the current navigation bar type from the system.
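  • The hot-zone part of this initialization can be sketched as a rectangle derived from the screen size and orientation. Treating the hot zone as a band along the bottom edge, and the band height, are assumptions for illustration:

```python
def init_hot_zone(width, height, orientation, band_px=100):
    """Return the gesture recognition hot zone as (left, top, right, bottom).
    width/height are the portrait-mode screen dimensions; in landscape the
    axes are swapped. The bottom-edge band and its height are assumptions."""
    if orientation == "portrait":
        return (0, height - band_px, width, height)
    # landscape: the long edge is horizontal, so swap the dimensions
    return (0, width - band_px, height, width)
```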
  • From the first point of the user's operation, the operation is planned and designed uniformly, and the recognition of full-screen gestures and the state switching are placed on the application side.
  • The visual-effect design space of each page is larger: more visual effects and interaction scenarios that better match user operations can be designed, while interacting with the system level. Through the system level, other visual-effect resources are coordinated during full-screen gesture recognition and state switching, so that the two are not out of touch.
  • the gesture recognition process includes:
  • Step 200 The gesture event is captured at the system level, and a MotionEvent object containing the parameter information of the gesture event is generated.
  • The parameter information of the gesture event includes all the parameter information of the user operating the touch screen, including the position, the occurrence of the event, the event type, and the finger information of the touch. These parameters are important for gesture recognition.
  • The position refers to the positions of all touch points of the gesture event.
  • The occurrence of an event refers to whether an event has occurred.
  • The event type refers to whether the event that occurred is a down event, a move event, or an up event.
  • The finger information of the touch refers to the number of fingers with which the touch occurred, and so on.
  • Step 201 When the full-screen gesture recognition requirements are met, a gesture event consumer is selected according to the current operation scenario, and the MotionEvent object is forwarded to the selected consumer. Specifically: when the foreground application is a desktop application, the desktop gesture consumer is selected, the MotionEvent object is forwarded to it, and steps 202 and 204 are performed; when the foreground application is a non-desktop application, the non-desktop gesture consumer is selected, the MotionEvent object is forwarded to it, and steps 203 and 204 are performed.
  • the operation scenario includes at least one of the following:
  • the desktop application is in the foreground and recognizes full-screen gestures, that is, full-screen gestures are recognized on the desktop;
  • a non-desktop application is in the foreground and recognizes full-screen gestures, that is, the user has opened the non-desktop application and it is in an activated state.
  • Step 202 Perform full-screen gesture recognition and state switching through the desktop gesture consumer.
  • Step 203 Perform full-screen gesture recognition and state switching through non-desktop gesture consumers.
  • Step 204 Perform state synchronization. Specifically, the switched state is synchronized to the system and other applications.
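  • Step 201's consumer selection can be sketched as a small dispatcher; the consumer callables and the string event stand-in are illustrative assumptions:

```python
def dispatch_gesture_event(motion_event, foreground_is_desktop,
                           desktop_consumer, non_desktop_consumer):
    """Select the gesture event consumer according to the operation scenario
    and forward the MotionEvent-like object to it; the caller then performs
    state synchronization (step 204)."""
    consumer = desktop_consumer if foreground_is_desktop else non_desktop_consumer
    return consumer(motion_event)
```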
  • The process of full-screen gesture recognition and state switching by the desktop gesture consumer includes:
  • Step 300 Monitor whether the gesture event needs to be consumed. Monitoring rules: the current state allows full-screen gestures, that is, the desktop is in normal (non-editing) mode; and the press event occurred in the gesture recognition hot zone.
  • the gesture recognition hot zone refers to a predefined area, which can be any area on the screen.
  • Step 301 When it is detected that a gesture event needs to be consumed, it is necessary to first identify the press (EVENT_DOWN) event, the movement event and the lift event in the gesture event, record the position coordinate Pdown of the EVENT_DOWN event, and initialize the first preset distance threshold.
  • the first preset distance threshold is defined here as Tslide.
  • the first preset distance threshold is related to the current display size (that is, the number of pixels included in each dp) and the current display direction of the screen.
  • Step 302 When the gesture event includes a press event, a move event, and a lift event, determine whether the movement distance of the lift event relative to the press event is greater than the first preset distance threshold. When the movement distance is greater than the first preset distance threshold, it is determined that the current state needs to be switched to the state of entering the most recent application Srecent, and the motion effect from the current state to Srecent is executed; when the movement distance is less than or equal to the first preset distance threshold, continue to step 303.
  • Step 303 Determine whether the moving speed of the lifting event relative to the pressing event is greater than a preset speed threshold.
  • When the moving speed is greater than the preset speed threshold, it is determined that the current state needs to be switched to the desktop Shome, the motion effect of switching from the current state to Shome is executed, and the current page of the desktop is moved to the homepage specified by the user; when the moving speed is less than or equal to the preset speed threshold, step 306 is executed.
  • Step 304 When the gesture event includes only the press event and move events, the user's action is still in progress, that is, the EVENT_UP event has not been captured (the user has not yet raised the finger), and subsequent events need to be recognized again.
  • the specific identification step includes step 305.
  • Step 305 When the movement distance of the last move event relative to the press event is greater than the first preset distance threshold, the judgment conditions above cannot yet be applied, and the parameter information of the most recent n gesture events needs to be retrieved from the buffer.
  • Monitor whether the sum of the movement distances of the n gesture events, Dad, is less than the second preset distance threshold. When it is, determine that the current state needs to be switched to the state of entering the most recent application Srecent, execute the motion effect of switching from the current state to that state, and reset all gesture recognition states without waiting for the EVENT_UP event. When the sum of the movement distances of the n gesture events is greater than or equal to the second preset distance threshold, return to step 300 and continue.
  • When the movement speed of the last move event relative to the press event is greater than the preset speed threshold, a mark bit is recorded and the event is consumed; when the mark bit is T, it is determined that the current state needs to be switched to the desktop Shome, the animation of switching from the current state to Shome is executed, and the current page of the desktop is moved to the homepage specified by the user.
  • Step 306 Reset all gesture recognition states, and wait for the next recognition.
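  • Steps 300-306 can be condensed into one sketch. Event timing is elided in the text, so "speed" below is approximated by the last move delta, and the window size n, the threshold names, and the return labels are all illustrative assumptions:

```python
def desktop_consumer(events, t_slide, t_speed, t_pause, n=3):
    """Decide the state switch for a sequence of (x, y, action) tuples.
    Completed swipe (ends with 'up'): distance > t_slide -> Srecent;
    otherwise last-delta 'speed' > t_speed -> Shome; otherwise reset
    (step 306). Finger still down: once past t_slide, a drift of the
    last n moves below t_pause is the pause gesture -> Srecent without
    waiting for EVENT_UP (step 305)."""
    down_y = events[0][1]
    move_ys = [e[1] for e in events if e[2] == "move"]
    if events[-1][2] == "up":
        distance = abs(events[-1][1] - down_y)
        speed = abs(move_ys[-1] - move_ys[-2]) if len(move_ys) >= 2 else 0
        if distance > t_slide:
            return "Srecent"
        if speed > t_speed:
            return "Shome"
        return "reset"
    # Finger still down: pause detection over the last n move events.
    last = move_ys[-n:]
    if abs(last[-1] - down_y) > t_slide:
        drift = sum(abs(b - a) for a, b in zip(last, last[1:]))
        if drift < t_pause:
            return "Srecent"
    return "keep-monitoring"
```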
  • The process of full-screen gesture recognition and state switching by the non-desktop gesture consumer includes:
  • Step 400 Monitor whether the gesture event needs to be consumed. Monitoring rules: the current state allows full-screen gestures, that is, the desktop is in normal (non-editing) mode; the press event occurred in the gesture recognition hot zone; and the foreground application is a non-desktop application.
  • the gesture recognition hot zone refers to a predefined area, which can be any area on the screen.
  • Step 401 When it is detected that a gesture event needs to be consumed, it is necessary to first identify a press (EVENT_DOWN) event, a movement event and a lift event in the gesture event, record the position coordinate Pdown of the EVENT_DOWN event, and initialize the first preset distance threshold.
  • the first preset distance threshold is defined here as Tslide.
  • the first preset distance threshold is related to the current display size (that is, the number of pixels included in each dp) and the current display direction of the screen.
  • Step 402 When the press event is received, switch the current state to the state Srecent of entering the most recent application.
  • Step 403 When the gesture event includes a press event, a move event, and a lift event, determine whether the movement distance of the lift event relative to the press event is greater than the first preset distance threshold. When the movement distance is greater than the first preset distance threshold, it is determined that the current state needs to be switched to the state of entering the most recent application Srecent, and the motion effect from the current state to Srecent is executed; when the movement distance is less than or equal to the first preset distance threshold, continue to step 404.
  • Step 404 Determine whether the moving speed of the lifting event relative to the pressing event is greater than a preset speed threshold.
  • When the moving speed is greater than the preset speed threshold, it is determined that the current state needs to be switched to the desktop Shome, the motion effect of switching from the current state to Shome is executed, and the current page of the desktop is moved to the homepage specified by the user; when the moving speed is less than or equal to the preset speed threshold, step 407 is executed.
  • Step 405 When the gesture event includes only the press event and move events, the user's action is still in progress, that is, the EVENT_UP event has not been captured (the user has not yet raised the finger), and subsequent events need to be recognized again.
  • the specific identification step includes step 406.
  • Step 406 When the movement distance of the last move event relative to the press event is greater than the first preset distance threshold, the judgment conditions above cannot yet be applied, and the parameter information of the most recent n gesture events needs to be retrieved from the buffer.
  • Monitor whether the sum of the movement distances of the n gesture events, Dad, is less than the second preset distance threshold. When it is, determine that the current state needs to be switched to the state of entering the most recent application Srecent, execute the motion effect of switching from the current state to that state, and reset all gesture recognition states without waiting for the EVENT_UP event. When the sum of the movement distances of the n gesture events is greater than or equal to the second preset distance threshold, return to step 400 and continue.
  • When the movement distance of the last gesture event relative to the press event is less than or equal to the first preset distance threshold, or the movement speed of the last move event relative to the press event is less than or equal to the preset speed threshold, return to step 400 and continue monitoring move (EVENT_MOVE) or EVENT_UP events.
  • When the movement speed of the last move event relative to the press event is greater than the preset speed threshold, a mark bit is recorded and the event is consumed; when the mark bit is T, it is determined that the current state needs to be switched to the desktop Shome, the animation of switching from the current state to Shome is executed, and the current page of the desktop is moved to the homepage specified by the user.
  • Step 407 Reset all gesture recognition states, and wait for the next recognition.
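  • The step that distinguishes the non-desktop flow from the desktop flow is step 402: the state moves to Srecent as soon as the press event arrives. A minimal sketch, with the string labels as assumptions:

```python
def non_desktop_on_event(action, current_state):
    """Step 402 sketch: a non-desktop gesture consumer switches the current
    state to Srecent immediately on the press event, so the recent-apps
    transition can begin before the swipe completes; other events are left
    to the later recognition steps (403 onward)."""
    if action == "down":
        return "Srecent"
    return current_state
```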
  • another embodiment of the present invention provides a gesture recognition device, which includes a processor 701 and a computer-readable storage medium 702.
  • The computer-readable storage medium 702 stores instructions which, when executed by the processor 701, implement any of the aforementioned gesture recognition methods.
  • Another embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of any of the aforementioned gesture recognition methods are implemented.
  • another embodiment of the present invention provides a gesture recognition device, including:
  • the event capture module 501 is used to implement at the system level: to capture gesture events to obtain parameter information of the gesture events;
  • the gesture recognition module 502 is configured to implement through a desktop application: when it is determined that the gesture event meets the requirements of full-screen gesture recognition according to the parameter information of the gesture event, perform full-screen gesture recognition according to the parameter information of the gesture event;
  • the state switching module 503 is used for implementing through a desktop application: performing state switching according to the recognized full-screen gesture.
  • A high-level window (i.e., a gesture recognition hot zone) is used at the system level to capture gesture events.
  • the solution can recognize full-screen gestures of desktop applications and full-screen gestures of non-desktop applications, and will not be unable to recognize full-screen gestures because the foreground application is a non-desktop application.
  • the gesture event may be a touch event or a non-touch event, which is not limited in the embodiment of the present invention.
  • the parameter information of the gesture event is contained in the MotionEvent object.
  • The parameter information of the gesture event includes all the parameter information of the user operating the touch screen, including the position, the occurrence of the event, the event type, the finger information of the touch, and so on. These parameters are important for gesture recognition.
  • The position refers to the positions of all touch points of the gesture event.
  • The occurrence of an event refers to whether an event has occurred.
  • The event type refers to whether the event that occurred is a down event, a move event, or an up event.
  • The finger information of the touch refers to the number of fingers with which the touch occurred, and so on.
  • Full-screen gestures refer to actions originally defined by the system layer, mainly: switching to the desktop (Home), displaying recently used applications (Recent), and returning to the previous interface (Back).
  • The full-screen gesture recognition requirements include: the number of touching fingers is 1, and the movement direction of the gesture event is a preset direction (up, down, left, right, and so on).
  • the gesture recognition module 502 is specifically configured to implement full-screen gesture recognition according to the parameter information of the gesture event in the following manner:
  • When it is detected that the gesture event needs to be consumed, and the gesture event includes a press event, a move event, and a lift event, determine the movement distance and/or movement speed of the lift event relative to the press event according to the parameter information of the lift event;
  • The full-screen gesture is then recognized according to the movement distance and/or movement speed of the lift event relative to the press event.
  • the gesture recognition module 502 is specifically configured to determine that a gesture event needs to be consumed according to the following conditions:
  • the current state is in normal mode or non-editing mode, and the press event occurs in the gesture recognition hot zone.
  • the gesture recognition hot zone refers to a predefined area, which may be any area on the screen.
  • The state switching module 503 is further configured to: when the foreground application is a non-desktop application at the time the gesture event is captured, and the gesture event is a press event, switch the current state to the state of entering the most recent application.
  • The gesture recognition module 502 is specifically configured to recognize the full-screen gesture according to the movement distance and/or movement speed of the lift event relative to the press event in at least one of the following ways:
  • when the movement distance is greater than the first preset distance threshold, determining that the gesture is a first gesture, such as a pull-up gesture;
  • when the movement speed is greater than the preset speed threshold, determining that the gesture is a second gesture, such as a quick pull-up gesture;
  • otherwise, ending the process or reinitializing the relevant parameters of gesture recognition, such as the current state of the mobile terminal, the first preset distance threshold, the second preset distance threshold, and the preset speed threshold.
  • the state switching module 503 is specifically configured to use at least one of the following methods to implement the state switching according to the recognized full-screen gesture:
  • the gesture is the first gesture, such as a pull-up gesture, it is determined that the current state needs to be switched to the state of entering the most recent application, and the motion effect of switching from the current state to the state of entering the most recent application is executed;
  • the gesture is the second gesture, such as a quick pull-up gesture
  • the motion effect of switching from the current state to the desktop is executed, and the current page of the desktop is moved to the designated homepage .
  • the first preset distance threshold is related to the display density of the current screen (that is, the number of pixels contained in each dp) and the display orientation of the screen.
  • the position coordinate Pdown of the press event needs to be recorded.
  • the initialization module 504 is further configured to: after performing state switching according to the recognized full-screen gesture, re-initialize the relevant parameters of gesture recognition, such as the current state of the mobile terminal, the first preset distance threshold, the second preset distance threshold and the preset speed threshold.
  • the state switching module 503 is further used to: synchronize the current state.
  • the gesture recognition module 502 is further configured to:
  • when the gesture event includes only the press event and movement events, and the movement distance of the last captured movement event relative to the press event is greater than the first preset distance threshold, determine the sum of the movement distances of the last captured n movement events according to the parameter information of the n movement events;
  • when the sum of the movement distances of the n movement events is less than a second preset distance threshold, determine that the gesture is a third gesture, such as a pause gesture;
  • the state switching module 503 is further configured to:
  • when the gesture is the third gesture, such as a pause gesture, determine that the current state needs to be switched to the state of entering the recent applications, and execute the motion effect of switching from the current state to the state of entering the recent applications.
  • the gesture recognition module 502 is further configured to:
  • when the gesture event includes only the press event and movement events, and the movement speed of the last captured movement event relative to the press event is greater than the preset speed threshold, determine that the full-screen gesture is the second gesture;
  • the state switching according to the recognized full-screen gesture includes:
  • when the full-screen gesture is the second gesture, determining that the current state needs to be switched to the desktop, executing the motion effect of switching from the current state to the desktop, and moving the current page of the desktop to the designated homepage.
  • the initialization module 504 is specifically configured to:
  • obtain the current navigation bar type from the system when the desktop application is started, and, when the current navigation bar type is the full-screen gesture type, initialize parameters including whether the desktop is currently in the foreground, whether the event distribution pipeline of the system is connected, whether the module pipeline for internal event distribution is initialized, the current state of the mobile terminal, the first preset distance threshold, the second preset distance threshold and the preset speed threshold.
  • the current state of the mobile terminal includes: the screen size of the mobile terminal, the current landscape/portrait state of the mobile terminal, and the gesture recognition hot zone parameters of the mobile terminal.
  • the current landscape/portrait state of the mobile terminal is either the landscape state or the portrait state;
  • the gesture recognition hot zone of the mobile terminal refers to a rectangular area defined in association with the size of the mobile terminal and the landscape/portrait state of the screen.
  • the initialization module 504 is also used to:
  • register a monitoring object, monitor through the monitoring object whether the navigation bar type is switched, and, when a switch of the navigation bar type is detected, re-execute the step of obtaining the current navigation bar type from the system.
  • at the desktop application, the first initiation point of user operations, user operations are uniformly planned and designed, and the recognition of full-screen gestures and the state switching are placed on the application side.
  • the visual effect design space of each page is thus larger: more visual effects and interaction scenarios that better match user operations can be designed, and the application interacts with the system level, through which other visual effect resources are controlled so that they remain coordinated during full-screen gesture recognition and state switching and do not become disjointed.
  • the gesture recognition device further includes an event distribution module 505, which is configured to identify the current operation scene and, according to the current operation scene, distribute the parameter information of the gesture event to either the gesture recognition sub-module 601 of the desktop application or the gesture recognition sub-module 602 of non-desktop applications;
  • when the current operation scene is recognizing full-screen gestures while the desktop application is in the foreground, the parameter information of the gesture event is distributed to the gesture recognition sub-module 601 of the desktop application; when the current operation scene is recognizing full-screen gestures while a non-desktop application is in the foreground, the parameter information of the gesture event is distributed to the gesture recognition sub-module 602 of non-desktop applications;
  • the gesture recognition module 502 includes a gesture recognition sub-module 601 for desktop applications and a gesture recognition sub-module 602 for non-desktop applications;
  • the gesture recognition sub-module 601 of the desktop application is used to perform full-screen gesture recognition on gesture events captured by the desktop application in the foreground;
  • the gesture recognition sub-module 602 of non-desktop applications is used to perform full-screen gesture recognition on gesture events captured by non-desktop applications in the foreground;
  • the state switching module 503 includes a state control sub-module 603 and a motion effect control sub-module 604;
  • the state control sub-module 603 is configured to perform state switching and perform state synchronization according to the gesture recognition result of the gesture recognition sub-module 601 of the desktop application or the gesture recognition sub-module 602 of the non-desktop application;
  • the motion effect control sub-module 604 is used to control the motion effect in the state switching process.
  • the embodiments of the present invention include: implementing at the system level: capturing the gesture event to obtain the parameter information of the gesture event; implementing through the desktop application: when it is determined according to the parameter information of the gesture event that the gesture event meets the full-screen gesture recognition requirements, recognizing the full-screen gesture according to the parameter information and performing state switching according to the recognized full-screen gesture.
  • at the desktop application, the first initiation point of user operations, user operations are uniformly planned and designed, and the recognition of full-screen gestures and the state switching are placed on the application side.
  • the visual effect design space of each page is thus larger: more visual effects and interaction scenarios that better match user operations can be designed, and the application interacts with the system level.
  • through the system level, other visual effect resources are controlled so that they remain coordinated during full-screen gesture recognition and state switching and do not become disjointed.
  • recognizing the full-screen gesture according to the movement distance and/or movement speed of the movement event relative to the press event includes at least one of the following: when the foreground application is the desktop application when the gesture event is captured, recognizing the full-screen gesture of the desktop application according to the movement distance and/or movement speed of the movement event relative to the press event; when the foreground application is a non-desktop application when the gesture event is captured, recognizing the full-screen gesture of the non-desktop application according to the movement distance and/or movement speed of the movement event relative to the press event.
  • the solution can recognize full-screen gestures of desktop applications and full-screen gestures of non-desktop applications, and will not be unable to recognize full-screen gestures because the foreground application is a non-desktop application.
  • Such software may be distributed on a computer-readable medium
  • the computer-readable medium may include a computer storage medium (or non-transitory medium) and a communication medium (or transitory medium).
  • the term computer storage medium includes volatile and non-volatile memory implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data).
  • Computer storage media include but are not limited to RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassette, tape, magnetic disk storage or other magnetic storage devices, or Any other medium used to store desired information and that can be accessed by a computer.
  • communication media usually contain computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A gesture recognition method and device, and a computer-readable storage medium, including: capturing a gesture event to obtain parameter information of the gesture event (100); and, implemented through a desktop application: when it is determined according to the parameter information of the gesture event that the gesture event meets full-screen gesture recognition requirements, recognizing the full-screen gesture according to the parameter information of the gesture event, and performing state switching according to the recognized full-screen gesture (101).

Description

Gesture recognition method and device, and computer-readable storage medium
Cross-reference to related applications
This application is based on, and claims priority to, Chinese patent application No. 201910596336.1 filed on July 3, 2019, the entire contents of which are incorporated herein by reference.
Technical field
Embodiments of the present invention relate to, but are not limited to, terminal technology, and in particular to a gesture recognition method and device and a computer-readable storage medium.
Background
User operation buttons in intelligent terminals have evolved from physical keys to software virtual keys that make the greatest possible use of the screen. The latest trend is to treat the entire screen as a space the user can operate, completely abandoning button prompts that occupy positions on the screen, and to respond to user operations by directly recognizing the user's gestures. In this way, software can make full use of the size of the phone screen, display its interface to the greatest extent, and greatly improve the user experience. There are mainly two related full-screen gesture recognition schemes. One is implemented directly at the system level; its processing efficiency is high, but its interactivity with the desktop is poor and it is seriously disjointed. The other is implemented on the application side; the application itself is more flexible than a system-level implementation and can implement animations freely, with stronger user interactivity, but it is disjointed from the system: once the application is inactive, gestures cannot be recognized.
Summary
Embodiments of the present invention provide a gesture recognition method and device, aiming to solve, at least to some extent, one of the technical problems in the related art.
An embodiment of the present invention provides a gesture recognition method, including: implementing at the system level: capturing a gesture event to obtain parameter information of the gesture event; and implementing through a desktop application: when it is determined according to the parameter information of the gesture event that the gesture event meets full-screen gesture recognition requirements, recognizing the full-screen gesture according to the parameter information of the gesture event, and performing state switching according to the recognized full-screen gesture.
An embodiment of the present invention provides a gesture recognition device, including: an event capture module, configured to implement at the system level: capturing a gesture event to obtain parameter information of the gesture event; a gesture recognition module, configured to implement through a desktop application: when it is determined according to the parameter information of the gesture event that the gesture event meets full-screen gesture recognition requirements, recognizing the full-screen gesture according to the parameter information of the gesture event; and a state switching module, configured to implement through the desktop application: performing state switching according to the recognized full-screen gesture.
An embodiment of the present invention provides a gesture recognition device, including a processor and a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed by the processor, any one of the above gesture recognition methods is implemented.
An embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of any one of the above gesture recognition methods are implemented.
Other features and advantages of the embodiments of the present invention will be set forth in the following description and will in part become apparent from the description, or be understood by implementing the embodiments of the present invention. The objects and other advantages of the embodiments of the present invention can be realized and obtained through the structures particularly pointed out in the description, the claims and the drawings.
Brief description of the drawings
The drawings are used to provide a further understanding of the technical solutions of the embodiments of the present invention, constitute a part of the description, and, together with the embodiments of the present invention, serve to explain the technical solutions of the embodiments of the present invention without limiting them.
Fig. 1 is a flowchart of a gesture recognition method according to an embodiment of the present invention;
Fig. 2 is an exemplary flowchart of a gesture recognition process according to an embodiment of the present invention;
Fig. 3 is an exemplary flowchart of gesture recognition and state switching performed by a desktop gesture consumer according to an embodiment of the present invention;
Fig. 4 is an exemplary flowchart of gesture recognition and state switching performed by a non-desktop gesture consumer according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a gesture recognition device according to another embodiment of the present invention;
Fig. 6 is an exemplary schematic diagram of a gesture recognition device according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the hardware structure of a gesture recognition device according to an embodiment of the present invention.
Detailed description
Embodiments of the present invention will be described in detail below with reference to the drawings. It should be noted that, where no conflict arises, the embodiments of the present invention and the features in the embodiments may be combined with each other arbitrarily.
The steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from that herein.
With the popularity of smartphones, the number of intelligent terminal users worldwide will exceed 2 billion, and with such a huge number, users' demand for differentiation grows daily. Under this change, all kinds of differentiated products keep appearing, from hardware-level changes in the wide variety of phone styles and colors to various customized systems and software, all of which satisfy users' differentiated needs to the greatest extent. At present, the mainstream intelligent terminals have all launched their own full-screen design schemes. The original intention of such a scheme is to treat the entire screen as a space the user can operate, completely abandoning button prompts that occupy positions on the screen, leaving more space for applications and giving users a better visual experience. However, mainstream design schemes leave some room for improvement in the consistency and fluency of interaction.
At present, there are mainly the following two mainstream full-screen gesture recognition schemes.
First, gesture event capture, gesture recognition and state switching are implemented directly at the system level. In the full-screen mode, all state control is completed directly from the system level. The advantage of this scheme is that, by solving the problem at the system level, event processing has a priority advantage: events are processed at the first moment and layer-by-layer delivery is avoided, which is the most efficient approach. However, the disadvantages of this scheme are also obvious. The interactivity between the system level and each involved application is poor. Although the final state switch is implemented by the application side, the specific switching process is still controlled by the system level, and the application side can only execute the control instructions of the system level. The system level cannot finely control the changing states of sub-interfaces within an interface, so the visual effect of the state switching process is rather rough, which makes some interactions and scenario designs that would optimize the user experience difficult to implement, and the cost of implementing them is high. The interaction of multiple interfaces and applications during implementation greatly increases the overhead, gradually turning the advantages of this implementation into disadvantages. In a fiercely competitive market environment, the ultimate goal of implementing a function is still to provide a good user experience and comfortable usage scenarios, so this scheme has lost its competitiveness; by contrast, the second implementation scheme is more competitive in the market.
Second, gesture event capture, gesture recognition and state switching are implemented on the application side. This scheme has greater flexibility, including in the visual effect design and scenario design stages. However, the full-screen gesture recognition scheme implemented on the application side also has the problem of being disjointed: although the visual interaction process is better, there are defects in system switching, i.e., in other applications outside the application's management, once the application is inactive, gesture events cannot be captured, so gestures cannot be recognized and gesture recognition fails.
Referring to Fig. 1, an embodiment of the present invention provides a gesture recognition method, including:
Step 100: implementing at the system level: capturing a gesture event to obtain parameter information of the gesture event.
In the embodiment of the present invention, gesture events are captured through a high-level window (i.e., the gesture recognition hot zone) at the top layer of the desktop. This scheme can recognize full-screen gestures of desktop applications and full-screen gestures of non-desktop applications, and recognition of full-screen gestures will not fail because the foreground application is a non-desktop application.
In the embodiment of the present invention, the gesture event may be a touch event or a non-touch event; this is not limited by the embodiment of the present invention.
In the embodiment of the present invention, the parameter information of the gesture event is contained in a MotionEvent object.
In the embodiment of the present invention, the parameter information of the gesture event includes all parameter information of the user's operation on the touch screen, including the position, whether an event occurred, the event type, the finger information of the touch, etc. These are all important parameters for gesture recognition.
In an exemplary example, the position refers to the positions of all touch points of the gesture event.
In an exemplary example, event occurrence refers to whether an event occurred.
In an exemplary example, the event type refers to whether the event is a press (down) event, a movement (move) event or a lift (up) event.
In an exemplary example, the finger information of the touch refers to the number of fingers involved in the touch, etc.
Step 101: implementing through the desktop application: when it is determined according to the parameter information of the gesture event that the gesture event meets the full-screen gesture recognition requirements, recognizing the full-screen gesture according to the parameter information of the gesture event, and performing state switching according to the recognized full-screen gesture.
In the embodiment of the present invention, full-screen gestures refer to actions originally defined by the system layer, mainly switching to the desktop application (Home), displaying recently used applications (Recent) and returning to the recently used application (Back).
In an exemplary example, the full-screen gesture recognition requirements include: the number of fingers involved in the touch is 1, and the movement direction of the gesture event is a preset direction, such as up, down, left or right.
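As a minimal sketch of the recognition requirement described above (a single touching finger, moving along a preset direction), the check could look as follows. The function names, the axis-based direction mapping and the direction set are illustrative assumptions, not part of the embodiment:

```python
# Hypothetical check of the full-screen gesture recognition requirement:
# exactly one finger, and movement along a preset direction.

PRESET_DIRECTIONS = {"up", "down", "left", "right"}  # any subset could be configured

def classify_direction(dx, dy):
    """Map a movement vector to one of the four axis directions."""
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    # Screen coordinates: y grows downward, so negative dy means "up".
    return "down" if dy > 0 else "up"

def meets_fullscreen_requirements(finger_count, dx, dy):
    return finger_count == 1 and classify_direction(dx, dy) in PRESET_DIRECTIONS

print(meets_fullscreen_requirements(1, 0, -120))  # one finger moving up -> True
print(meets_fullscreen_requirements(2, 0, -120))  # two fingers -> False
```

In a real implementation the finger count and movement vector would be read from the captured MotionEvent parameters rather than passed in directly.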
In the embodiment of the present invention, recognizing the full-screen gesture according to the parameter information of the gesture event includes:
when it is detected that the gesture event needs to be consumed, and the gesture event includes a press event, movement events and a lift event, determining the movement distance and/or movement speed of the lift event relative to the press event according to the parameter information of the lift event;
recognizing the full-screen gesture according to the movement distance and/or movement speed of the lift event relative to the press event.
In an exemplary example, it is determined that the gesture event needs to be consumed under the following conditions:
the current state is in the normal mode or a non-editing mode, and the press event occurs in the gesture recognition hot zone.
In an exemplary example, the gesture recognition hot zone refers to a predefined area, which may be any area on the screen.
In another embodiment of the present invention, when the foreground application is a non-desktop application when the gesture event is captured, and the gesture event is a press event, the method further includes: switching the current state to the state of entering the recent applications.
In the embodiment of the present invention, recognizing the full-screen gesture according to the movement distance and/or movement speed of the lift event relative to the press event includes at least one of the following:
when the movement distance is greater than a first preset distance threshold, determining that the gesture is a first gesture, such as a pull-up gesture;
when the movement distance is less than or equal to the first preset distance threshold and the movement speed is greater than a preset speed threshold, determining that the gesture is a second gesture, such as a quick pull-up gesture;
when the movement speed is less than or equal to the preset speed threshold, ending this flow or re-initializing the relevant parameters of gesture recognition, such as the current state of the mobile terminal, the first preset distance threshold, the second preset distance threshold and the preset speed threshold.
Performing state switching according to the recognized gesture includes at least one of the following:
when the gesture is the first gesture, such as a pull-up gesture, determining that the current state needs to be switched to the state of entering the recent applications, and executing the motion effect of switching from the current state to the state of entering the recent applications;
when the gesture is the second gesture, such as a quick pull-up gesture, determining that the current state needs to be switched to the desktop, executing the motion effect of switching from the current state to the desktop, and moving the current page of the desktop to the designated homepage.
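The threshold test on the lift event described above can be sketched as follows. The concrete values of the first preset distance threshold and the preset speed threshold, and the function name, are assumptions for illustration only; the embodiment leaves them configurable:

```python
# Hedged sketch of classifying the full-screen gesture when the lift (up)
# event arrives, by the movement distance and speed relative to the press.
import math

T_SLIDE = 200.0   # px, first preset distance threshold (assumed value)
V_FAST = 1.5      # px/ms, preset speed threshold (assumed value)

def recognize_on_lift(p_down, p_up, dt_ms):
    """Classify the gesture from press position, lift position and duration."""
    distance = math.dist(p_down, p_up)
    speed = distance / dt_ms if dt_ms > 0 else 0.0
    if distance > T_SLIDE:
        return "first_gesture"       # pull-up: switch to recent apps
    if speed > V_FAST:
        return "second_gesture"      # quick pull-up: switch to the desktop
    return None                      # end the flow / re-initialize parameters

print(recognize_on_lift((500, 1900), (500, 1500), 300))  # 400 px -> first_gesture
print(recognize_on_lift((500, 1900), (500, 1750), 80))   # 150 px in 80 ms -> second_gesture
```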
In an exemplary example, the first preset distance threshold is related to the display density of the current screen (i.e., the number of pixels contained in each dp) and the display orientation of the screen.
In an exemplary example, the position coordinate Pdown of the press event needs to be recorded.
In another embodiment of the present invention, after performing state switching according to the recognized full-screen gesture, the method further includes: re-initializing the relevant parameters of gesture recognition, such as the current state of the mobile terminal, the first preset distance threshold, the second preset distance threshold and the preset speed threshold.
In another embodiment of the present invention, the method further includes: synchronizing the current state.
In another embodiment of the present invention, when the gesture event includes only the press event and movement events, recognizing the full-screen gesture according to the parameter information of the gesture event further includes:
when the movement distance of the last captured movement event relative to the press event is greater than the first preset distance threshold, determining the sum of the movement distances of the last captured n movement events according to the parameter information of the n movement events;
when the sum of the movement distances of the n movement events is less than a second preset distance threshold, determining that the gesture is a third gesture, such as a pause gesture.
Performing state switching according to the recognized gesture includes:
when the gesture is the third gesture, such as a pause gesture, determining that the current state needs to be switched to the state of entering the recent applications, and executing the motion effect of switching from the current state to the state of entering the recent applications.
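The pause (third) gesture test above, where the sum of the distances of the last n movement events is compared with the second preset distance threshold once the finger has travelled past the first threshold without lifting, can be sketched like this. N_RECENT, T_PAUSE and the event-buffer shape are assumptions:

```python
# Illustrative sketch of the pause gesture: after a long pull (beyond the
# first threshold) the last n per-event movement distances are summed and
# compared against the second preset distance threshold.
from collections import deque

N_RECENT = 5       # assumed number of cached recent move events
T_PAUSE = 10.0     # px, second preset distance threshold (assumed value)

recent_steps = deque(maxlen=N_RECENT)  # per-event movement distances

def on_move_event(step_distance, total_distance, t_slide=200.0):
    """Return 'third_gesture' when the finger pauses after a long pull."""
    recent_steps.append(step_distance)
    if total_distance > t_slide and len(recent_steps) == N_RECENT:
        if sum(recent_steps) < T_PAUSE:
            return "third_gesture"   # pause: enter recent apps without waiting for lift
    return None

# A long pull followed by near-stationary move events triggers the pause gesture.
result = None
for step in [80, 80, 80, 1, 1, 1, 1, 1]:
    result = result or on_move_event(step, total_distance=300)
print(result)  # -> third_gesture
```

Because the decision is taken on move events alone, the state can be switched immediately, without continuing to wait for the EVENT_UP event, matching the flow described later in steps 305 and 406.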
In another embodiment of the present invention, when the gesture event includes only the press event and movement events, recognizing the full-screen gesture according to the parameter information of the gesture event further includes:
when the movement speed of the last captured movement event relative to the press event is greater than the preset speed threshold, determining that the full-screen gesture is the second gesture.
Performing state switching according to the recognized full-screen gesture includes:
when the full-screen gesture is the second gesture, determining that the current state needs to be switched to the desktop, executing the motion effect of switching from the current state to the desktop, and moving the current page of the desktop to the designated homepage.
In another embodiment of the present invention, after performing state switching according to the recognized full-screen gesture, the method further includes: resetting all gesture recognition states.
In another embodiment of the present invention, before capturing the gesture event to obtain its parameter information, the method further includes: performing initialization settings on the whole system.
Specifically, performing initialization settings on the whole system includes:
obtaining the current navigation bar type from the system when the desktop application is started;
when the current navigation bar type is the full-screen gesture type, initializing the following parameters: whether the desktop is currently in the foreground (visible to the user), whether the event distribution pipeline of the system is connected (connected as a service; if not connected, a connection needs to be requested), whether the module pipeline for internal event distribution is initialized (if not, it needs to be created and the initialization parameters passed), the current state of the mobile terminal, the first preset distance threshold, the second preset distance threshold and the preset speed threshold.
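The initialization parameters listed above can be grouped into a single configuration object that is only created when the navigation bar uses full-screen gestures. All field names, default values and the navigation bar type strings below are assumptions for illustration:

```python
# Sketch grouping the initialization parameters into one configuration
# object, created only for the full-screen gesture navigation bar type.
from dataclasses import dataclass

@dataclass
class GestureConfig:
    desktop_in_foreground: bool = False   # is the desktop currently visible
    event_pipe_connected: bool = False    # system event distribution pipeline
    internal_pipe_ready: bool = False     # internal event distribution modules
    screen_size: tuple = (1080, 2340)     # current terminal screen size, px
    landscape: bool = False               # landscape / portrait state
    t_slide: float = 200.0                # first preset distance threshold
    t_pause: float = 10.0                 # second preset distance threshold
    v_fast: float = 1.5                   # preset speed threshold

def initialize(navigation_bar_type):
    """Only initialize when the navigation bar uses full-screen gestures."""
    if navigation_bar_type != "fullscreen_gesture":
        return None
    return GestureConfig()

print(initialize("fullscreen_gesture") is not None)  # -> True
print(initialize("three_button") is None)            # -> True
```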
The current state of the mobile terminal includes: the screen size of the mobile terminal, the current landscape/portrait state of the mobile terminal, and the gesture recognition hot zone parameters of the mobile terminal.
The current landscape/portrait state of the mobile terminal is either the landscape state or the portrait state.
The gesture recognition hot zone of the mobile terminal refers to a rectangular area defined in association with the size of the mobile terminal and the landscape/portrait state of the screen.
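A hot zone rectangle derived from the screen size and orientation, as described above, could be sketched as follows. The bottom-edge strip shape and the HOT_ZONE_HEIGHT value are assumptions; the embodiment only requires a rectangle tied to screen size and landscape/portrait state:

```python
# Sketch of a gesture recognition hot zone: a rectangle along the bottom
# edge of the screen, recomputed for the current orientation.
from typing import NamedTuple

class Rect(NamedTuple):
    left: int
    top: int
    right: int
    bottom: int

HOT_ZONE_HEIGHT = 60  # px, assumed height of the bottom strip

def hot_zone(screen_w, screen_h, landscape):
    """Rectangle along the bottom edge for the current orientation."""
    if landscape:
        w, h = max(screen_w, screen_h), min(screen_w, screen_h)
    else:
        w, h = min(screen_w, screen_h), max(screen_w, screen_h)
    return Rect(0, h - HOT_ZONE_HEIGHT, w, h)

def in_hot_zone(x, y, zone):
    return zone.left <= x <= zone.right and zone.top <= y <= zone.bottom

zone = hot_zone(1080, 2340, landscape=False)
print(zone)                          # Rect(left=0, top=2280, right=1080, bottom=2340)
print(in_hot_zone(500, 2300, zone))  # press near the bottom edge -> True
```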
In another embodiment of the present invention, performing initialization settings on the whole system further includes:
registering a monitoring object, monitoring through the monitoring object whether the navigation bar type is switched, and, when a switch of the navigation bar type is detected, re-executing the step of obtaining the current navigation bar type from the system.
In the embodiments of the present invention, at the desktop application, the first initiation point of user operations, user operations are uniformly planned and designed, and the recognition of full-screen gestures and the state switching are placed on the application side. The visual effect design space of each page is thus larger: more visual effects and interaction scenarios that better match user operations can be designed, and the application interacts with the system level, through which other visual effect resources are controlled so that they remain coordinated during full-screen gesture recognition and state switching and do not become disjointed.
In an exemplary example, as shown in Fig. 2, the gesture recognition process includes:
Step 200: a gesture event is captured at the system level and a MotionEvent object is generated; the MotionEvent object contains the parameter information of the gesture event.
In this step, the parameter information of the gesture event includes all parameter information of the user's operation on the touch screen, including the position, whether an event occurred, the event type, the finger information of the touch, etc. These are all important parameters for gesture recognition.
In an exemplary example, the position refers to the positions of all touch points of the gesture event.
In an exemplary example, event occurrence refers to whether an event occurred.
In an exemplary example, the event type refers to whether the event is a press (down) event, a movement (move) event or a lift (up) event.
In an exemplary example, the finger information of the touch refers to the number of fingers involved in the touch, etc.
Step 201: when the full-screen gesture recognition requirements are met, a gesture event consumer is selected according to the current operation scene, and the MotionEvent object is forwarded to the selected gesture event consumer. Specifically, when the current operation scene is that the foreground application is the desktop application, the desktop gesture consumer is selected, the MotionEvent object is forwarded to the desktop gesture consumer, and steps 202 and 204 are executed; when the current operation scene is that the foreground application is a non-desktop application, the non-desktop gesture consumer is selected, the MotionEvent object is forwarded to the non-desktop gesture consumer, and steps 203 and 204 are executed.
In an exemplary example, the operation scene includes at least one of the following:
recognizing full-screen gestures while the desktop application is in the foreground, i.e., recognizing full-screen gestures on the desktop;
recognizing full-screen gestures while a non-desktop application is in the foreground, i.e., the user has opened a non-desktop application and that application is active.
Step 202: full-screen gesture recognition and state switching are performed by the desktop gesture consumer.
Step 203: full-screen gesture recognition and state switching are performed by the non-desktop gesture consumer.
Step 204: state synchronization is performed. Specifically, the switched state is synchronized to the system and other applications.
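The consumer selection in step 201 can be sketched as follows. The consumer class, its method names and the foreground flag are assumptions; the MotionEvent here is a plain dictionary standing in for the real object:

```python
# Hedged sketch of step 201: forward the MotionEvent parameters to the
# desktop gesture consumer when the desktop is in the foreground, otherwise
# to the non-desktop gesture consumer.

class GestureConsumer:
    def __init__(self, name):
        self.name = name
        self.events = []          # events this consumer has processed

    def consume(self, motion_event):
        self.events.append(motion_event)
        return self.name

desktop_consumer = GestureConsumer("desktop")
non_desktop_consumer = GestureConsumer("non_desktop")

def dispatch(motion_event, is_desktop_foreground):
    """Select the gesture event consumer by the current operation scene."""
    consumer = desktop_consumer if is_desktop_foreground else non_desktop_consumer
    return consumer.consume(motion_event)

print(dispatch({"type": "down", "pos": (500, 2300)}, is_desktop_foreground=True))   # -> desktop
print(dispatch({"type": "down", "pos": (500, 2300)}, is_desktop_foreground=False))  # -> non_desktop
```

Both consumers then run the same recognition logic (Figs. 3 and 4), which is why recognition does not fail when a non-desktop application is in the foreground.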
In an exemplary example, as shown in Fig. 3, the process of full-screen gesture recognition and state switching by the desktop gesture consumer includes:
Step 300: monitor whether the gesture event needs to be consumed. The monitoring rules are: the current state allows full-screen gestures, i.e., the desktop is in the normal mode (or a non-editing mode); and whether the press event occurs in the gesture recognition hot zone.
In this step, the gesture recognition hot zone refers to a predefined area, which may be any area on the screen.
Step 301: when it is detected that the gesture event needs to be consumed, the press (EVENT_DOWN) event, movement events and lift event in the gesture event need to be identified first; the position coordinate Pdown of the EVENT_DOWN event is recorded, and the first preset distance threshold is initialized, here defined as Tslide.
In this step, the first preset distance threshold is related to the current display density (i.e., the number of pixels contained in each dp) and the current display orientation of the screen.
Step 302: when the gesture event includes a press event, movement events and a lift event, judge whether the movement distance of the lift event relative to the press event is greater than the first preset distance threshold. When the movement distance is greater than the first preset distance threshold, determine that the current state needs to be switched to the state of entering the recent applications, Srecent, and execute the motion effect from the current state to Srecent; when the movement distance is less than or equal to the first preset distance threshold, continue to step 303.
Step 303: judge whether the movement speed of the lift event relative to the press event is greater than the preset speed threshold. When the movement speed is greater than the preset speed threshold, determine that the current state needs to be switched to the desktop, Shome, execute the motion effect of switching from the current state to Shome, and move the current page of the desktop to the homepage designated by the user; when the movement speed is less than or equal to the preset speed threshold, execute step 306.
Step 304: when the gesture event includes only the press event and movement events, the user's action is still in progress, i.e., no lift (EVENT_UP) event has been captured and the user has not lifted the finger, so subsequent events need to be recognized again. The specific recognition step includes step 305.
Step 305: when the movement distance of the last movement event relative to the press event is greater than the first preset distance threshold, the case cannot be handled by the above judgment conditions, so the parameter information of the most recent n gesture events needs to be taken from the previously cached parameter information of gesture events, and it is monitored whether the sum Dad of the movement distances of the n gesture events is less than the second preset distance threshold. When the sum Dad is less than the second preset distance threshold, determine that the current state needs to be switched to the state of entering the recent applications, Srecent, execute the motion effect of switching from the current state to that state, and reset all gesture recognition states without continuing to wait for the EVENT_UP event; when the sum is greater than or equal to the second preset distance threshold, return to step 300 and continue.
When the movement distance of the last gesture event relative to the press event is less than or equal to the first preset distance threshold, or the movement speed of the last movement event relative to the press event is less than or equal to the preset speed threshold, return to step 300 and continue to listen for movement (EVENT_MOVE) or EVENT_UP events.
When the movement speed of the last movement event relative to the press event is greater than the preset speed threshold, record a flag bit and consume the event; when the flag bit is T, determine that the current state needs to be switched to the desktop, Shome, execute the motion effect of switching from the current state to Shome, and move the current page of the desktop to the homepage designated by the user.
Step 306: reset all gesture recognition states and wait for the next recognition.
In an exemplary example, as shown in Fig. 4, the process of full-screen gesture recognition and state switching by the non-desktop gesture consumer includes:
Step 400: monitor whether the gesture event needs to be consumed. The monitoring rules are: the current state allows full-screen gestures, i.e., the desktop is in the normal mode (or a non-editing mode); whether the press event occurs in the gesture recognition hot zone; and the foreground application is a non-desktop application.
In this step, the gesture recognition hot zone refers to a predefined area, which may be any area on the screen.
Step 401: when it is detected that the gesture event needs to be consumed, the press (EVENT_DOWN) event, movement events and lift event in the gesture event need to be identified first; the position coordinate Pdown of the EVENT_DOWN event is recorded, and the first preset distance threshold is initialized, here defined as Tslide.
In this step, the first preset distance threshold is related to the current display density (i.e., the number of pixels contained in each dp) and the current display orientation of the screen.
Step 402: when a press event is received, switch the current state to the state of entering the recent applications, Srecent.
Step 403: when the gesture event includes a press event, movement events and a lift event, judge whether the movement distance of the lift event relative to the press event is greater than the first preset distance threshold. When the movement distance is greater than the first preset distance threshold, determine that the current state needs to be switched to the state of entering the recent applications, Srecent, and execute the motion effect from the current state to Srecent; when the movement distance is less than or equal to the first preset distance threshold, continue to step 404.
Step 404: judge whether the movement speed of the lift event relative to the press event is greater than the preset speed threshold. When the movement speed is greater than the preset speed threshold, determine that the current state needs to be switched to the desktop, Shome, execute the motion effect of switching from the current state to Shome, and move the current page of the desktop to the homepage designated by the user; when the movement speed is less than or equal to the preset speed threshold, execute step 407.
Step 405: when the gesture event includes only the press event and movement events, the user's action is still in progress, i.e., no lift (EVENT_UP) event has been captured and the user has not lifted the finger, so subsequent events need to be recognized again. The specific recognition step includes step 406.
Step 406: when the movement distance of the last movement event relative to the press event is greater than the first preset distance threshold, the case cannot be handled by the above judgment conditions, so the parameter information of the most recent n gesture events needs to be taken from the previously cached parameter information of gesture events, and it is monitored whether the sum Dad of the movement distances of the n gesture events is less than the second preset distance threshold. When the sum Dad is less than the second preset distance threshold, determine that the current state needs to be switched to the state of entering the recent applications, Srecent, execute the motion effect of switching from the current state to that state, and reset all gesture recognition states without continuing to wait for the EVENT_UP event; when the sum is greater than or equal to the second preset distance threshold, return to step 400 and continue.
When the movement distance of the last gesture event relative to the press event is less than or equal to the first preset distance threshold, or the movement speed of the last movement event relative to the press event is less than or equal to the preset speed threshold, return to step 400 and continue to listen for movement (EVENT_MOVE) or EVENT_UP events.
When the movement speed of the last movement event relative to the press event is greater than the preset speed threshold, record a flag bit and consume the event; when the flag bit is T, determine that the current state needs to be switched to the desktop, Shome, execute the motion effect of switching from the current state to Shome, and move the current page of the desktop to the homepage designated by the user.
Step 407: reset all gesture recognition states and wait for the next recognition.
Referring to Fig. 7, another embodiment of the present invention provides a gesture recognition device, including a processor 701 and a computer-readable storage medium 702, where instructions are stored in the computer-readable storage medium 702, and when the instructions are executed by the processor 701, any one of the above gesture recognition methods is implemented.
Another embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of any one of the above gesture recognition methods are implemented.
Referring to Fig. 5, another embodiment of the present invention provides a gesture recognition device, including:
an event capture module 501, configured to implement at the system level: capturing a gesture event to obtain parameter information of the gesture event;
a gesture recognition module 502, configured to implement through the desktop application: when it is determined according to the parameter information of the gesture event that the gesture event meets the full-screen gesture recognition requirements, recognizing the full-screen gesture according to the parameter information of the gesture event;
a state switching module 503, configured to implement through the desktop application: performing state switching according to the recognized full-screen gesture.
In the embodiment of the present invention, gesture events are captured through a high-level window (i.e., the gesture recognition hot zone) at the top layer of the desktop. This scheme can recognize full-screen gestures of desktop applications and full-screen gestures of non-desktop applications, and recognition of full-screen gestures will not fail because the foreground application is a non-desktop application.
In the embodiment of the present invention, the gesture event may be a touch event or a non-touch event; this is not limited by the embodiment of the present invention.
In the embodiment of the present invention, the parameter information of the gesture event is contained in a MotionEvent object.
In the embodiment of the present invention, the parameter information of the gesture event includes all parameter information of the user's operation on the touch screen, including the position, whether an event occurred, the event type, the finger information of the touch, etc. These are all important parameters for gesture recognition.
In an exemplary example, the position refers to the positions of all touch points of the gesture event.
In an exemplary example, event occurrence refers to whether an event occurred.
In an exemplary example, the event type refers to whether the event is a press (down) event, a movement (move) event or a lift (up) event.
In an exemplary example, the finger information of the touch refers to the number of fingers involved in the touch, etc.
In the embodiment of the present invention, full-screen gestures refer to actions originally defined by the system layer, mainly switching to the desktop application (Home), displaying recently used applications (Recent) and returning to the recently used application (Back).
In an exemplary example, the full-screen gesture recognition requirements include: the number of fingers involved in the touch is 1, and the movement direction of the gesture event is a preset direction, such as up, down, left or right.
In the embodiment of the present invention, the gesture recognition module 502 is specifically configured to recognize the full-screen gesture according to the parameter information of the gesture event in the following manner:
when it is detected that the gesture event needs to be consumed, and the gesture event includes a press event, movement events and a lift event, determining the movement distance and/or movement speed of the lift event relative to the press event according to the parameter information of the lift event;
recognizing the full-screen gesture according to the movement distance and/or movement speed of the lift event relative to the press event.
In an exemplary example, the gesture recognition module 502 is specifically configured to determine that the gesture event needs to be consumed under the following conditions:
the current state is in the normal mode or a non-editing mode, and the press event occurs in the gesture recognition hot zone.
In an exemplary example, the gesture recognition hot zone refers to a predefined area, which may be any area on the screen.
In another embodiment of the present invention, the state switching module 503 is further configured to: when the foreground application is a non-desktop application when the gesture event is captured, and the gesture event is a press event, switch the current state to the state of entering the recent applications.
In the embodiment of the present invention, the gesture recognition module 502 is specifically configured to recognize the full-screen gesture according to the movement distance and/or movement speed of the lift event relative to the press event in at least one of the following manners:
when the movement distance is greater than a first preset distance threshold, determining that the gesture is a first gesture, such as a pull-up gesture;
when the movement distance is less than or equal to the first preset distance threshold and the movement speed is greater than a preset speed threshold, determining that the gesture is a second gesture, such as a quick pull-up gesture;
when the movement speed is less than or equal to the preset speed threshold, ending this flow or re-initializing the relevant parameters of gesture recognition, such as the current state of the mobile terminal, the first preset distance threshold, the second preset distance threshold and the preset speed threshold.
The state switching module 503 is specifically configured to perform the state switching according to the recognized full-screen gesture in at least one of the following manners:
when the gesture is the first gesture, such as a pull-up gesture, determining that the current state needs to be switched to the state of entering the recent applications, and executing the motion effect of switching from the current state to the state of entering the recent applications;
when the gesture is the second gesture, such as a quick pull-up gesture, determining that the current state needs to be switched to the desktop, executing the motion effect of switching from the current state to the desktop, and moving the current page of the desktop to the designated homepage.
In an exemplary example, the first preset distance threshold is related to the display density of the current screen (i.e., the number of pixels contained in each dp) and the display orientation of the screen.
In an exemplary example, the position coordinate Pdown of the press event needs to be recorded.
In another embodiment of the present invention, the initialization module 504 is further configured to: after state switching is performed according to the recognized full-screen gesture, re-initialize the relevant parameters of gesture recognition, such as the current state of the mobile terminal, the first preset distance threshold, the second preset distance threshold and the preset speed threshold.
In another embodiment of the present invention, the state switching module 503 is further configured to: synchronize the current state.
In another embodiment of the present invention, the gesture recognition module 502 is further configured to:
when the gesture event includes only the press event and movement events, and the movement distance of the last captured movement event relative to the press event is greater than the first preset distance threshold, determine the sum of the movement distances of the last captured n movement events according to the parameter information of the n movement events;
when the sum of the movement distances of the n movement events is less than a second preset distance threshold, determine that the gesture is a third gesture, such as a pause gesture.
The state switching module 503 is further configured to:
when the gesture is the third gesture, such as a pause gesture, determine that the current state needs to be switched to the state of entering the recent applications, and execute the motion effect of switching from the current state to the state of entering the recent applications.
In another embodiment of the present invention, the gesture recognition module 502 is further configured to:
when the gesture event includes only the press event and movement events, and the movement speed of the last captured movement event relative to the press event is greater than the preset speed threshold, determine that the full-screen gesture is the second gesture.
Performing state switching according to the recognized full-screen gesture includes:
when the full-screen gesture is the second gesture, determining that the current state needs to be switched to the desktop, executing the motion effect of switching from the current state to the desktop, and moving the current page of the desktop to the designated homepage.
In another embodiment of the present invention, the device further includes an initialization module 504, configured to perform initialization settings on the whole system.
Specifically, the initialization module 504 is specifically configured to:
obtain the current navigation bar type from the system when the desktop application is started;
when the current navigation bar type is the full-screen gesture type, initialize the following parameters: whether the desktop is currently in the foreground (visible to the user), whether the event distribution pipeline of the system is connected (connected as a service; if not connected, a connection needs to be requested), whether the module pipeline for internal event distribution is initialized (if not, it needs to be created and the initialization parameters passed), the current state of the mobile terminal, the first preset distance threshold, the second preset distance threshold and the preset speed threshold.
The current state of the mobile terminal includes: the screen size of the mobile terminal, the current landscape/portrait state of the mobile terminal, and the gesture recognition hot zone parameters of the mobile terminal.
The current landscape/portrait state of the mobile terminal is either the landscape state or the portrait state.
The gesture recognition hot zone of the mobile terminal refers to a rectangular area defined in association with the size of the mobile terminal and the landscape/portrait state of the screen.
In another embodiment of the present invention, the initialization module 504 is further configured to:
register a monitoring object, monitor through the monitoring object whether the navigation bar type is switched, and, when a switch of the navigation bar type is detected, re-execute the step of obtaining the current navigation bar type from the system.
In the embodiments of the present invention, at the desktop application, the first initiation point of user operations, user operations are uniformly planned and designed, and the recognition of full-screen gestures and the state switching are placed on the application side. The visual effect design space of each page is thus larger: more visual effects and interaction scenarios that better match user operations can be designed, and the application interacts with the system level, through which other visual effect resources are controlled so that they remain coordinated during full-screen gesture recognition and state switching and do not become disjointed.
In an exemplary example, as shown in Fig. 6, the gesture recognition device further includes an event distribution module 505, configured to identify the current operation scene and, according to the current operation scene, distribute the parameter information of the gesture event to either the gesture recognition sub-module 601 of the desktop application or the gesture recognition sub-module 602 of non-desktop applications.
Specifically, when the current operation scene is recognizing full-screen gestures while the desktop application is in the foreground, the parameter information of the gesture event is distributed to the gesture recognition sub-module 601 of the desktop application; when the current operation scene is recognizing full-screen gestures while a non-desktop application is in the foreground, the parameter information of the gesture event is distributed to the gesture recognition sub-module 602 of non-desktop applications.
The gesture recognition module 502 includes the gesture recognition sub-module 601 of the desktop application and the gesture recognition sub-module 602 of non-desktop applications.
The gesture recognition sub-module 601 of the desktop application is configured to perform full-screen gesture recognition on gesture events captured while the desktop application is in the foreground.
The gesture recognition sub-module 602 of non-desktop applications is configured to perform full-screen gesture recognition on gesture events captured while a non-desktop application is in the foreground.
The state switching module 503 includes a state control sub-module 603 and a motion effect control sub-module 604.
The state control sub-module 603 is configured to perform state switching and state synchronization according to the gesture recognition result of the gesture recognition sub-module 601 of the desktop application or the gesture recognition sub-module 602 of non-desktop applications.
The motion effect control sub-module 604 is configured to control the motion effects during the state switching process.
The embodiments of the present invention include: implementing at the system level: capturing a gesture event to obtain parameter information of the gesture event; and implementing through the desktop application: when it is determined according to the parameter information of the gesture event that the gesture event meets the full-screen gesture recognition requirements, recognizing the full-screen gesture according to the parameter information of the gesture event and performing state switching according to the recognized full-screen gesture. In the embodiments of the present invention, at the desktop application, the first initiation point of user operations, user operations are uniformly planned and designed, and the recognition of full-screen gestures and the state switching are placed on the application side. The visual effect design space of each page is thus larger: more visual effects and interaction scenarios that better match user operations can be designed, and the application interacts with the system level, through which other visual effect resources are controlled so that they remain coordinated during full-screen gesture recognition and state switching and do not become disjointed.
In another embodiment, recognizing the full-screen gesture according to the movement distance and/or movement speed of the movement event relative to the press event includes at least one of the following: when the foreground application is the desktop application when the gesture event is captured, recognizing the full-screen gesture of the desktop application according to the movement distance and/or movement speed of the movement event relative to the press event; when the foreground application is a non-desktop application when the gesture event is captured, recognizing the full-screen gesture of the non-desktop application according to the movement distance and/or movement speed of the movement event relative to the press event. This scheme can recognize full-screen gestures of desktop applications and full-screen gestures of non-desktop applications, and recognition will not fail because the foreground application is a non-desktop application.
Those of ordinary skill in the art can understand that all or some of the steps in the methods disclosed above, and the functional modules/units in the systems and devices, can be implemented as software, firmware, hardware and appropriate combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed cooperatively by several physical components. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or a microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. In addition, as is well known to those of ordinary skill in the art, communication media usually contain computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery media.
Although the implementations disclosed in the embodiments of the present invention are as above, the described contents are only implementations adopted to facilitate understanding of the embodiments of the present invention and are not intended to limit them. Any person skilled in the art to which the embodiments of the present invention belong may make modifications and changes in the form and details of implementation without departing from the spirit and scope disclosed by the embodiments of the present invention, but the scope of patent protection of the embodiments of the present invention shall still be subject to the scope defined by the appended claims.

Claims (13)

  1. A gesture recognition method, comprising:
    implementing at the system level: capturing a gesture event to obtain parameter information of the gesture event;
    implementing through a desktop application: when it is determined according to the parameter information of the gesture event that the gesture event meets full-screen gesture recognition requirements, recognizing the full-screen gesture according to the parameter information of the gesture event, and performing state switching according to the recognized full-screen gesture.
  2. The method of claim 1, further comprising, before capturing the gesture event to obtain the parameter information of the gesture event: performing initialization settings on the whole system.
  3. The method of claim 2, wherein performing initialization settings on the whole system comprises:
    obtaining a current navigation bar type from the system when the desktop application is started;
    when the current navigation bar type is a full-screen gesture type, initializing the following parameters: whether the desktop is currently in the foreground, whether the event distribution pipeline of the system is connected, whether the module pipeline for internal event distribution is initialized, the current state of the mobile terminal, a first preset distance threshold, a second preset distance threshold and a preset speed threshold;
    wherein the current state of the mobile terminal comprises: the screen size of the mobile terminal, the current landscape/portrait state of the mobile terminal, and gesture recognition hot zone parameters of the mobile terminal.
  4. The method of claim 3, wherein performing initialization settings on the whole system further comprises:
    registering a monitoring object, monitoring through the monitoring object whether the navigation bar type is switched, and, when a switch of the navigation bar type is detected, re-executing the step of obtaining the current navigation bar type from the system.
  5. The method of any one of claims 1 to 4, wherein recognizing the full-screen gesture according to the parameter information of the gesture event comprises:
    when it is detected that the gesture event needs to be consumed, and the gesture event comprises a press event, movement events and a lift event, determining the movement distance and/or movement speed of the lift event relative to the press event according to the parameter information of the lift event; and recognizing the full-screen gesture according to the movement distance and/or movement speed of the lift event relative to the press event.
  6. The method of claim 5, wherein it is determined that the gesture event needs to be consumed under the following conditions:
    the current state is in a normal mode or a non-editing mode, and the press event occurs in the gesture recognition hot zone.
  7. The method of claim 5, wherein, when the foreground application is a non-desktop application when the gesture event is captured, and the gesture event is a press event, the current state is switched to the state of entering the recent applications.
  8. The method of claim 5, wherein recognizing the full-screen gesture according to the movement distance and/or movement speed of the lift event relative to the press event comprises at least one of the following:
    when the movement distance is greater than a first preset distance threshold, determining that the full-screen gesture is a first gesture;
    when the movement distance is less than or equal to the first preset distance threshold and the movement speed is greater than a preset speed threshold, determining that the full-screen gesture is a second gesture;
    performing state switching according to the recognized full-screen gesture comprises at least one of the following:
    when the full-screen gesture is the first gesture, determining that the current state needs to be switched to the state of entering the recent applications, and executing the motion effect of switching from the current state to the state of entering the recent applications;
    when the full-screen gesture is the second gesture, determining that the current state needs to be switched to the desktop, executing the motion effect of switching from the current state to the desktop, and moving the current page of the desktop to a designated homepage.
  9. The method of claim 5, wherein, when the gesture event comprises only the press event and movement events, recognizing the full-screen gesture according to the parameter information of the gesture event further comprises:
    when the movement distance of the last captured movement event relative to the press event is greater than the first preset distance threshold, determining the sum of the movement distances of the last captured n movement events according to the parameter information of the n movement events;
    when the sum of the movement distances of the n movement events is less than a second preset distance threshold, determining that the full-screen gesture is a third gesture;
    performing state switching according to the recognized gesture comprises:
    when the full-screen gesture is the third gesture, determining that the current state needs to be switched to the state of entering the recent applications, and executing the motion effect of switching from the current state to the state of entering the recent applications.
  10. The method of claim 5, wherein, when the gesture event comprises only the press event and movement events, recognizing the full-screen gesture according to the parameter information of the gesture event further comprises:
    when the movement speed of the last captured movement event relative to the press event is greater than the preset speed threshold, determining that the full-screen gesture is a second gesture;
    performing state switching according to the recognized full-screen gesture comprises:
    when the full-screen gesture is the second gesture, determining that the current state needs to be switched to the desktop, executing the motion effect of switching from the current state to the desktop, and moving the current page of the desktop to a designated homepage.
  11. A gesture recognition device, comprising:
    an event capture module, configured to implement at the system level: capturing a gesture event to obtain parameter information of the gesture event;
    a gesture recognition module, configured to implement through a desktop application: when it is determined according to the parameter information of the gesture event that the gesture event meets full-screen gesture recognition requirements, recognizing the full-screen gesture according to the parameter information of the gesture event;
    a state switching module, configured to implement through the desktop application: performing state switching according to the recognized full-screen gesture.
  12. A gesture recognition device, comprising a processor and a computer-readable storage medium, wherein instructions are stored in the computer-readable storage medium, and when the instructions are executed by the processor, the gesture recognition method of any one of claims 1 to 10 is implemented.
  13. A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the steps of the gesture recognition method of any one of claims 1 to 10 are implemented.
PCT/CN2020/093807 2019-07-03 2020-06-01 Gesture recognition method and device, and computer-readable storage medium WO2021000683A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/624,058 US20220357842A1 (en) 2019-07-03 2020-06-01 Gesture recognition method and device, and computer-readable storage medium
EP20834882.1A EP3985485A4 (en) 2019-07-03 2020-06-01 GESTURE RECOGNITION METHOD AND APPARATUS AND COMPUTER READABLE STORAGE MEDIUM
KR1020217034210A KR20210139428A (ko) 2019-07-03 2020-06-01 제스처 식별 방법, 장치 및 컴퓨터 판독 가능한 저장 매체

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910596336.1A CN112181264A (zh) 2019-07-03 2019-07-03 一种手势识别方法和装置
CN201910596336.1 2019-07-03

Publications (1)

Publication Number Publication Date
WO2021000683A1 true WO2021000683A1 (zh) 2021-01-07

Family

ID=73915483

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093807 WO2021000683A1 (zh) 2019-07-03 2020-06-01 一种手势识别方法、装置及计算机可读存储介质

Country Status (5)

Country Link
US (1) US20220357842A1 (zh)
EP (1) EP3985485A4 (zh)
KR (1) KR20210139428A (zh)
CN (1) CN112181264A (zh)
WO (1) WO2021000683A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116991302B * 2023-09-22 2024-03-19 Honor Device Co., Ltd. Method for compatible operation of an application and a gesture navigation bar, graphical interface, and related device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104598308A * 2014-12-29 2015-05-06 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Mode switching control method and device
CN104978014A * 2014-04-11 2015-10-14 Vivo Mobile Communication Co., Ltd. Method for quickly invoking an application or a system function and mobile terminal thereof
CN107844759A * 2017-10-24 2018-03-27 Nubia Technology Co., Ltd. Gesture recognition method, terminal and storage medium
US20180285062A1 (en) * 2017-03-28 2018-10-04 Wipro Limited Method and system for controlling an internet of things device using multi-modal gesture commands
CN108958626A (zh) * 2018-06-29 2018-12-07 Qiku Internet Network Scientific (Shenzhen) Co., Ltd. Gesture recognition method and device, readable storage medium and mobile terminal

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8169414B2 (en) * 2008-07-12 2012-05-01 Lim Seung E Control of electronic games via finger angle using a high dimensional touchpad (HDTP) touch user interface
US9311112B2 (en) * 2009-03-16 2016-04-12 Apple Inc. Event recognition
WO2011151501A1 (en) * 2010-06-01 2011-12-08 Nokia Corporation A method, a device and a system for receiving user input
US9710154B2 (en) * 2010-09-03 2017-07-18 Microsoft Technology Licensing, Llc Dynamic gesture parameters
US20120133579A1 (en) * 2010-11-30 2012-05-31 Microsoft Corporation Gesture recognition management
US9372540B2 (en) * 2011-04-19 2016-06-21 Lg Electronics Inc. Method and electronic device for gesture recognition
US10120561B2 (en) * 2011-05-05 2018-11-06 Lenovo (Singapore) Pte. Ltd. Maximum speed criterion for a velocity gesture
US8572515B2 (en) * 2011-11-30 2013-10-29 Google Inc. Turning on and off full screen mode on a touchscreen
WO2013155590A1 (en) * 2012-04-18 2013-10-24 Research In Motion Limited Systems and methods for displaying information or a feature in overscroll regions on electronic devices
US20140137008A1 (en) * 2012-11-12 2014-05-15 Shanghai Powermo Information Tech. Co. Ltd. Apparatus and algorithm for implementing processing assignment including system level gestures
US9965445B2 (en) * 2015-08-06 2018-05-08 FiftyThree, Inc. Systems and methods for gesture-based formatting
US11073980B2 (en) * 2016-09-29 2021-07-27 Microsoft Technology Licensing, Llc User interfaces for bi-manual control
KR102130932B1 (ko) * 2017-05-16 2020-07-08 애플 인크. 사용자 인터페이스들 사이에 내비게이팅하고 제어 객체들과 상호작용하기 위한 디바이스, 방법, 및 그래픽 사용자 인터페이스
AU2019100488B4 (en) * 2018-05-07 2019-08-22 Apple Inc. Devices, methods, and graphical user interfaces for navigating between user interfaces, displaying a dock, and displaying system user interface elements
DK180316B1 (en) * 2018-06-03 2020-11-06 Apple Inc Devices and methods for interacting with an application switching user interface
US10877660B2 (en) * 2018-06-03 2020-12-29 Apple Inc. Devices and methods for processing inputs using gesture recognizers

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104978014A * 2014-04-11 2015-10-14 Vivo Mobile Communication Co., Ltd. Method for quickly invoking an application or a system function and mobile terminal thereof
CN104598308A * 2014-12-29 2015-05-06 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Mode switching control method and device
US20180285062A1 (en) * 2017-03-28 2018-10-04 Wipro Limited Method and system for controlling an internet of things device using multi-modal gesture commands
CN107844759A * 2017-10-24 2018-03-27 Nubia Technology Co., Ltd. Gesture recognition method, terminal and storage medium
CN108958626A (zh) * 2018-06-29 2018-12-07 Qiku Internet Network Scientific (Shenzhen) Co., Ltd. Gesture recognition method and device, readable storage medium and mobile terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3985485A4 *

Also Published As

Publication number Publication date
KR20210139428A (ko) 2021-11-22
US20220357842A1 (en) 2022-11-10
CN112181264A (zh) 2021-01-05
EP3985485A4 (en) 2022-08-03
EP3985485A1 (en) 2022-04-20

Similar Documents

Publication Publication Date Title
US10990278B2 (en) Method and device for controlling information flow display panel, terminal apparatus, and storage medium
US10976920B2 (en) Techniques for image-based search using touch controls
KR102213212B1 (ko) 멀티윈도우 제어 방법 및 이를 지원하는 전자 장치
WO2019160665A2 (en) Shared content display with concurrent views
WO2019041779A1 (zh) 终端界面切换、移动、手势处理的方法、装置及终端
US20130326397A1 (en) Mobile terminal and controlling method thereof
US10282022B2 (en) Control method and control device for working mode of touch screen
US20140022190A1 (en) Mobile terminal device, operation method, program, and storage medium
US11740754B2 (en) Method for interface operation and terminal, storage medium thereof
US11590412B2 (en) Information processing method and apparatus, storage medium, and electronic device
US11016645B2 (en) Window split screen display method, device and equipment
KR20150069801A (ko) 화면 제어 방법 및 그 전자 장치
WO2019141119A1 (zh) 用户界面显示方法、装置及设备
WO2020000971A1 (zh) 切换全局特效的方法、装置、终端设备及存储介质
JP2023528311A (ja) ビデオ通話インタフェース表示制御方法、装置、記憶媒体及び機器
WO2015113209A1 (zh) 一种终端设备处理的方法及终端设备
CN110647286A (zh) 屏幕元素控制方法、装置、设备、存储介质
WO2021000683A1 (zh) 一种手势识别方法、装置及计算机可读存储介质
CN105824534B (zh) 一种信息处理方法及电子设备
CN110795015A (zh) 操作提示方法、装置、设备及存储介质
CN110088719B (zh) 移动设备的显示方法和移动设备
WO2020249103A1 (zh) 智能交互设备的控制方法和装置
CN113176846A (zh) 一种图片显示方法、装置,设备及存储介质
WO2021068405A1 (zh) 元素传递方法、装置、设备及存储介质
KR101825442B1 (ko) 스크롤 방법 및 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20834882

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20217034210

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020834882

Country of ref document: EP

Effective date: 20220111