CN111708472A - Gesture implementation method and device on intelligent device, electronic device and storage medium - Google Patents

Gesture implementation method and device on intelligent device, electronic device and storage medium

Info

Publication number
CN111708472A
Authority
CN
China
Prior art keywords
gesture
finger operation
finger
target view
movement information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010358529.6A
Other languages
Chinese (zh)
Inventor
季焕文 (Ji Huanwen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Culture Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202010358529.6A priority Critical patent/CN111708472A/en
Publication of CN111708472A publication Critical patent/CN111708472A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 - Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/048 - Indexing scheme relating to G06F 3/048
    • G06F 2203/04806 - Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 - Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/048 - Indexing scheme relating to G06F 3/048
    • G06F 2203/04808 - Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously, e.g. using several fingers or a combination of fingers and pen

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention provide a gesture implementation method and apparatus on a smart device, an electronic device and a storage medium. The method comprises the following steps: recognizing a single-finger operation of a user on a touch screen of the smart device, and calculating movement information of the single-finger operation on a target view, wherein the movement information comprises an offset and/or an angle change value; recognizing a gesture state corresponding to the single-finger operation, wherein the gesture state indicates either that the gesture is continuously changing or that the gesture has ended; and determining a first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and executing the operation corresponding to the first gesture. With the gesture implementation method and apparatus, electronic device and storage medium, a single finger can complete relatively complex operations by combining the movement information of the single-finger operation with its corresponding gesture state, which simplifies user operation and does not occlude the user's view.

Description

Gesture implementation method and device on intelligent device, electronic device and storage medium
Technical Field
The invention relates to the technical field of mobile terminal interaction, and in particular to a gesture implementation method and apparatus on a smart device, an electronic device and a storage medium.
Background
On current smart mobile terminals, a user performs operations such as tapping, sliding and dragging on the touch screen; the terminal captures the corresponding information through the touch screen, recognizes the user's gesture, and completes the corresponding operation according to the gesture recognition result.
In the prior art, some gestures, such as tapping and dragging, can be completed with a single finger, but other gestures, such as rotating and zooming, require two or even more fingers.
Gesture operations performed with two or more fingers occlude part of the view, which makes it harder to observe the state of the current operation object.
Disclosure of Invention
Embodiments of the invention provide a gesture implementation method and apparatus for a smart device, an electronic device and a storage medium, to overcome the defect in the prior art that gesture operations performed with two or more fingers occlude the view and hinder observation of the current operation object.
An embodiment of a first aspect of the present invention provides a method for implementing a gesture on an intelligent device, including:
identifying a single-finger operation of a user on a touch screen of the smart device, and calculating movement information of the single-finger operation on a target view, wherein the movement information comprises an offset and/or an angle change value;
recognizing a gesture state corresponding to the single-finger operation, wherein the gesture state indicates that the gesture is continuously changing or that the gesture has ended;
and determining a first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and executing the operation corresponding to the first gesture.
In the above technical solution, calculating the movement information of the single-finger operation on the target view includes:
calculating the offset of the single-finger operation on the target view according to the coordinates of the contact point between the finger and the touch screen of the smart device;
and/or obtaining the position of a first point on the target view after the single-finger operation according to the offset of the single-finger operation on the target view, and obtaining the angle change value from the positions of the first point before and after the single-finger operation.
In the foregoing technical solution, determining the first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and executing the operation corresponding to the first gesture, further includes:
when the state is UIGestureRecognizerStateChanged, i.e. the gesture state corresponding to the single-finger operation is that the gesture is continuously changing, and the angle change value contained in the movement information of the single-finger operation on the target view is smaller than a preset first angle threshold, determining that the first gesture corresponding to the single-finger operation is a zoom operation;
determining a scaling coefficient according to the offset of the single-finger operation on the target view;
and, according to the scaling coefficient, invoking the pointSize interface for setting the font size in the target view and the frame interface describing the structure of the target view's parent view, to scale the target view.
In the foregoing technical solution, determining the first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and executing the operation corresponding to the first gesture, further includes:
when the state is UIGestureRecognizerStateChanged, i.e. the gesture state corresponding to the single-finger operation is that the gesture is continuously changing, and the angle change value contained in the movement information of the single-finger operation on the target view is greater than or equal to a preset first angle threshold, determining that the first gesture corresponding to the single-finger operation is a rotation operation;
obtaining the rotation angle of the target view according to the angle change value of the single-finger operation;
and calling the CGAffineTransformRotate function to rotate the target view by the rotation angle.
In the foregoing technical solution, determining the first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and executing the operation corresponding to the first gesture, further includes:
when the state is UIGestureRecognizerStateEnded, i.e. the gesture state corresponding to the single-finger operation is that the gesture has ended, and the angle change value contained in the movement information of the single-finger operation on the target view is greater than or equal to a preset screen-capture trigger angle threshold, determining that the first gesture corresponding to the single-finger operation is a screenshot operation;
and calling the UIGraphicsBeginImageContextWithOptions function to create an image context and the renderInContext function to perform the screenshot operation.
In the foregoing technical solution, determining the first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and executing the operation corresponding to the first gesture, further includes:
when the state is UIGestureRecognizerStateEnded, i.e. the gesture state corresponding to the single-finger operation is that the gesture has ended, and the offset contained in the movement information of the single-finger operation on the target view is greater than or equal to a sharing trigger threshold, determining that the first gesture corresponding to the single-finger operation is a sharing operation;
and calling the UIGraphicsBeginImageContextWithOptions function to create an image context and the renderInContext function to perform the screenshot operation, and then sharing the result of the screenshot operation.
In the above technical solution, before the identifying the single-finger operation of the user on the touch screen of the smart device, the method further includes:
setting a first gesture on the control to which the target view belongs, wherein the first gesture is a UIPanGestureRecognizer gesture representing a drag operation, and its maximumNumberOfTouches (the maximum number of touches it supports) is set to 1.
An embodiment of a second aspect of the present invention provides a gesture implementation apparatus on a smart device, including: a single-finger operation recognition module, configured to recognize a single-finger operation of a user on a touch screen of the smart device and calculate movement information of the single-finger operation on a target view, wherein the movement information comprises an offset and/or an angle change value;
a gesture state recognition module, configured to recognize a gesture state corresponding to the single-finger operation, wherein the gesture state indicates that the gesture is continuously changing or has ended;
and the gesture operation execution module is used for determining a first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and executing the operation corresponding to the first gesture.
In an embodiment of the third aspect of the present invention, an electronic device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the processor implements the steps of the gesture implementation method on the smart device according to the embodiment of the first aspect of the present invention.
A fourth aspect of the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the gesture implementation method on a smart device according to the first aspect of the present invention.
With the gesture implementation method and apparatus on a smart device, the electronic device and the storage medium provided by the embodiments of the invention, a single finger can complete relatively complex operations based on the movement information of the single-finger operation and its corresponding gesture state, which simplifies user operation and does not occlude the user's view.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a flowchart of a gesture implementation method on an intelligent device according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a gesture implementation apparatus on an intelligent device according to an embodiment of the present invention;
fig. 3 illustrates a physical structure diagram of an electronic device.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the embodiments of the present invention, the iOS operating system provided by Apple Inc. is taken as an example to describe the gesture implementation method on a smart device; however, those skilled in the art should understand that the method provided by the embodiments of the present invention can also be applied to other mobile operating systems, such as the Android operating system.
Fig. 1 is a flowchart of a method for implementing a gesture on an intelligent device according to an embodiment of the present invention, and as shown in fig. 1, the method for implementing a gesture on an intelligent device according to an embodiment of the present invention includes:
step 101, identifying a single-finger operation of a user on a touch screen of the intelligent device, and calculating movement information of the single-finger operation on a target view.
In the embodiment of the invention, the smart device is a device that is provided with a computer operating system and a touch screen and allows the user to interact through the touch screen.
The user can perform many operations on the touch screen, completed with a single finger or with several fingers, and of various types, such as tapping, long-pressing and dragging. The embodiment of the invention captures single-finger operations of the user on the touch screen.
When recognizing the single-finger operation, a method known in the art may be adopted. For example, in the iOS operating system, when a touches event is triggered, it is determined whether the number of touching fingers equals 1; if so, the operation is a single-finger operation, and if it is greater than 1, it is not.
And after the operation of the user on the touch screen is determined to be the single-finger operation, calculating the movement information of the single-finger operation. As known to those skilled in the art, when a user operates on a touch screen of a smart device, an operating system of the smart device can acquire coordinates of contact points between a finger of the user and the touch screen, and an offset corresponding to a single-finger operation on a target view can be calculated according to the coordinates of the contact points.
For example, in the iOS operating system, the coordinate offset after the single finger moves is obtained with CGPoint translation = [pan translationInView:self];.
The target view is the view that is run on the smart device and acted upon by the user's single finger operation.
After the offset of the single-finger operation on the target view is obtained, the angle change value of the single-finger operation on the target view can be calculated from it. For example, the transformed center point of the target view is calculated from the offset of the single-finger operation on the target view and the center point of the parent view; from the coordinates of the center point before and after the transformation, a first arctangent value of the target view's relative position in the height direction and a second arctangent value of its relative position in the width direction can be obtained, and the angle change value corresponding to the single-finger operation is the difference between the first and second arctangent values. Although the angle change value is obtained here from the center point, in practice it may also be obtained from the before-and-after position relationship of other points.
It should be noted that single-finger operations are ubiquitous on smart devices; writing, page turning and similar actions all involve them. Therefore, a trigger condition, such as a specific function key on the screen, is set before the gesture implementation method of the embodiment is executed, and the method is executed only when the trigger condition is satisfied. The setting of the trigger condition is described in other embodiments of the present invention.
And 102, identifying a gesture state corresponding to the single-finger operation.
In the embodiment of the present invention, the gesture state is used to describe whether the gesture is completed at the current time, and specifically, the gesture state includes: the gesture continues to change and the gesture has ended.
The gesture state corresponding to the single-finger operation can be recognized by the operating system of the smart device. For example, the iOS operating system maintains the state of the pan gesture recognizer in pan.state; as long as pan.state is captured and its value read, the gesture state corresponding to the single-finger operation can be recognized.
For example, when pan.state is UIGestureRecognizerStateChanged, the gesture is continuously changing (i.e., ongoing and not yet ended); when pan.state is UIGestureRecognizerStateEnded, the gesture has ended.
The gesture state helps the smart device recognize the specific type of the single-finger operation. For example, when pan.state is UIGestureRecognizerStateChanged, the single-finger operation has not yet ended at the current moment, so its type may be a rotation operation or a zoom operation; when pan.state is UIGestureRecognizerStateEnded, the single-finger operation has ended, so its type may be a screenshot operation or a sharing operation.
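The changed-state branch of this distinction can be sketched in plain C (a sketch, not the patent's implementation: doubles stand in for CGFloat, taking the absolute value of the angle change is an assumption, and the threshold is whatever value the implementation presets):

```c
#include <math.h>

typedef enum { GESTURE_ZOOM, GESTURE_ROTATE } ContinuousGesture;

/* While pan.state is UIGestureRecognizerStateChanged, a small angle
 * change value means zoom and a large one means rotate. */
ContinuousGesture classify_changing(double angle_change,
                                    double first_angle_threshold) {
    return fabs(angle_change) < first_angle_threshold ? GESTURE_ZOOM
                                                      : GESTURE_ROTATE;
}
```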
Step 103, determining a first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and executing the operation corresponding to the first gesture.
In the embodiment of the invention, relatively complex operations are to be completed with a single finger. Because single-finger operations are widely used in practice, and the various gesture types are difficult to distinguish from the movement information on the target view alone, the embodiment combines the movement information of the single-finger operation on the target view with the gesture state corresponding to the single-finger operation, so that multiple gesture types, and thus the relatively complex operations they correspond to, can be realized.
In the embodiment of the present invention, the type of the first gesture includes one or a combination of the following: zooming operation, rotating operation, screen capturing operation and sharing operation. For example, the zoom operation can be combined with the rotate operation, i.e., the zoom operation and the rotate operation are done simultaneously in one single finger operation. These types of operations will be further described in other embodiments of the present invention.
According to the gesture implementation method on the intelligent device, provided by the embodiment of the invention, through the movement information of the single-finger operation and the gesture state corresponding to the single-finger operation, a single finger can complete relatively complex operations, the user operation is simplified, and the view of the user is not blocked.
Based on any one of the above embodiments, in the embodiment of the present invention, the step 103 further includes:
determining the gesture corresponding to the single-finger operation as a zooming operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation;
determining a scaling factor according to the movement information of the single finger operation on the target view;
and scaling the target view according to the scaling coefficient.
In the embodiment of the present invention, if the gesture state corresponding to the single-finger operation is that the gesture is continuously changing (i.e., pan.state is UIGestureRecognizerStateChanged), and the angle change value contained in the movement information of the single-finger operation on the target view is smaller than a preset first angle threshold, the gesture corresponding to the single-finger operation is a zoom operation.
The zoom operation zooms the target view. When zooming, a scaling coefficient is calculated from the offset of the single-finger operation on the target view; once the coefficient is obtained, the text in the target view can be scaled by it, and the coefficient is also passed to the parent view of the target view to change the parent view's structure, thereby zooming the target view. The parent view is the view one level above the target view, i.e., the view carrying the target view.
For example, in the iOS operating system, the offset of the single-finger operation on the target view is obtained in real time; the component translation.x of the offset on the X coordinate is then taken, and the ratio translation.x / 10 of that component to a custom value (here set to 10) is used as the scaling coefficient.
After the scaling coefficient is obtained, it is used to scale the font's pointSize, which sets the system font size, realizing the vector change of the text in the target view; the coefficient is also passed to the parent view of the target view to change the parent view's structure (frame), thereby zooming the target view.
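The coefficient computation, and one plausible way of applying it, can be sketched in C. The translation.x / 10 rule comes from the text; applying the coefficient multiplicatively to the point size and to the frame size is an assumption for illustration, as is the Frame struct standing in for a view frame:

```c
/* Hypothetical stand-in for a view frame (origin plus size). */
typedef struct { double x, y, w, h; } Frame;

/* Scaling coefficient per the text: the x-component of the pan
 * translation divided by a custom value (10 in the example). */
double zoom_scale(double translation_x) {
    return translation_x / 10.0;
}

/* Apply the coefficient to a font point size and to the parent view's
 * frame size; the text only says the coefficient scales pointSize and
 * the parent view's frame. */
void apply_zoom(double scale, double *point_size, Frame *frame) {
    *point_size *= scale;
    frame->w *= scale;
    frame->h *= scale;
}
```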
The gesture implementation method on the intelligent device provided by the embodiment of the invention realizes that a single finger finishes zooming operation, simplifies user operation, and does not cause view shielding on the user.
Based on any one of the above embodiments, in the embodiment of the present invention, the step 103 further includes:
determining that the gesture corresponding to the single-finger operation is a rotation operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation;
obtaining the angle of the target view to be rotated according to the angle change value of the single-finger operation;
and realizing the rotation of the target view according to the rotation angle.
In the embodiment of the present invention, if the gesture state corresponding to the single-finger operation is that the gesture is continuously changing (i.e., pan.state is UIGestureRecognizerStateChanged), and the angle change value contained in the movement information of the single-finger operation on the target view is greater than or equal to a preset first angle threshold, the gesture corresponding to the single-finger operation is a rotation operation.
Taking the iOS operating system as an example, the whole process of calculating the angle change value of the single-finger operation, obtaining the rotation angle from it, and rotating the target view is described below.
In the iOS operating system, calculating the coordinates of the center point after the target view rotates requires adding the relative offset to the x and y values of the center point of the parent view (i.e., pan.view) of the target view responding to the gesture:
CGPoint newCenter = CGPointMake(pan.view.center.x + translation.x, pan.view.center.y + translation.y);
and setting an anchor point, wherein in the embodiment of the invention, the anchor point is defined as a central point before rotation of the target view, so that the anchor point can be set in the following way: center is self.
After the coordinates of the center point after rotation and the coordinates of the anchor point are obtained, the arctangent value is obtained through the height and the width of the relative positions of the center point after rotation and the anchor point:
CGFloat angle1 = atan((newCenter.y - anthorPoint.y) / (newCenter.x - anthorPoint.x));
CGFloat angle2 = atan((pan.view.center.y - anthorPoint.y) / (pan.view.center.x - anthorPoint.x));
The difference is used to find the current rotation angle, i.e., CGFloat angle = angle1 - angle2; the rotation is then applied to the view's transform, where CGAffineTransformRotate() is used to set the view rotation and transform represents the rotation matrix of the last rotation operation.
To ensure that the next touch starts from the current state, a variable is needed to record the accumulated rotation value (for example, a property stored on self).
The gesture implementation method on the intelligent device provided by the embodiment of the invention realizes that a single finger completes the rotation operation, simplifies the user operation, and does not cause the view shielding to the user.
Based on any one of the above embodiments, in the embodiment of the present invention, the step 103 further includes:
and transmitting the rotation angle to a parent view of the target view, and realizing the rotation of the parent view according to the rotation angle.
In the embodiment of the present invention, the target view has a parent view and a positional association with it, so after the target view is rotated, the parent view needs to be rotated in the same way. Therefore, the calculated rotation angle is transmitted to the parent view to realize the rotation of the parent view.
The gesture implementation method on the intelligent device provided by the embodiment of the invention enables a single finger to complete the rotation operation, simplifies user operation, and does not occlude the user's view.
Based on any one of the above embodiments, in the embodiment of the present invention, step 103 further includes:
determining that the gesture corresponding to the single-finger operation is a screen capture operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation; and
executing the screen capture operation.
In this embodiment of the present invention, if the single-finger operation that triggers the screen capture operation is drawing a circle, determining that the gesture corresponding to the single-finger operation is the screen capture operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation further includes:
when the gesture state corresponding to the single-finger operation is that the gesture has ended (i.e., pan.state is UIGestureRecognizerStateEnded), and the angle change value included in the movement information of the single-finger operation on the target view is greater than or equal to a preset screen capture trigger angle threshold, determining that the gesture corresponding to the single-finger operation is the screen capture operation. Specifically, the gesture corresponding to the single-finger operation is circling to capture the screen.
When the screen capture operation is performed, the relevant functions in the operating system of the intelligent device can be called to realize the screen capture.
For example, in the iOS operating system, when the rotation angle corresponding to the single-finger rotation operation exceeds 360 degrees (the specific trigger angle can be set according to user habits), UIGraphicsBeginImageContextWithOptions is called to create the current drawing context, and renderInContext is then called on the layer of the parent view of the pan gesture to render the view layer into the currently drawn context (i.e., the screenshot). This generates a screenshot of the current control. For a single-level view, the target view is the current control; if the target view has a child view (another view carried on the target view), a parent view (a view carrying the target view), or deeper levels (e.g., a child view carried on the child view of the target view), the whole view hierarchy with the bearer relationship is the current control. If the parent view is the whole page, the whole screen is captured.
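The trigger condition described above can be reduced to a small predicate. The following Python sketch is illustrative: the state strings stand in for the UIGestureRecognizer states, and the 360-degree threshold is the configurable example value from the text.

```python
STATE_CHANGED = "changed"  # stand-in for UIGestureRecognizerStateChanged
STATE_ENDED = "ended"      # stand-in for UIGestureRecognizerStateEnded
SCREENSHOT_ANGLE_THRESHOLD = 360.0  # degrees; configurable per user habits

def is_screenshot_gesture(gesture_state, accumulated_angle_deg):
    """Screen capture triggers only when the gesture has ended AND the
    accumulated angle change meets the preset trigger angle threshold."""
    return (gesture_state == STATE_ENDED
            and abs(accumulated_angle_deg) >= SCREENSHOT_ANGLE_THRESHOLD)
```

A full circle drawn and released fires the capture; an unfinished or too-small circle does not.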
In the embodiment of the present invention, the single-finger operation for triggering the screen capture operation is drawing a circle; in other embodiments of the present invention, it may be another type of operation, such as drawing a nine-square grid, a triangle, a check mark, or a cross. As long as the operating system of the intelligent device can recognize the specific trigger action, the screen capture operation can be triggered, achieving the purpose of triggering screen capture with a single finger.
The gesture implementation method on the intelligent device provided by the embodiment of the invention enables a single finger to complete the screen capture operation, simplifies user operation, and does not occlude the user's view.
Based on any one of the above embodiments, in the embodiment of the present invention, step 103 further includes:
when the gesture state corresponding to the single-finger operation is that the gesture has ended (i.e., pan.state is UIGestureRecognizerStateEnded), and the offset included in the movement information of the single-finger operation on the target view is greater than or equal to the sharing trigger threshold, determining that the gesture corresponding to the single-finger operation is a sharing operation;
executing the sharing operation, which specifically includes: calling the UIGraphicsBeginImageContextWithOptions function to create the drawing context and the renderInContext function to take the screenshot, and then sharing the result of the screenshot operation.
In a specific embodiment, if the single-finger operation triggering the sharing operation is a slide-up operation, the condition that the offset included in the movement information of the single-finger operation on the target view is greater than or equal to the sharing trigger threshold includes:
the component in the vertical direction of the offset of the single-finger operation on the target view is greater than or equal to the slide-up sharing trigger threshold.
The component in the vertical direction of the offset of the single-finger operation on the target view can be obtained by the prior art. After the component is obtained, it is compared with the slide-up sharing trigger threshold; if the component is greater than or equal to the threshold and the gesture state corresponding to the single-finger operation is that the gesture has ended, the gesture corresponding to the single-finger operation can be determined to be the sharing operation.
When the sharing operation is executed, the screenshot operation is triggered first, and the result of the screenshot operation is then shared. The detailed implementation of the screenshot operation has been described in the previous embodiment of the present invention and is not repeated here. After the result of the screenshot operation is obtained, the corresponding function module in the operating system of the intelligent device is called to realize the sharing.
For example, in the iOS operating system, one-touch sharing can be implemented by setting a single-finger slide-up operation. Assuming that the slide-up sharing trigger threshold is 200px, when the vertical offset of the slide on the view exceeds 200px, the screenshot is triggered; after the current whole screen is captured, the sharing pop-up box is displayed.
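As a sketch of the slide-up example above (Python; names and the sign convention are illustrative, not from the patent):

```python
SHARE_TRIGGER_PX = 200  # the 200px slide-up threshold from the example

def is_share_gesture(gesture_state, upward_offset_px):
    """Slide-up sharing trigger: the gesture has ended AND the vertical
    component of the offset reaches the threshold. The upward distance is
    passed as a positive number (on iOS the raw translation.y of an upward
    pan is negative, so a caller would negate it first)."""
    return gesture_state == "ended" and upward_offset_px >= SHARE_TRIGGER_PX
```

A 250px upward slide that ends fires the share flow; a 150px slide, or one still in progress, does not.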
The gesture implementation method on the intelligent device provided by the embodiment of the invention enables a single finger to complete the sharing operation, simplifies user operation, and does not occlude the user's view.
Based on any one of the above embodiments, in an embodiment of the present invention, before step 101, the method further includes:
setting, for the control to which the target view belongs, a first gesture supporting single-finger operation.
The zoom operation, rotation operation, screen capture operation, or sharing operation performed by a single finger as described in the foregoing embodiments of the present invention is not supported by the operating systems of existing intelligent devices. Therefore, before step 101, a first gesture supporting single-finger operation needs to be preset; the first gesture may be any of the aforementioned single-finger zoom, rotation, screen capture, or sharing operations.
For example, in the iOS operating system, when the control to which the target view belongs satisfies userInteractionEnabled = YES (i.e., it responds to user events), a first gesture is added to that control. The first gesture is a UIPanGestureRecognizer gesture for representing a drag operation, and the maximum number of supported touches, maximumNumberOfTouches, of the first gesture is set to 1.
According to the gesture implementation method on the intelligent device provided by the embodiment of the present invention, a first gesture supporting single-finger operation is set for the control to which the target view belongs, so that the control can support the first gesture implemented in a single-finger manner, laying a foundation for simplifying user operation.
Fig. 2 is a schematic diagram of a gesture implementation apparatus on an intelligent device according to an embodiment of the present invention, and as shown in fig. 2, the gesture implementation apparatus on the intelligent device according to the embodiment of the present invention includes:
a single-finger operation identification module 201, configured to identify a single-finger operation of a user on a touch screen of the intelligent device and calculate the movement information of the single-finger operation on a target view, wherein the movement information comprises an offset and/or an angle change value;
a gesture state recognition module 202, configured to recognize a gesture state corresponding to the single-finger operation, wherein the gesture state comprises that the gesture is continuously changing or the gesture has ended; and
a gesture operation execution module 203, configured to determine a first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and execute an operation corresponding to the first gesture.
According to the gesture implementation apparatus on the intelligent device provided by the embodiment of the present invention, by combining the movement information of the single-finger operation with the gesture state corresponding to the single-finger operation, a single finger can complete relatively complex operations, simplifying user operation without occluding the user's view.
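The decision logic shared by the three modules can be summarized in one dispatcher. The Python sketch below is one possible reading of how movement information plus gesture state selects the first gesture; the zoom/rotate split angle and the other thresholds are hypothetical defaults, not values fixed by the patent.

```python
def classify_first_gesture(state, angle_change_deg, upward_offset_px,
                           zoom_rotate_split_deg=5.0,
                           screenshot_angle_deg=360.0,
                           share_offset_px=200):
    """Map (gesture state, movement information) to a first gesture.

    While the gesture is still changing, a small angle change means zoom and
    a large one means rotate; once the gesture has ended, a full circle means
    screenshot and a long upward slide means share. All thresholds are
    illustrative stand-ins for the configurable values in the text."""
    if state == "changed":
        return "zoom" if abs(angle_change_deg) < zoom_rotate_split_deg else "rotate"
    if state == "ended":
        if abs(angle_change_deg) >= screenshot_angle_deg:
            return "screenshot"
        if upward_offset_px >= share_offset_px:
            return "share"
    return None  # no recognized first gesture
```

In this reading, the identification module supplies angle_change_deg and upward_offset_px, the state recognition module supplies state, and the execution module acts on the returned label.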
Fig. 3 illustrates a schematic diagram of the physical structure of an electronic device. As shown in Fig. 3, the electronic device may include: a processor 310, a communication interface 320, a memory 330, and a communication bus 340, wherein the processor 310, the communication interface 320, and the memory 330 communicate with each other via the communication bus 340. The processor 310 may call logic instructions in the memory 330 to perform the following method: identifying a single-finger operation of a user on a touch screen of the intelligent device, and calculating movement information of the single-finger operation on a target view; recognizing a gesture state corresponding to the single-finger operation; and determining a first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and executing the operation corresponding to the first gesture.
It should be noted that, in specific implementation, the electronic device in this embodiment may be a server, a PC, or another device, as long as its structure includes the processor 310, the communication interface 320, the memory 330, and the communication bus 340 shown in Fig. 3, wherein the processor 310, the communication interface 320, and the memory 330 communicate with each other through the communication bus 340, and the processor 310 can call the logic instructions in the memory 330 to execute the above method. This embodiment does not limit the specific implementation form of the electronic device.
In addition, the logic instructions in the memory 330 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Further, embodiments of the present invention disclose a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the methods provided by the above method embodiments, for example, comprising: identifying a single-finger operation of a user on a touch screen of the intelligent device, and calculating movement information of the single-finger operation on a target view; recognizing a gesture state corresponding to the single-finger operation; and determining a first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and executing the operation corresponding to the first gesture.
In another aspect, an embodiment of the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program is implemented by a processor to perform the method provided by the foregoing embodiments, for example, including: identifying single-finger operation of a user on a touch screen of the intelligent equipment, and calculating movement information of the single-finger operation on a target view; recognizing a gesture state corresponding to the single-finger operation; and determining a first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and executing the operation corresponding to the first gesture.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A gesture implementation method on a smart device is characterized by comprising the following steps:
identifying a single-finger operation of a user on a touch screen of the smart device, and calculating movement information of the single-finger operation on a target view, wherein the movement information comprises an offset and/or an angle change value;
recognizing a gesture state corresponding to the single-finger operation, wherein the gesture state comprises that the gesture is continuously changing or the gesture has ended; and
and determining a first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and executing the operation corresponding to the first gesture.
2. The method of claim 1, wherein the calculating the movement information of the single-finger operation on the target view comprises:
calculating the offset of the single-finger operation on the target view according to the coordinates of the contact point of the finger and the touch screen of the intelligent device;
and/or obtaining the position of the first point on the target view after the single-finger operation according to the offset of the single-finger operation on the target view, and obtaining the angle change value according to the position of the first point on the target view before the single-finger operation and the position after the single-finger operation.
3. The method according to claim 1, wherein the determining a first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and the executing the operation corresponding to the first gesture further includes:
when the state is UIGestureRecognizerStateChanged, i.e., the gesture state corresponding to the single-finger operation is that the gesture is continuously changing, and the angle change value contained in the movement information of the single-finger operation on the target view is smaller than a preset first angle threshold, determining that the first gesture corresponding to the single-finger operation is a zooming operation;
determining a scaling coefficient according to the offset of the single-finger operation on the target view;
and according to the scaling coefficient, calling a pointSize function for setting the font size in the target view and a frame function for describing a structure in the parent view of the target view to realize the scaling of the target view.
4. The method according to claim 1, wherein the determining a first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and the executing the operation corresponding to the first gesture further includes:
when the state is UIGestureRecognizerStateChanged, i.e., the gesture state corresponding to the single-finger operation is that the gesture is continuously changing, and the angle change value contained in the movement information of the single-finger operation on the target view is greater than or equal to a preset first angle threshold, determining that the first gesture corresponding to the single-finger operation is a rotation operation;
obtaining the rotation angle of the target view according to the angle change value of the single finger operation;
and calling the CGAffineTransformRotate function to rotate the target view according to the rotation angle.
5. The method according to claim 1, wherein the determining a first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and the executing the operation corresponding to the first gesture further includes:
when the state is UIGestureRecognizerStateEnded, i.e., the gesture state corresponding to the single-finger operation is that the gesture has ended, and the angle change value contained in the movement information of the single-finger operation on the target view is greater than or equal to a preset screen capture trigger angle threshold, determining that the first gesture corresponding to the single-finger operation is a screen capture operation;
and calling the UIGraphicsBeginImageContextWithOptions function to create the drawing context and the renderInContext function to take the screenshot, so as to realize the screen capture operation.
6. The method according to claim 1, wherein the determining a first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and the executing the operation corresponding to the first gesture further includes:
when the state is UIGestureRecognizerStateEnded, i.e., the gesture state corresponding to the single-finger operation is that the gesture has ended, and the offset contained in the movement information of the single-finger operation on the target view is greater than or equal to a sharing trigger threshold, determining that the first gesture corresponding to the single-finger operation is a sharing operation;
and calling the UIGraphicsBeginImageContextWithOptions function to create the drawing context and the renderInContext function to take the screenshot, so as to realize the screenshot operation, and then sharing the result of the screenshot operation.
7. The gesture implementation method on the smart device according to any one of claims 1 to 6, wherein before the recognizing the single-finger operation of the user on the touch screen of the smart device, the method further comprises:
setting a first gesture on the control to which the target view belongs, wherein the first gesture is a UIPanGestureRecognizer gesture used for representing a drag operation, and the maximum number of supported touches, maximumNumberOfTouches, of the first gesture is set to 1.
8. A gesture implementation apparatus on a smart device, comprising: a single-finger operation identification module, configured to identify a single-finger operation of a user on a touch screen of the smart device and calculate movement information of the single-finger operation on a target view, wherein the movement information comprises an offset and/or an angle change value;
a gesture state recognition module, configured to recognize a gesture state corresponding to the single-finger operation, wherein the gesture state comprises that the gesture is continuously changing or the gesture has ended; and
and the gesture operation execution module is used for determining a first gesture corresponding to the single-finger operation according to the movement information of the single-finger operation on the target view and the gesture state corresponding to the single-finger operation, and executing the operation corresponding to the first gesture.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps of the gesture implementation method on the smart device according to any of claims 1 to 7.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, performs the steps of the gesture implementation method on a smart device according to any one of claims 1 to 7.
CN202010358529.6A 2020-04-29 2020-04-29 Gesture implementation method and device on intelligent device, electronic device and storage medium Pending CN111708472A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010358529.6A CN111708472A (en) 2020-04-29 2020-04-29 Gesture implementation method and device on intelligent device, electronic device and storage medium


Publications (1)

Publication Number Publication Date
CN111708472A true CN111708472A (en) 2020-09-25


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102375689A (en) * 2011-09-23 2012-03-14 上海量明科技发展有限公司 Method and system for operating touch screen
CN103186341A (en) * 2012-01-03 2013-07-03 深圳富泰宏精密工业有限公司 System and method for controlling zooming and rotation of file on touch screen
CN103699331A (en) * 2014-01-07 2014-04-02 东华大学 Gesture method for controlling screen zooming
CN103970434A (en) * 2013-01-28 2014-08-06 联想(北京)有限公司 Method and electronic equipment for responding operation
US20160210014A1 (en) * 2015-01-19 2016-07-21 National Cheng Kung University Method of operating interface of touchscreen of mobile device with single finger
CN106572238A (en) * 2016-10-12 2017-04-19 深圳众思科技有限公司 Method and device for capturing screen of terminal screen
CN108304116A (en) * 2018-02-27 2018-07-20 北京酷我科技有限公司 A kind of method of single finger touch-control interaction



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200925