CN116048350B - Screen capturing method and electronic equipment

Screen capturing method and electronic equipment

Info

Publication number
CN116048350B
Authority
CN
China
Prior art keywords
input event
target
motion sensor
finger joint
motion
Prior art date
Legal status
Active
Application number
CN202210806250.9A
Other languages
Chinese (zh)
Other versions
CN116048350A (en)
Inventor
Fu Bo (付博)
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202210806250.9A
Publication of CN116048350A
Application granted
Publication of CN116048350B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0412 Digitisers structurally integrated in a display
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0416 Control or interface arrangements specially adapted for digitisers
    • G06F 3/0418 Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment
    • G06F 3/04186 Touch location disambiguation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application provides a screen capturing method and an electronic device. The method comprises: acquiring a first input event of a user on a first display screen, wherein the first display screen comprises a plurality of detection areas and each detection area comprises at least one motion sensor; in response to the first input event, determining, from the plurality of detection areas, a target area in which the first input event is located; acquiring motion data of a target sensor, wherein the target sensor comprises at least the motion sensor in the target area; inputting the capacitance data and coordinate data of the first input event, together with the motion data of the target sensor, into a finger joint screen capture confirmation model to obtain a first prediction result of the model; and judging, according to the first prediction result, whether the first input event is a finger joint screen capture. With the technical scheme provided by the embodiment of the application, the electronic device can accurately judge that the user is performing a finger joint screen capture, which avoids the problems of a low success rate and a high false touch rate of finger joint screen captures and improves the user experience.

Description

Screen capturing method and electronic equipment
Technical Field
The application relates to the technical field of terminals, in particular to a screen capturing method and electronic equipment.
Background
Finger joint screen capturing is an auxiliary function for quick screen capturing: a user can capture the full screen by quickly double-tapping with a single finger joint, capture part of the screen by tapping and circling an area on the display screen with a single finger joint, or capture the full screen with a user-defined gesture.
At present, when a foldable electronic device performs finger joint screen capturing, it obtains the data used to detect the finger joint screen capture through only a single motion sensor, across different bending angles and different distances from that motion sensor. This easily leads to a low success rate and a high false touch rate for finger joint screen captures, which affects the user experience.
Disclosure of Invention
The embodiment of the application provides a screen capturing method and an electronic device, which are used to solve the problems of a low success rate and a high false touch rate when a user performs a finger joint screen capture.
In a first aspect, an embodiment of the present application provides a screen capturing method, comprising: acquiring a first input event of a user on a first display screen, wherein the first display screen comprises a plurality of detection areas and each detection area comprises at least one motion sensor; in response to the first input event, determining, from the plurality of detection areas, a target area in which the first input event is located; acquiring motion data of a target sensor, wherein the target sensor comprises at least the motion sensor in the target area; inputting the capacitance data and coordinate data of the first input event, together with the motion data of the target sensor, into a finger joint screen capture confirmation model to obtain a first prediction result of the model; and judging, according to the first prediction result, whether the first input event is a finger joint screen capture.
According to the screen capturing method provided by the embodiment of the application, the electronic device can determine the target area in which the first input event is located, and obtain the first prediction result from the capacitance data and coordinate data of the first input event and the motion data of the target sensor. The electronic device can therefore accurately judge, from the first prediction result, whether the user is performing a finger joint screen capture, which avoids the problems of a low success rate and a high false touch rate and improves the user experience.
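For illustration only, the following is a minimal Python sketch of the detection flow described in the first aspect. All names here (Region, read_sensor, model_predict, the "knuckle" label) are hypothetical stand-ins introduced for this sketch; they are not part of the claimed method.

    from dataclasses import dataclass
    from typing import Callable, List, Sequence, Tuple

    @dataclass
    class Region:
        """A rectangular detection area of the first display screen (assumed shape)."""
        rid: int
        x0: float
        y0: float
        x1: float
        y1: float

        def contains(self, x: float, y: float) -> bool:
            return self.x0 <= x < self.x1 and self.y0 <= y < self.y1

    def is_knuckle_screenshot(
        regions: List[Region],
        read_sensor: Callable[[int], Sequence[float]],  # region id -> motion data
        model_predict: Callable[..., str],              # confirmation-model stand-in
        xy: Tuple[float, float],
        capacitance: Sequence[float],
    ) -> bool:
        # Step 1: determine the target area in which the first input event is located.
        target = next(r for r in regions if r.contains(*xy))
        # Step 2: acquire the motion data of the target area's motion sensor.
        motion = read_sensor(target.rid)
        # Steps 3-4: feed capacitance data, coordinates and motion data to the
        # model, and judge from its prediction whether this is a knuckle tap.
        return model_predict(capacitance, xy, motion) == "knuckle"

Under this sketch, a caller would construct the detection areas once, wire read_sensor to the motion sensor of each body, and call is_knuckle_screenshot for each candidate tap that arrives while the screen is lit.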
In one implementation, before determining, in response to the first input event, the target area in which the first input event is located from the plurality of detection areas, the method further comprises: judging whether the first display screen is in a bright-screen state; responding to the first input event if the first display screen is in the bright-screen state; and not responding to the first input event if the first display screen is not in the bright-screen state. With this implementation, the electronic device responds to the first input event only when the first display screen is in the bright-screen state, which prevents false touches by the user.
In one implementation, the first display screen includes a first detection area and a second detection area, and acquiring the motion data of the target sensor, the target sensor comprising at least the motion sensor in the target area, comprises: if the target area is the first detection area, acquiring motion data of a first motion sensor, the first motion sensor being the motion sensor in the first detection area; and if the target area is the second detection area, acquiring motion data of a second motion sensor, the second motion sensor being the motion sensor in the second detection area. With this implementation, when the first input event is located in different target areas, the electronic device detects it with different target sensors, so that more accurate motion data is input to the finger joint screen capture confirmation model, which facilitates accurately judging that the user is performing a finger joint screen capture.
In one implementation, the first display screen includes a first detection area and a second detection area, and acquiring the motion data of the target sensor, the target sensor comprising at least the motion sensor in the target area, comprises: acquiring motion data of a first motion sensor and motion data of a second motion sensor, the first motion sensor being the motion sensor in the first detection area and the second motion sensor being the motion sensor in the second detection area. With this implementation, the electronic device detects the first input event with both motion sensors, so that more accurate motion data is input to the finger joint screen capture confirmation model, which facilitates accurately judging that the user is performing a finger joint screen capture.
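A sketch of the two target-sensor policies above, reusing the hypothetical read_sensor callable from the earlier sketch: the first policy reads only the sensor of the detection area that was hit, while the second reads both sensors regardless of where the event fell. The sensor ids 1 and 2 are assumptions of this sketch.

    def read_target_motion_data(target_rid: int, read_sensor, use_both: bool = False):
        """Return motion data keyed by sensor id (ids 1 and 2 assumed here).

        target_rid: 1 if the event fell in the first detection area,
                    2 if it fell in the second detection area.
        """
        if use_both:
            # Second implementation: detect the event with both motion sensors.
            return {1: read_sensor(1), 2: read_sensor(2)}
        # First implementation: read only the sensor of the area that was hit.
        return {target_rid: read_sensor(target_rid)}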
In one implementation, the first display screen includes a plurality of determination regions distributed sequentially along a direction away from the first motion sensor and the second motion sensor, each determination region corresponding to a determination threshold. Judging, according to the first prediction result, whether the first input event is a finger joint screen capture comprises: if the first prediction result is a target result, determining, from the plurality of determination regions, the target determination region in which the first input event is located, the target result including that the first input event is a finger joint screen capture; and judging, according to the motion data of at least one motion sensor and a target determination threshold, whether the first input event is a finger joint screen capture, the target determination threshold being the determination threshold of the target determination region. Because taps of the same force in different determination regions produce different motion data at the motion sensor, setting a different determination threshold for each determination region allows the electronic device to accurately distinguish whether the first input event is a finger joint screen capture.
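A sketch of the per-region threshold lookup. The number of determination regions, their band edges, and the threshold values below are invented for illustration; the patent only requires that the regions be ordered away from the sensors and that each region carry its own threshold. The decreasing direction of the values is an assumption based on the observation (cf. fig. 4) that the same tap produces weaker measured motion data farther from the sensor; actual values would come from calibration.

    # Determination regions ordered along the direction away from the motion
    # sensors, as (near edge, far edge, threshold); thresholds are assumed to
    # decrease with distance to compensate for signal attenuation.
    DETERMINATION_BANDS = [
        (0.0, 0.3, 9.0),  # threshold in m/s^2, made-up example values
        (0.3, 0.6, 7.5),
        (0.6, 1.0, 6.0),
    ]

    def target_determination_threshold(distance_ratio: float) -> float:
        """Map a normalized distance from the sensors to that band's threshold."""
        for near, far, threshold in DETERMINATION_BANDS:
            if near <= distance_ratio < far:
                return threshold
        return DETERMINATION_BANDS[-1][2]  # clamp points at the far edge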
In one implementation, judging, according to the motion data of the at least one motion sensor and the target determination threshold, whether the first input event is a finger joint screen capture comprises: if the target area is the first detection area, judging whether the motion data of the first motion sensor is greater than or equal to the target determination threshold; if the motion data of the first motion sensor is greater than or equal to the target determination threshold, determining that the first input event is a finger joint screen capture; and if the motion data of the first motion sensor is less than the target determination threshold, determining that the first input event is not a finger joint screen capture. With this implementation, the electronic device can accurately distinguish, through the first motion sensor, whether a first input event in the first detection area is a finger joint screen capture.
In one implementation, judging, according to the motion data of the at least one motion sensor and the target determination threshold, whether the first input event is a finger joint screen capture comprises: if the target area is the second detection area, judging whether the motion data of the second motion sensor is greater than or equal to the target determination threshold; if the motion data of the second motion sensor is greater than or equal to the target determination threshold, determining that the first input event is a finger joint screen capture; and if the motion data of the second motion sensor is less than the target determination threshold, determining that the first input event is not a finger joint screen capture. With this implementation, the electronic device can accurately distinguish, through the second motion sensor, whether a first input event in the second detection area is a finger joint screen capture.
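The two implementations above reduce to the same comparison; only the sensor being read changes with the target area. A condensed sketch, with the same assumed sensor ids as before and a hypothetical read_peak callable returning a sensor's peak motion value:

    def confirm_with_area_sensor(target_rid: int, read_peak, threshold: float) -> bool:
        """Confirm a model-flagged event using the target area's own sensor.

        target_rid selects the first (1) or second (2) motion sensor;
        read_peak(id) returns that sensor's peak motion value, e.g. peak
        acceleration magnitude in m/s^2.
        """
        motion_value = read_peak(target_rid)
        # >= threshold: confirmed as a finger joint screen capture;
        # <  threshold: rejected as an ordinary touch.
        return motion_value >= threshold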
In one implementation, the target determination threshold includes a first sub-threshold and a second sub-threshold, and judging, according to the motion data of the at least one motion sensor and the target determination threshold, whether the first input event is a finger joint screen capture comprises: judging whether the motion data of the first motion sensor is greater than or equal to the first sub-threshold, and judging whether the motion data of the second motion sensor is greater than or equal to the second sub-threshold; if the motion data of the first motion sensor is greater than or equal to the first sub-threshold and the motion data of the second motion sensor is greater than or equal to the second sub-threshold, determining that the first input event is a finger joint screen capture; and if the motion data of the first motion sensor is less than the first sub-threshold and/or the motion data of the second motion sensor is less than the second sub-threshold, determining that the first input event is not a finger joint screen capture. With this implementation, the electronic device can accurately distinguish, through the first motion sensor and the second motion sensor, whether the first input event is a finger joint screen capture.
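A sketch of this dual-sensor variant: the first sensor's motion data is checked against the first sub-threshold and the second sensor's against the second sub-threshold, and both checks must pass before the event is confirmed.

    def confirm_with_both_sensors(motion1: float, motion2: float,
                                  sub_threshold1: float, sub_threshold2: float) -> bool:
        """Both motion sensors must reach their respective sub-thresholds."""
        return motion1 >= sub_threshold1 and motion2 >= sub_threshold2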
In one implementation, the first display screen includes at least one invalid input area, and judging, according to the first prediction result, whether the first input event is a finger joint screen capture comprises: if the first prediction result is the target result, judging whether the first input event is located in an invalid input area, the target result including that the first input event is a finger joint screen capture; and if the first input event is located in the invalid input area, determining that the first input event is not a finger joint screen capture. With this implementation, by judging whether the first input event is located in an invalid input area, the electronic device can avoid false touches at the edge of the first display screen.
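The invalid input area can be modeled as edge bands around the screen, e.g. over the virtual close and return keys. A sketch; the 3% margin below is an assumed value, not taken from the patent.

    EDGE_MARGIN = 0.03  # assumed: outer 3% of each screen dimension is invalid

    def in_invalid_input_area(x: float, y: float, width: float, height: float) -> bool:
        """True if the event lies in an edge band (e.g. over the virtual
        close/return keys); such events never trigger a screen capture."""
        mx, my = width * EDGE_MARGIN, height * EDGE_MARGIN
        return x < mx or x > width - mx or y < my or y > height - my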
In one implementation, the method further comprises: acquiring a second input event of the user on a second display screen, the second display screen corresponding to the first motion sensor; acquiring motion data of the first motion sensor in response to the second input event; inputting the capacitance data and coordinate data of the second input event, together with the motion data of the first motion sensor, into the finger joint screen capture confirmation model to obtain a second prediction result of the model; and judging, according to the second prediction result, whether the second input event is a finger joint screen capture. With this implementation, the electronic device obtains the second prediction result from the capacitance data and coordinate data of the second input event and the motion data of the first motion sensor. The electronic device can therefore accurately judge, from the second prediction result, that the user is performing a finger joint screen capture on the second display screen, which avoids the problems of a low success rate and a high false touch rate and improves the user experience.
In one implementation, before acquiring the motion data of the first motion sensor in response to the second input event, the method further comprises: judging whether the second display screen is in a bright-screen state; responding to the second input event if the second display screen is in the bright-screen state; and not responding to the second input event if the second display screen is not in the bright-screen state. With this implementation, the electronic device responds to the second input event only when the second display screen is in the bright-screen state, which prevents false touches by the user.
In one implementation, the second display screen includes a plurality of determination regions distributed sequentially along a direction away from the first motion sensor, each determination region corresponding to a determination threshold. Judging, according to the second prediction result, whether the second input event is a finger joint screen capture comprises: judging whether the motion data of the first motion sensor is greater than or equal to a target determination threshold, the target determination threshold being the determination threshold of the target determination region; if the motion data of the first motion sensor is greater than or equal to the target determination threshold, determining that the second input event is a finger joint screen capture; and if the motion data of the first motion sensor is less than the target determination threshold, determining that the second input event is not a finger joint screen capture. With this implementation, by setting a different determination threshold for each determination region, the electronic device can more accurately distinguish whether the second input event is a finger joint screen capture.
In one implementation, the second display screen includes at least one invalid input area, and judging, according to the second prediction result, whether the second input event is a finger joint screen capture comprises: if the second prediction result is a target result, judging whether the second input event is located in an invalid input area, the target result including that the second input event is a finger joint screen capture; and if the second input event is located in the invalid input area, determining that the second input event is not a finger joint screen capture. With this implementation, by judging whether the second input event is located in an invalid input area, the electronic device can avoid false touches at the edge of the second display screen.
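The second-display flow mirrors the first, except that only the first motion sensor is read. A condensed sketch combining the bright-screen gate, the model, the per-region threshold, and the invalid input area check; as before, every name here is a hypothetical stand-in, and in_invalid_input_area is assumed to be a closure over the second screen's size:

    def is_knuckle_screenshot_second_display(
        xy, capacitance, screen_lit, read_first_sensor_peak,
        model_predict, threshold, in_invalid_input_area,
    ) -> bool:
        """Judge a second-display tap using only the first motion sensor."""
        if not screen_lit():
            return False                    # respond only in the bright-screen state
        if in_invalid_input_area(*xy):
            return False                    # edge touch: never a screen capture
        motion = read_first_sensor_peak()   # assumed scalar peak motion value
        if model_predict(capacitance, xy, motion) != "knuckle":
            return False
        return motion >= threshold          # per-region threshold confirmation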
In one implementation, the motion sensor includes one or more of the following: an acceleration sensor, a gyroscope, and a geomagnetic sensor.
In one implementation, the shape of the determination region includes one or more of the following: circular, square, and rectangular.
In a second aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory; the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the screen capture method of the above aspects and implementations thereof.
In a third aspect, embodiments of the present application also provide a chip system, where the chip system includes a processor and a memory, and the memory stores program instructions that, when executed by the processor, cause the chip system to perform the methods in the above aspects and their respective implementations. For example, information related to the above method is generated or processed.
In a fourth aspect, embodiments of the present application also provide a computer-readable storage medium having stored therein program instructions that, when run on a computer, cause the computer to perform the methods of the above aspects and implementations thereof.
In a fifth aspect, embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to perform the methods of the above aspects and their respective implementations.
Drawings
FIG. 1 is a schematic diagram of a foldable electronic device provided by an embodiment of the present application;
FIG. 2 is a schematic view of a user's finger joint screenshot provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a user finger joint screen capture failure scenario provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a user finger joint screenshot failure scenario provided by another embodiment of the present application;
FIG. 5 is a schematic diagram of a touch error scenario provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a touch error scenario according to another embodiment of the present application;
FIG. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a software structure of an electronic device according to an embodiment of the present application;
FIG. 9 is an exemplary flowchart of a screen capture method provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a first display screen detection area setting manner according to an embodiment of the present application;
FIG. 11 is a diagram of a connection architecture of an electronic device system according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a method for acquiring coordinate data according to an embodiment of the present application;
FIG. 13 is a diagram of another electronic device system connection architecture provided by an embodiment of the present application;
FIG. 14 is an exemplary flowchart of a screen capture method provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of a determination region setting manner according to an embodiment of the present application;
FIG. 16 is a schematic diagram of another determination region setting manner according to an embodiment of the present application;
FIG. 17 is a schematic diagram of still another determination region setting manner according to an embodiment of the present application;
FIG. 18 is a schematic diagram of a determination scenario of a target determination region according to an embodiment of the present application;
FIG. 19 is a schematic diagram showing the variation of gradient parameters of different position signals according to an embodiment of the present application;
FIG. 20 is a schematic diagram of an inactive input area provided by an embodiment of the present application;
FIG. 21 is another exemplary flowchart of a screen capture method provided by an embodiment of the present application;
fig. 22 is a schematic structural diagram of a screen capturing device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application.
In the description of the present application, "/" means "or" unless otherwise indicated; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. Furthermore, "at least one" means one or more, and "a plurality" means two or more. The terms "first," "second," and the like do not limit the number or order of execution, and objects described as "first" and "second" are not necessarily different.
In the present application, the words "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The application scenario of the embodiment of the present application is first described with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a foldable electronic device according to an embodiment of the present application. A foldable electronic device is an electronic device whose display screen can be bent through 360 degrees, and generally includes: a first body 101, a second body 102 rotatably connected to the first body 101 through a rotation shaft, and a display screen 103 provided on one side of the first body 101 and the second body 102. Based on this structure, the foldable electronic device may be configured as an inward-folding foldable electronic device as shown in fig. 1A, an outward-folding foldable electronic device as shown in fig. 1B, or an up-and-down folding electronic device as shown in fig. 1C. Through the relative rotation of the first body 101 and the second body 102, the foldable electronic device can make the folded display screen 103 face the user, or make the folded display screen 103 face away from the user. To facilitate detecting the change of the angle between the first body 101 and the second body 102 during folding and unfolding, some foldable electronic devices are provided with a main motion sensor 1011 on the first body 101 and an auxiliary motion sensor 1021 on the second body 102, so as to obtain the current angle-change information of the foldable electronic device through the two motion sensors.
Fig. 2 is a schematic view of a finger joint screen capturing scene of a user according to an embodiment of the present application. Finger joint screen capturing is an auxiliary function for quick screen capturing: a user can capture the full screen by quickly double-tapping the display screen 103 with a single finger joint as shown in fig. 2A, capture part of the screen by tapping and circling on the display screen 103 with a single finger joint as shown in fig. 2B, or capture the full screen by sliding a custom gesture on the display screen 103 as shown in fig. 2C. When a typical non-foldable electronic device detects that the user taps the display screen 103 with a finger joint, it may judge whether the current tap is a finger joint screen capture from the contact area of the tap on the display screen 103 and the motion data generated by the contact, or from the number of touch points of the tap on the display screen 103 and the motion data generated by the contact. Currently, a foldable electronic device using the existing finger joint screen capturing approach only obtains motion data for the whole display screen 103 through the main motion sensor 1011 of the first body 101 to judge the finger joint screen capture, and does not apply the auxiliary motion sensor 1021 of the second body 102 to the finger joint screen capturing scene.
Since the motion sensor of a typical non-foldable electronic device is usually arranged on the longitudinal center axis of the body, it can uniformly detect the motion data on both sides of that axis. A foldable electronic device, by contrast, detects the motion data of the entire display screen 103 only through the main motion sensor 1011 of the first body 101, so both the position of the main motion sensor 1011 and the bending angle of the foldable electronic device affect its detection accuracy. As a result, judging whether the user is performing a finger joint screen capture only from the motion data of the main motion sensor 1011 combined with other factors is inaccurate, and the foldable electronic device has a low success rate and a high false touch rate for finger joint screen captures.
Fig. 3 is a schematic diagram of a user finger joint screen capturing failure scene provided by an embodiment of the application. A foldable electronic device can take many bending angles. As shown in fig. 3A, when the included angle between the display screen of the first body 101 and the display screen of the second body 102 is 180 degrees, the user taps the display screen of the second body 102 with a certain force to perform a finger joint screen capture, and the capture succeeds. As shown in fig. 3B, when the included angle between the two display screens is 120 degrees, the force points at the same tap position differ, and the motion data acquired by the main motion sensor 1011 of the first body 101 for the same position on the display screen 103 differs at different bending angles; so when the user taps the display screen of the second body 102 with the same force, the finger joint screen capture may fail.
Fig. 4 is a schematic diagram of a user finger joint screen capturing failure scenario according to another embodiment of the present application. The motion data acquired by the foldable electronic device depends on the user's tapping position. For example, as shown in fig. 4A, the user taps at point A of the display screen 103 with a certain force and acceleration to perform a finger joint screen capture, and the acceleration acquired by the main motion sensor 1011 of the first body 101 is 10 m/s². As shown in fig. 4B, the user taps at point B of the display screen 103 with the same force and the same acceleration, but point B is far from the main motion sensor 1011, so the acceleration acquired by the main motion sensor 1011 may be 8 m/s². Because the acceleration acquired by the main motion sensor 1011 is smaller than the actual value of the user's tap due to the influence of distance, the foldable electronic device may fail to perform the finger joint screen capture.
Fig. 5 is a schematic diagram of a user touch error scenario provided in an embodiment of the present application. As shown in fig. 5, when the foldable electronic device judges whether the current tap is a finger joint screen capture from the contact area obtained through the display screen 103 and the motion data obtained through the main motion sensor 1011 of the first body 101, false touches by the user can occur. For example, when the user has no screen capturing intention and is browsing a web page, the contact area produced by a finger pad tapping the display screen 103 is similar to that produced by a finger joint, and the acceleration of a finger pad tap acquired by the main motion sensor 1011 is similar to that of a finger joint tap, so the foldable electronic device may misjudge the finger pad tap as a finger joint screen capture operation and capture the screen.
Fig. 6 is a schematic diagram of a user touch error scenario according to another embodiment of the present application. As shown in fig. 6, the edges of a foldable electronic device often display virtual keys such as close and return. Because the current finger joint screen capturing approach cannot accurately judge whether the user's current tap is a finger joint screen capture, when the user taps a virtual key such as close or return, the foldable electronic device may misrecognize the tap as a finger joint screen capture and capture the screen instead of responding to the user's close or return action, which affects the user experience.
In summary, during a finger joint screen capture, the electronic device cannot accurately judge whether the user is performing a finger joint screen capture, so the user's finger joint screen captures have a low success rate and a high false touch rate, which affects the user experience.
To solve these problems of the related art, an embodiment of the application provides a screen capturing method. The screen capturing method provided by the embodiment of the application can be applied to electronic devices, including but not limited to foldable electronic devices. The electronic device includes, but is not limited to, a mobile phone, a tablet computer, a personal computer, a workstation device, a large-screen device (such as a smart screen or a smart television), a wearable device (such as a smart band or a smart watch), a handheld game console, a home game console, a virtual reality device, an augmented reality device, a mixed reality device, a vehicle-mounted intelligent terminal, and the like.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application. As shown in fig. 7, the electronic device 100 may include a processor 110, a memory 120, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, a camera 192, a display 193, and a subscriber identity module (subscriber identification module, SIM) card interface 194, etc. The sensor module 180 may include a touch sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a geomagnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, and the like. Among them, the gyro sensor 180B, the air pressure sensor 180C, the geomagnetic sensor 180D, the acceleration sensor 180E, and the like can be used to detect a motion state of an electronic apparatus, and thus, may also be referred to as a motion sensor.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
Memory 120 may be used to store computer-executable program code that includes instructions. The memory 120 may include a stored program area and a stored data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the memory 120 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications and data processing of the electronic device 100 by executing instructions stored in the memory 120 and/or instructions stored in a memory provided in the processor.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. It can also be used to connect a headset and play audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the memory 120, the display 193, the camera 192, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 193. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 193, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 193 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display 193 is used to display images, videos, and the like. The display 193 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, electronic device 100 may include 1 or N display screens 193, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 192, a video codec, a GPU, a display screen 193, an application processor, and the like.
The ISP is used to process the data fed back by the camera 192. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be located in the camera 192.
The camera 192 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, RYYB, YUV, or the like format. In some embodiments, the electronic device 100 may include 1 or N cameras 192, N being a positive integer greater than 1.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The touch sensor 180A is also referred to as a "touch device." The touch sensor 180A may be disposed on the display 193, and the touch sensor 180A and the display 193 form a touch screen, also referred to as a "touchscreen." The touch sensor 180A is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display 193. In other embodiments, the touch sensor 180A may also be disposed on a surface of the electronic device 100 at a location different from that of the display 193.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake: for example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance the lens module needs to compensate according to that angle, and lets the lens counteract the shake of the electronic device 100 through reverse motion, thereby realizing anti-shake. The gyro sensor 180B may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The geomagnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip cover using the geomagnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the geomagnetic sensor 180D, and then set features such as automatic unlocking on flip opening according to the detected open or closed state of the holster or of the flip.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes), and may detect the magnitude and direction of gravity when the electronic device 100 is stationary. It may also be used to recognize the attitude of the electronic device, and is applied in landscape/portrait switching, pedometers, and similar applications.
The distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, the electronic device 100 may use the distance sensor 180F to measure distance in order to achieve quick focusing.
The proximity light sensor 180G may include, for example, a light emitting diode and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it may be determined that there is an object in the vicinity of the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object in the vicinity of the electronic device 100. The electronic device 100 can detect that the user holds the electronic device 100 close to the ear by using the proximity light sensor 180G, so as to automatically extinguish the screen for the purpose of saving power. The proximity light sensor 180G may also be used in holster mode, pocket mode to automatically unlock and lock the screen.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint feature to unlock the fingerprint, access the application lock, photograph the fingerprint, answer the incoming call, etc.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J in order to lower power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming-call vibration alerts as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. Touch operations applied to different areas of the display screen 193 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (such as time reminders, message receipt, alarm clocks, games, etc.) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The SIM card interface 194 is used to connect a SIM card. A SIM card may be inserted into or removed from the SIM card interface 194 to bring it into or out of contact with the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 194 may support a Nano SIM card, a Micro SIM card, etc. Multiple cards may be inserted into the same SIM card interface 194 simultaneously; the types of the cards may be the same or different. The SIM card interface 194 may also be compatible with different types of SIM cards and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card, which can be embedded in the electronic device 100 and cannot be separated from it.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 8 is a software configuration block diagram of the electronic device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime (Android Runtime) and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 8, the application package may include applications such as battery management, camera, gallery, calendar, calls, maps, navigation, music, video, and SMS messaging.
The application framework layer provides an application program interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 8, the application framework layer may include a window manager, an input manager InputManager, a sensor manager SensorManager, a phone manager, a resource manager, a notification manager, and so forth.
The input manager may be used to monitor input events of the user, such as click events, swipe events, etc., performed by the user's finger on the display screen 193 of the electronic device 100. By listening for input events, the electronic device 100 can determine whether the electronic device is being used.
The sensor manager is used to monitor data returned by the various sensors in the electronic device, such as motion sensor data, proximity sensor data, temperature sensor data, and the like. Using the data returned by the various sensors, the electronic device can determine whether it is being shaken, whether the display screen 193 is occluded, and so on.
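For illustration only, the following minimal sketch shows how a component at this layer could subscribe to accelerometer samples through the standard Android SensorManager API mentioned above; the forwarding step inside the callback is an assumed placeholder, not behavior prescribed by this embodiment.

    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public final class MotionDataListener implements SensorEventListener {
        public static void register(Context context) {
            SensorManager sm =
                    (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
            Sensor accel = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
            // SENSOR_DELAY_GAME gives a sampling rate fast enough to catch taps.
            sm.registerListener(new MotionDataListener(), accel,
                    SensorManager.SENSOR_DELAY_GAME);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            float ax = event.values[0]; // acceleration along x, in m/s^2
            float ay = event.values[1]; // acceleration along y
            float az = event.values[2]; // acceleration along z
            // Hand the sample to whichever module consumes motion data.
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
            // Accuracy changes are not needed for this sketch.
        }
    }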
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part comprises the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine performs functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media libraries may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The following describes exemplary steps of a screen capturing method according to an embodiment of the present application.
FIG. 9 is an exemplary flowchart of a screen capture method provided by an embodiment of the present application. As shown in fig. 9, the screen capturing method specifically may include the following steps S901 to S905:
in step S901, a first input event of a user to a first display screen 911 is acquired, the first display screen 911 including a plurality of detection areas, each detection area including at least one motion sensor.
The first display screen 911 may be a flexible touch screen laid over the first body 101 and the second body 102 of the electronic device, and a user may bend the first display screen 911 by rotating the first body 101 and the second body 102 relative to each other. When the user clicks/taps the first display screen 911 with a fingertip, a finger pad, a finger joint, or a stylus, the electronic device can detect a first input event occurring on the first display screen 911.
Fig. 10 is a schematic diagram of a manner of setting detection areas on the first display screen according to an embodiment of the present application. As shown in fig. 10, taking an outward-folding electronic device as an example, the electronic device is provided with a first detection area 921 and a second detection area 922 on the first display screen 911, wherein the first detection area 921 is the area where the first display screen 911 covers the first body 101, and the second detection area 922 is the area where the first display screen 911 covers the second body 102.
Wherein the first detection area 921 is provided with a first motion sensor 931 and the second detection area 922 is provided with a second motion sensor 932. The first motion sensor 931 may be one or more of an acceleration sensor, a gyroscope, and a geomagnetic sensor, and the electronic device may further determine whether the first input event is a finger joint screen shot according to the motion data of the first motion sensor 931. The second motion sensor 932 may be one or more of an acceleration sensor, a gyroscope, and a geomagnetic sensor, and the electronic device may further determine whether the first input event is a finger joint screen capture according to the motion data of the second motion sensor 932.
After the electronic device detects the first input event occurring on the first display screen 911, the first input event is acquired via the input subsystem.
Fig. 11 is a diagram of a connection structure of an electronic device system according to an embodiment of the present application. As shown in fig. 11, after the electronic device acquires the first input event, the first input event is input to the dynamic link library. A dynamic link library is a shared library of functions that provides a way for a process to call functions that do not belong to its executable code.
It will be appreciated that a first input event may arise in a variety of scenarios. In some scenarios the first input event is one that the user intends to trigger on the electronic device; in other scenarios the first input event results from an accidental touch.
In one implementation, the electronic device determines the user's current intention to use the electronic device based on whether the first display screen 911 is in a bright-screen or off-screen state. If the first display screen 911 is in the bright-screen state, indicating that the user is using the electronic device, the electronic device responds to the first input event. If the first display screen 911 is in the off-screen state, indicating that the user currently has no intention to use the electronic device, the electronic device does not respond to the first input event. By judging the user's current intention to use the electronic device, the electronic device can avoid misjudging a first input event generated by an accidental touch as a finger joint screen capture.
Specifically, the electronic device may determine, through the Power Manager (PowerManager), whether the first display screen 911 is in the bright-screen state, and send the current bright-screen or off-screen state of the first display screen 911 to the dynamic link library. When the first display screen 911 is in the bright-screen state, the dynamic link library of the electronic device acquires the first input event from the input subsystem for further judgment. When the first display screen 911 is in the off-screen state, the dynamic link library of the electronic device releases the first input event immediately after acquiring it from the input subsystem and determines that the first input event is not a finger joint screen capture.
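A minimal sketch of this bright-screen gate is given below. PowerManager.isInteractive() is the standard Android call for querying the interactive (bright-screen) state; handleKnuckleCandidate() is a hypothetical downstream handler standing in for the dynamic-link-library stage.

    import android.content.Context;
    import android.os.PowerManager;
    import android.view.MotionEvent;

    final class ScreenStateGate {
        private final PowerManager powerManager;

        ScreenStateGate(Context context) {
            powerManager = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
        }

        void onFirstInputEvent(MotionEvent event) {
            if (powerManager.isInteractive()) {
                // Bright screen: the user may intend a knuckle screenshot,
                // so pass the event on for further judgment.
                handleKnuckleCandidate(event);
            }
            // Off screen: release the event immediately; by definition it is
            // not a finger joint screen capture.
        }

        private void handleKnuckleCandidate(MotionEvent event) {
            // Hypothetical: forward to the knuckle-screenshot analysis stage.
        }
    }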
In step S902, in response to the first input event, a target area in which the first input event is located is determined from the plurality of detection areas.
Fig. 12 is a schematic diagram of a manner of acquiring coordinate data according to an embodiment of the present application. As shown in fig. 12, the first display screen 911 may determine the capacitance data and the coordinate data of the first input event, and the electronic device may determine the target area where the first input event is located from among the plurality of detection areas through the capacitance data and the coordinate data of the first input event.
The first display screen 911 is typically a capacitive flexible touch screen; the four sides of the first display screen 911 are plated with elongated electrodes, forming a low-voltage alternating electric field between the electrodes. When the user taps the first display screen 911 with a finger joint, a coupling capacitance forms between the contact area of the finger joint on the first display screen 911 and the electrodes due to the human body's electric field; based on this, the touch chip can acquire capacitance data and transmit it to the driver layer.
It should be noted that, in the embodiment of the present application, after the touch chip obtains the capacitance data, it transmits the capacitance data to the driver layer; the driver layer transmits the capacitance data to the touch panel (TP) effect module; the touch effect module calculates the coordinate data from the capacitance data and transmits the capacitance data and the coordinate data back to the driver layer; the driver layer then passes the capacitance data and the coordinate data to the input subsystem, so that the input subsystem can provide them to the dynamic link library.
The capacitance data may be, for example, 7×7 capacitance matrix data.
For example, when the electronic device is provided with the first detection area 921 and the second detection area 922, the electronic device may determine, through the capacitance data and the coordinate data of the first input event, that the target area where the first input event is located is the first detection area 921 or the second detection area 922. By determining the target area where the first input event is located, the electronic device can further acquire the motion data of the target sensor corresponding to the target area.
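To make the data flow concrete, the sketch below bundles the capacitance matrix and coordinates into one payload and hit-tests the coordinates against the two detection areas. The field names and the fold-line x coordinate are assumptions introduced for illustration, not values from this embodiment.

    /** Assumed payload shape after the TP effect module computes coordinates. */
    final class TouchEventPayload {
        final float[][] capacitance; // e.g. a 7x7 capacitance matrix
        final float x;               // coordinate data, in pixels
        final float y;

        TouchEventPayload(float[][] capacitance, float x, float y) {
            this.capacitance = capacitance;
            this.x = x;
            this.y = y;
        }
    }

    final class DetectionAreas {
        static final int FIRST_AREA = 921;  // covers the first body 101
        static final int SECOND_AREA = 922; // covers the second body 102

        // Assumed x coordinate of the fold line between the two bodies.
        static final float FOLD_BOUNDARY_X = 1344f;

        /** Returns the target area that contains the first input event. */
        static int targetAreaOf(TouchEventPayload p) {
            return (p.x < FOLD_BOUNDARY_X) ? FIRST_AREA : SECOND_AREA;
        }
    }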
In step S903, motion data of a target sensor is acquired, where the target sensor includes at least a motion sensor in the target area.
In one implementation, when the target area is the first detection area 921, which indicates that the user has a first input event in the first detection area 921, the motion data of the target sensor acquired by the electronic device may be the motion data of the first motion sensor 931. The motion data of the first motion sensor 931 may be, for example, a combination of one or more of data of an acceleration sensor, data of a gyroscope, data of a geomagnetic sensor, and the like.
For example, the electronic device may acquire data of a gyroscope or data of a geomagnetic sensor for determining whether the electronic device is subjected to an angle change due to a user knuckle tap.
For example, the electronic device may acquire data of the acceleration sensor for determining whether the electronic device is changed in position due to a user knuckle tap.
Here, the motion data of the first motion sensor 931 includes, but is not limited to, the data types shown in the embodiments of the present application; the embodiments of the present application do not limit this.
In one implementation, when the target area is the second detection area 922, indicating that the user has a first input event in the second detection area 922, the motion data of the target sensor acquired by the electronic device may be the motion data of the second motion sensor 932. The motion data of the second motion sensor 932 is obtained in a similar manner to that of the first motion sensor 931, and the embodiment of the present application will not be described herein.
Therefore, when the first input event occurs in different detection areas, the electronic device can acquire the corresponding motion data through the motion sensor corresponding to each detection area, making the motion data more accurate; by further processing the acquired motion data, the electronic device can judge more accurately whether the first input event is a finger joint screen capture.
In one implementation, when the target area is either the first detection area 921 or the second detection area 922, the motion data of the target sensor acquired by the electronic device includes the motion data of the first motion sensor 931 and the motion data of the second motion sensor 932. The manner of acquiring the motion data of the first motion sensor 931 and the manner of acquiring the motion data of the second motion sensor 932 are similar to those of the above-described embodiment, and the description thereof will be omitted.
Here, the motion data of the second motion sensor 932 includes, but is not limited to, the data types shown in the embodiments of the present application; the embodiments of the present application do not limit this.
Therefore, when a first input event occurs in any detection area, the electronic device can simultaneously acquire the motion data of both motion sensors; the motion data of the two motion sensors determine the actual motion caused by the first input event more accurately, and by further processing the acquired motion data the electronic device can judge more accurately whether the first input event is a finger joint screen capture.
In step S904, the capacitance data, the coordinate data, and the motion data of the target sensor of the first input event are input to the finger joint screenshot confirmation model to obtain a first prediction result of the finger joint screenshot confirmation model.
In one implementation, when the first input event is located in the first detection area 921, the electronic device inputs the capacitance data, the coordinate data, and the motion data of the first motion sensor 931 of the first input event to the first finger joint screenshot confirmation model to obtain a first prediction result of the first finger joint screenshot confirmation model. Here, the first finger joint screen capturing confirmation model in this embodiment is a neural network model pre-trained by the capacitance data, the coordinate data, and the motion data of the first motion sensor 931.
In one implementation, when the first input event is located in the second detection region 922, the electronic device inputs the capacitance data, the coordinate data, and the motion data of the second motion sensor 932 of the first input event to the second finger joint screenshot confirmation model to obtain a first prediction result of the second finger joint screenshot confirmation model. Here, the second finger joint screen capturing confirmation model in the present embodiment is a neural network model pre-trained by the capacitance data, the coordinate data, and the motion data of the second motion sensor 932.
As further shown in fig. 11, in a specific implementation, the electronic device is provided with a first finger joint screen capture confirmation model and a second finger joint screen capture confirmation model, and the dynamic link library of the electronic device establishes a first transmission channel with the first finger joint screen capture confirmation model and a second transmission channel with the second finger joint screen capture confirmation model. When the first input event is located in the first detection area 921, the dynamic link library opens the first transmission channel to transmit three parameters, namely the capacitance data, the coordinate data, and the motion data of the first motion sensor 931, to the first finger joint screen capture confirmation model. When the first input event is located in the second detection area 922, the dynamic link library opens the second transmission channel to transmit three parameters, namely the capacitance data, the coordinate data, and the motion data of the second motion sensor 932, to the second finger joint screen capture confirmation model.
It should be noted that, when the first input event is located in different detection areas, the acquired data need to be input into different finger joint screen capture confirmation models, so that the electronic device can more accurately judge, through a finger joint screen capture confirmation model built on these three main parameters, whether a first input event in a given detection area is a finger joint screen capture.
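One way to picture this per-area dispatch is the sketch below, which reuses the payload and area constants from the earlier sketch. The KnuckleModel interface and its predict() signature are hypothetical stand-ins for the pre-trained neural network models; they are not APIs defined by this embodiment.

    interface KnuckleModel {
        /** Returns true if the feature vector is classified as a knuckle tap. */
        boolean predict(float[] features);
    }

    final class KnuckleDispatcher {
        private final KnuckleModel firstModel;  // trained on first-sensor data
        private final KnuckleModel secondModel; // trained on second-sensor data

        KnuckleDispatcher(KnuckleModel firstModel, KnuckleModel secondModel) {
            this.firstModel = firstModel;
            this.secondModel = secondModel;
        }

        boolean isKnuckleCapture(TouchEventPayload p, float[] motion1, float[] motion2) {
            int area = DetectionAreas.targetAreaOf(p);
            if (area == DetectionAreas.FIRST_AREA) {
                return firstModel.predict(features(p, motion1));
            }
            return secondModel.predict(features(p, motion2));
        }

        /** Flattens the capacitance matrix, coordinates, and motion data into one vector. */
        private static float[] features(TouchEventPayload p, float[] motion) {
            int n = p.capacitance.length * p.capacitance[0].length;
            float[] f = new float[n + 2 + motion.length];
            int i = 0;
            for (float[] row : p.capacitance) {
                for (float v : row) f[i++] = v;
            }
            f[i++] = p.x;
            f[i++] = p.y;
            for (float m : motion) f[i++] = m;
            return f;
        }
    }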
In one implementation, when the first input event is located in either the first detection area 921 or the second detection area 922, the electronic device inputs the capacitance data, the coordinate data, the motion data of the first motion sensor 931, and the motion data of the second motion sensor 932 of the first input event to the third finger joint screenshot confirmation model to obtain a first prediction result of the third finger joint screenshot confirmation model. Here, the third finger joint screen capture confirmation model in the present embodiment is a neural network model pre-trained by the capacitance data, the coordinate data, the motion data of the first motion sensor 931, and the motion data of the second motion sensor 932.
Fig. 13 is a diagram of another electronic device system connection structure according to an embodiment of the present application. As shown in fig. 13, in a specific implementation, the electronic device is provided with a third finger joint screen capture confirmation model, and the dynamic link library of the electronic device and the third finger joint screen capture confirmation model establish a third conveying channel. When the first input event is located in either the first detection area 921 or the second detection area 922, the dynamic link library opens the third transmission channel to transmit four parameters including the capacitance data, the coordinate data, the motion data of the first motion sensor 931, and the motion data of the second motion sensor 932 of the first input event to the third finger joint screen capture confirmation model.
It should be noted that, when the first input event is located in any detection area, the electronic device comprehensively determines the first input event through the two motion sensors, so that the electronic device can more accurately determine whether the first input event is a finger joint screen capture through a finger joint screen capture confirmation model constructed by four main parameters.
Step S905, determining whether the first input event is a finger joint screenshot according to the first prediction result.
In one implementation, the electronic device may determine whether the first input event is a finger joint screen capture based only on the first prediction result. If the first prediction result indicates that the first input event is a finger joint screen capture, the electronic device determines that the first input event is a finger joint screen capture; if the first prediction result indicates that it is not, the electronic device determines that the first input event is not a finger joint screen capture. In this way, the electronic device may determine whether the first input event is a finger joint screen capture according to the finger joint screen capture confirmation model alone, as illustrated in the embodiments of the present application.
Although the electronic device may determine whether the first input event is a finger joint screenshot more accurately through different finger joint screenshot confirmation models, in some scenarios, the electronic device needs to further improve the accuracy of the determination.
It should be noted that, since the actual value of the motion data is also affected by the distance and angle between the motion sensor and the first input event, the electronic device needs to further determine whether the first input event is a finger joint screenshot in combination with other information.
In one implementation, when the first prediction result indicates that the first input event is a finger joint screen capture, the electronic device further performs steps S9051 to S9052 shown in fig. 14:
in step S9051, a target determination area in which the first input event is located is determined from the plurality of determination areas.
The electronic device may set a plurality of determination areas on the first display screen 911, the determination areas being sequentially distributed along a direction away from the first motion sensor 931 and the second motion sensor 932, and each determination area corresponding to a determination threshold.
Fig. 15 is a schematic diagram of one arrangement of the determination areas according to an embodiment of the present application. As shown in fig. 15, the electronic device sets a plurality of circles at equal intervals centered on the first motion sensor 931, a determination area being formed between every two adjacent circles so that each determination area is ring-shaped. The electronic device likewise sets a plurality of circles at equal intervals centered on the second motion sensor 932, with a determination area formed between every two adjacent circles. With this arrangement, when the target determination area where the first input event is located is determined from the coordinate position, first input events at the same distance from the corresponding motion sensor fall within the same determination area, ensuring the consistency of the data used in the further judgment.
Fig. 16 is a schematic diagram of another arrangement of the determination areas according to an embodiment of the present application. As shown in fig. 16, the electronic device sets a plurality of squares at equal intervals centered on the first motion sensor 931, a determination area being formed between every two adjacent squares so that each determination area is a square ring. The electronic device likewise sets a plurality of squares at equal intervals centered on the second motion sensor 932, with a determination area formed between every two adjacent squares. Because the boundaries of this arrangement are clear-cut, the electronic device can determine the target determination area where the first input event is located more accurately.
Fig. 17 is a schematic diagram of yet another arrangement of the determination areas according to an embodiment of the present application. As shown in fig. 17, the electronic device divides the first detection area 921 laterally and uniformly into a plurality of rectangular areas, and likewise divides the second detection area 922 laterally and uniformly into a plurality of rectangular areas. This arrangement of the determination areas is simple, making it easier for the electronic device to determine the target determination area where the first input event is located.
Here, the arrangement of the determination areas includes, but is not limited to, the arrangements shown in the above embodiments; the embodiments of the present application do not limit this.
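Whatever the arrangement, determining the target determination area reduces to a small geometric computation. The sketch below illustrates it for the ring-shaped arrangement; the ring width and the sensor coordinates are assumed values, not parameters from this embodiment.

    final class RingRegions {
        static final float RING_WIDTH = 200f; // assumed spacing of adjacent circles, px

        /**
         * Maps a tap position to a ring-shaped determination area around a
         * motion sensor: 0 is the innermost ring, 1 the next, and so on.
         */
        static int ringIndexOf(float x, float y, float sensorX, float sensorY) {
            double distance = Math.hypot(x - sensorX, y - sensorY);
            return (int) (distance / RING_WIDTH);
        }
    }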
In step S9052, it is determined whether the first input event is a finger joint screen capture according to the motion data of the at least one motion sensor and a target determination threshold, wherein the target determination threshold is the determination threshold of the target determination area.
Fig. 18 is a schematic diagram of a decision scenario of a target determination area according to an embodiment of the present application. As shown in fig. 18, the electronic device may set a determination threshold in each target determination area. Taking the ring-shaped arrangement of determination areas as an example, if the electronic device determines from the plurality of determination areas that the determination area where the first input event is located is area P1 in the first detection area, the electronic device may acquire the target determination threshold M1 of the first motion sensor 931 corresponding to area P1. If the electronic device determines from the plurality of determination areas that the determination area where the first input event is located is area P2 in the second detection area, the electronic device may acquire the target determination threshold N1 of the second motion sensor 932 corresponding to area P2.
In one implementation, when the first input event is located in the first detection area 921, the electronic device may determine whether the motion data of the first motion sensor 931 is greater than or equal to the target determination threshold. If the motion data of the first motion sensor 931 is greater than or equal to the target determination threshold, the electronic device determines that the first input event is a finger joint screen capture. If the motion data of the first motion sensor 931 is less than the target determination threshold, the electronic device determines that the first input event is not a finger joint screen capture.
For example, the electronic device may set the corresponding target determination threshold according to the kind of motion data of the first motion sensor 931. When the motion data is acceleration data, the target determination threshold M1 may be set according to the acceleration data; when the motion data is data of a gyroscope or a geomagnetic sensor, the target determination threshold M1 may be set according to the gyroscope or geomagnetic sensor data. The motion data of the motion sensor further includes a signal gradient (Grad) parameter: because the motion sensor is mounted at a fixed position in the device, the tap signal generated by the first input event varies with its position on the first display screen 911. When the motion data is a signal gradient parameter, the target determination threshold M1 may be set according to the signal gradient parameter.
It should be noted that, since the motion data of the first motion sensor 931 includes a plurality of types, the electronic device may set a corresponding determination threshold in the target determination region according to each type of the motion data, and in the embodiment of the present application, the electronic device may determine the motion data according to one or more target determination thresholds.
Fig. 19 is a schematic diagram of the variation of the signal gradient parameter at different positions according to an embodiment of the present application. Taking the ring-shaped arrangement of determination areas as an example, when a user taps the first detection area 921 with the same tapping force: if the user taps in area P3, the signal gradient parameter of the electronic device in area P3 is as shown in fig. 19A; if the user taps in area P4, the signal gradient parameter in area P4 is as shown in fig. 19B; if the user taps in area P5, the signal gradient parameter in area P5 is as shown in fig. 19C. Because the signal gradient parameters at different positions differ under the same tapping force, the electronic device can judge the first input event more accurately by setting a corresponding determination threshold in each determination area.
As an example, as further shown in fig. 19, the signal gradient parameter of the electronic device in area P3 is 20800, in area P4 is 12264, and in area P5 is 6116. The determination threshold set by the electronic device for area P3 is 12000, for area P4 is 10000, and for area P5 is 5000. The electronic device can further judge the first input event according to the signal gradient parameter of each area and that area's determination threshold.
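These example numbers make the decision rule concrete: each area's measured signal gradient parameter is compared with that area's threshold, as in the short check below (values copied from the example above).

    public final class GradCheck {
        public static void main(String[] args) {
            int[] grad = {20800, 12264, 6116};      // P3, P4, P5 measured gradients
            int[] threshold = {12000, 10000, 5000}; // per-area determination thresholds
            for (int i = 0; i < grad.length; i++) {
                boolean isKnuckleTap = grad[i] >= threshold[i];
                System.out.println("P" + (i + 3) + ": " + isKnuckleTap); // all print true
            }
        }
    }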
In one implementation, when the first input event is located in the second detection area 922, the electronic device may determine whether the motion data of the second motion sensor 932 is greater than or equal to the target determination threshold. If the motion data of the second motion sensor 932 is greater than or equal to the target determination threshold, the electronic device determines that the first input event is a finger joint screen capture. If the motion data of the second motion sensor 932 is less than the target determination threshold, the electronic device determines that the first input event is not a finger joint screen capture.
It should be noted that, the specific distinguishing manner of the first input event in the second detection area 922 is similar to the specific distinguishing manner of the first input event in the first detection area 921, and the embodiments of the present application are not repeated here.
In one implementation, the electronic device may set the target determination threshold to a first sub-threshold and a second sub-threshold, wherein the electronic device determines whether the motion data of the first motion sensor 931 is greater than or equal to the first sub-threshold and determines whether the motion data of the second motion sensor 932 is greater than or equal to the second sub-threshold.
Wherein if the motion data of the first motion sensor 931 is greater than or equal to a first sub-threshold and the motion data of the second motion sensor 932 is greater than or equal to a second sub-threshold, the electronic device determines that the first input event is a finger joint screen capture; if the motion data of the first motion sensor 931 is less than a first sub-threshold and/or the motion data of the second motion sensor 932 is less than a second sub-threshold, the electronic device determines that the first input event is not a finger joint screen capture.
As further shown in fig. 18, the electronic device may set a first sub-threshold and a second sub-threshold in each target determination area. If the electronic device determines from the plurality of determination areas that the determination area where the first input event is located is area P1 in the first detection area 921, the electronic device may acquire the first sub-threshold M1 of the first motion sensor 931 corresponding to area P1 and the second sub-threshold M2 of the second motion sensor 932 corresponding to area P1. If the electronic device determines from the plurality of determination areas that the determination area where the first input event is located is area P2 in the second detection area 922, the electronic device may acquire the second sub-threshold N1 of the second motion sensor 932 corresponding to area P2 and the first sub-threshold N2 of the first motion sensor 931. The electronic device can thus judge the first input event jointly according to the two determination thresholds.
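The joint decision is then a simple conjunction, sketched below; the helper name and the way the sub-thresholds are passed in are assumptions for illustration.

    final class JointThreshold {
        /** Both sensors must meet their area's sub-thresholds for a knuckle verdict. */
        static boolean jointDecision(float motion1, float subThreshold1,
                                     float motion2, float subThreshold2) {
            return motion1 >= subThreshold1 && motion2 >= subThreshold2;
        }
    }

    // Example: in area P1 the sub-thresholds are M1 (first sensor) and M2
    // (second sensor), so the call would be
    // JointThreshold.jointDecision(motionData1, M1, motionData2, M2).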
The user typically does not tap the edges of the first display screen 911 with the knuckles when taking a knuckle screenshot. Operations performed by the user at the edge of the first display screen 911 typically include clicking virtual keys such as close and return. If the electronic device misrecognizes a click on a virtual key such as close or return at the edge of the first display screen 911 as a finger joint screen capture, the user experience is affected. Based on this, the electronic device sets at least one invalid input area 941 on the first display screen 911 and further judges whether the first input event is a finger joint screen capture by judging whether the first input event falls within the invalid input area 941.
FIG. 20 is a schematic diagram of an invalid input area according to an embodiment of the present application. As shown in fig. 20, the electronic device sets at least one invalid input area 941 on the first display screen, and when the first prediction result indicates that the first input event is a finger joint screen capture, the electronic device further judges whether the first input event is located in the invalid input area 941. If the first input event is located in the invalid input area 941, the electronic device determines that the first input event is not a finger joint screen capture.
By way of example, the four coordinates (80, 100), (80, 2240), (1000, 100), and (1000, 2240) define the boundary of the invalid input area 941 set by the electronic device on the first display screen 911: positions outside the rectangle enclosed by these four corner points belong to the invalid input area 941. When the coordinate data of the first input event is (1100, 2300), the electronic device may determine that the first input event is located in the invalid input area 941 and thus determine that the first input event is not a finger joint screen capture.
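Under this reading (the four corner points bounding the valid inner rectangle, with everything outside it belonging to the edge region 941), the check is a point-outside-rectangle test; the interpretation itself is an assumption based on the worked example above.

    final class EdgeGuard {
        // Interpreting (80, 100), (80, 2240), (1000, 100), (1000, 2240) as the
        // corners of the valid inner rectangle; taps outside it fall in the
        // invalid edge region 941.
        static boolean inInvalidInputArea(float x, float y) {
            return x < 80 || x > 1000 || y < 100 || y > 2240;
        }
    }

    // EdgeGuard.inInvalidInputArea(1100, 2300) == true, so the event at
    // (1100, 2300) is determined not to be a finger joint screen capture.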
According to the method provided by the embodiment of the application, the electronic equipment can accurately judge whether the user performs the finger joint screen capturing on the first display screen 911, so that the problems of low success rate of the finger joint screen capturing and high false touch rate are avoided, and the use experience of the user is improved.
When the foldable electronic device is in the unfolded state, the first display screen 911 faces the user; when the foldable electronic device is in the folded state, the first display screen 911 faces away from the user. In order to conform to the user's usage habits, the electronic device is therefore provided with a second display screen 912 on the side of the first body 101 facing away from the first display screen 911, and the electronic device further needs to apply the screen capturing method provided by the embodiment of the present application to the second display screen 912.
FIG. 21 is another exemplary flowchart of a screen capture method provided by an embodiment of the present application.
As shown in fig. 21, the method may further include the steps of:
in step S906, a second input event of the user to the second display 912 is acquired, and the second display 912 corresponds to the first motion sensor 931.
The second display screen 912 may be a touch screen laid over the first body 101, and the second display screen 912 shares the first motion sensor 931 with the first display screen 911. When the user clicks/taps the second display screen 912 with a fingertip, a finger pad, a finger joint, or a stylus, the electronic device can detect a second input event occurring on the second display screen 912.
After the electronic device detects the second input event occurring on the second display 912, the second input event is obtained via the input subsystem.
In one implementation, the electronic device determines whether the second display 912 is in a bright screen state; responding to a second input event if the second display 912 is in a bright screen state; if the second display 912 is not in the bright screen state, it does not respond to the second input event.
Step S907, in response to the second input event, acquiring motion data of the first motion sensor 931;
step S908, inputting the capacitance data, the coordinate data and the motion data of the first motion sensor 931 of the second input event to the finger joint screen capture confirmation model to obtain a second prediction result of the finger joint screen capture confirmation model;
step S909, judging whether the second input event is a finger joint screen capture according to the second prediction result.
In one implementation, the second display screen 912 includes a plurality of determination areas sequentially distributed along a direction away from the first motion sensor 931, each determination area corresponding to a determination threshold. Judging whether the second input event is a finger joint screen capture according to the second prediction result includes: judging whether the motion data of the first motion sensor 931 is greater than or equal to a target determination threshold, the target determination threshold being the determination threshold of the target determination area; if the motion data of the first motion sensor 931 is greater than or equal to the target determination threshold, determining that the second input event is a finger joint screen capture; if the motion data of the first motion sensor 931 is less than the target determination threshold, determining that the second input event is not a finger joint screen capture.
In one implementation, the second display screen 912 includes at least one invalid input area. Judging whether the second input event is a finger joint screen capture according to the second prediction result includes: if the second prediction result is a target result, judging whether the second input event is located in the invalid input area, wherein the target result includes the second input event being a finger joint screen capture; if the second input event is located in the invalid input area, determining that the second input event is not a finger joint screen capture. The specific implementation of this embodiment is similar to that for the first display screen 911; for content not specifically developed here, reference may be made to the related embodiments of the first display screen 911, which are not repeated.
According to the method provided by the embodiment of the application, the electronic equipment can accurately judge whether the user performs the finger joint screen capturing on the second display screen 912, so that the problems of low success rate of the finger joint screen capturing and high false touch rate are avoided, and the use experience of the user is improved.
In the embodiment provided by the application, the schemes of the screen capturing method provided by the application are introduced from the aspects of the electronic equipment and the interaction between the electronic equipment and a user. It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Fig. 22 is a schematic structural diagram of a screen capturing device according to an embodiment of the present application.
In some embodiments, the electronic device may implement the corresponding functions by the hardware apparatus shown in fig. 22. As shown in fig. 22, the screen capture apparatus may include: a memory 2201 and a processor 2202.
In one implementation, the processor 2202 may include one or more processing units, e.g., the processor 2202 may include an application processor, a controller, a video codec, a digital signal processor, and/or a neural network processor, etc., where the different processing units may be separate devices or may be integrated in one or more processors. A memory 2201 is coupled to the processor 2202 for storing various software programs and/or sets of instructions. In some embodiments, memory 2201 may include volatile memory and/or nonvolatile memory.
In some embodiments, the software programs and/or sets of instructions in the memory 2201, when executed by the processor 2202, cause the electronic device to perform the following method steps: acquiring a first input event of a user to a first display screen, wherein the first display screen comprises a plurality of detection areas, and each detection area comprises at least one motion sensor; determining, in response to the first input event, a target area in which the first input event is located from among the plurality of detection areas; acquiring motion data of a target sensor, wherein the target sensor comprises at least a motion sensor in the target area; inputting the capacitance data, the coordinate data, and the motion data of the target sensor of the first input event into the finger joint screen capture confirmation model to obtain a first prediction result of the finger joint screen capture confirmation model; and judging whether the first input event is a finger joint screen capture according to the first prediction result. In this way, the electronic device can determine the target area where the first input event is located and obtain the first prediction result through the capacitance data, the coordinate data, and the motion data of the target sensor of the first input event. Therefore, the electronic device can accurately judge through the first prediction result that the user is performing a finger joint screen capture, thereby avoiding the problems of a low success rate of finger joint screen capture and a high false-touch rate, and improving the user experience.
In some embodiments, the software programs and/or sets of instructions in the memory 2201, when executed by the processor 2202, cause the electronic device to perform the following method steps: judging whether the first display screen is in a bright-screen state; responding to the first input event if the first display screen is in the bright-screen state; and not responding to the first input event if the first display screen is not in the bright-screen state. By adopting this embodiment, the electronic device responds to the first input event only when the first display screen is in the bright-screen state, thereby avoiding responses to the user's accidental touches.
In some embodiments, the software programs and/or sets of instructions in the memory 2201, when executed by the processor 2202, cause the electronic device to perform the following method steps: if the target area is a first detection area, acquiring motion data of a first motion sensor, wherein the first motion sensor is a motion sensor in the first detection area; and if the target area is a second detection area, acquiring motion data of a second motion sensor, wherein the second motion sensor is a motion sensor in the second detection area. By adopting this embodiment, when the first input event is located in different target areas, the electronic device can detect the first input event through different target sensors so as to acquire more accurate motion data for input into the finger joint screen capture confirmation model, which facilitates accurately judging whether the user is performing a finger joint screen capture.
In some embodiments, the software programs and/or sets of instructions in the memory 2201, when executed by the processor 2202, cause the electronic device to perform the following method steps: acquiring motion data of a first motion sensor and motion data of a second motion sensor, wherein the first motion sensor is a motion sensor in a first detection area and the second motion sensor is a motion sensor in a second detection area. By adopting this embodiment, the electronic device can detect the first input event through the two motion sensors so as to acquire more accurate motion data for input into the finger joint screen capture confirmation model, enabling the electronic device to accurately judge whether the user is performing a finger joint screen capture.
In some embodiments, the first display screen includes a plurality of determination areas, the plurality of determination areas being distributed sequentially along a direction away from the first motion sensor and the second motion sensor, each determination area corresponding to a determination threshold. The software programs and/or sets of instructions in the memory 2201, when executed by the processor 2202, cause the electronic device to perform the following method steps: if the first prediction result is a target result, determining a target determination area where the first input event is located from the plurality of determination areas, wherein the target result includes the first input event being a finger joint screen capture; and judging whether the first input event is a finger joint screen capture according to the motion data of at least one motion sensor and a target determination threshold, wherein the target determination threshold is the determination threshold of the target determination area. Because the motion data acquired by the motion sensor differ when finger joint screen captures with the same tapping force are performed in different determination areas, setting different determination thresholds in different determination areas enables the electronic device to accurately distinguish whether the first input event is a finger joint screen capture.
In some embodiments, the software programs and/or sets of instructions in the memory 2201, when executed by the processor 2202, cause the electronic device to perform the method steps of: if the target area is the first detection area, judging whether the motion data of the first motion sensor is larger than or equal to a target judgment threshold value; if the motion data of the first motion sensor is greater than or equal to the target determination threshold, determining that the first input event is a finger joint screen capture; if the motion data of the first motion sensor is less than the target determination threshold, it is determined that the first input event is not a finger joint screen capture. By adopting the embodiment, the electronic equipment can accurately distinguish whether the first input event of the first detection area is the finger joint screen capture through the first motion sensor.
In some embodiments, the software programs and/or sets of instructions in the memory 2201, when executed by the processor 2202, cause the electronic device to perform the method steps of: if the target area is the second detection area, judging whether the motion data of the second motion sensor is larger than or equal to a target judgment threshold value; if the motion data of the second motion sensor is greater than or equal to the target judgment threshold, determining that the first input event is a finger joint screen capture; if the motion data of the second motion sensor is less than the target determination threshold, it is determined that the first input event is not a finger joint screen capture. By adopting the embodiment, the electronic equipment can accurately distinguish whether the first input event of the second detection area is the finger joint screen capture through the second motion sensor.
In some embodiments, the software programs and/or sets of instructions in the memory 2201, when executed by the processor 2202, cause the electronic device to perform the method steps of: judging whether the motion data of the first motion sensor is larger than or equal to a first sub-threshold value, and judging whether the motion data of the second motion sensor is larger than or equal to a second sub-threshold value; if the motion data of the first motion sensor is greater than or equal to a first sub-threshold and the motion data of the second motion sensor is greater than or equal to a second sub-threshold, determining that the first input event is a finger joint screen capture; if the motion data of the first motion sensor is less than the first sub-threshold and/or the motion data of the second motion sensor is less than the second sub-threshold, it is determined that the first input event is not a finger joint screen capture. By adopting the embodiment, the electronic equipment can accurately distinguish whether the first input event is the finger joint screen capture or not through the first motion sensor and the second motion sensor.
In some embodiments, the first display screen includes at least one invalid input area; the software programs and/or sets of instructions in the memory 2201, when executed by the processor 2202, cause the electronic device to perform the following method steps: if the first prediction result is a target result, judging whether the first input event is located in the invalid input area, wherein the target result includes the first input event being a finger joint screen capture; if the first input event is located in the invalid input area, determining that the first input event is not a finger joint screen capture. By adopting this embodiment, through judging whether the first input event is located in the invalid input area, the electronic device can avoid misrecognizing an accidental touch at the edge of the first display screen as a finger joint screen capture.
In some embodiments, the software programs and/or sets of instructions in the memory 2201, when executed by the processor 2202, cause the electronic device to perform the following method steps: acquiring a second input event of a user to a second display screen, wherein the second display screen corresponds to the first motion sensor; acquiring motion data of the first motion sensor in response to the second input event; inputting the capacitance data, the coordinate data, and the motion data of the first motion sensor of the second input event into the finger joint screen capture confirmation model to obtain a second prediction result of the finger joint screen capture confirmation model; and judging whether the second input event is a finger joint screen capture according to the second prediction result. By adopting this embodiment, the electronic device can obtain the second prediction result through the capacitance data, the coordinate data, and the motion data of the first motion sensor of the second input event. Therefore, the electronic device can accurately judge through the second prediction result that the user is performing a finger joint screen capture on the second display screen, thereby avoiding the problems of a low success rate of finger joint screen capture and a high false-touch rate, and improving the user experience.
In some embodiments, the software programs and/or sets of instructions in the memory 2201, when executed by the processor 2202, cause the electronic device to perform the following method steps: judging whether the second display screen is in a bright-screen state; responding to the second input event if the second display screen is in the bright-screen state; and not responding to the second input event if the second display screen is not in the bright-screen state. By adopting this embodiment, the electronic device responds to the second input event only when the second display screen is in the bright-screen state, thereby avoiding responses to the user's accidental touches.
In some embodiments, the second display screen includes a plurality of determination areas sequentially distributed along a direction away from the first motion sensor, each determination area corresponding to a determination threshold. The software programs and/or sets of instructions in the memory 2201, when executed by the processor 2202, cause the electronic device to perform the following method steps: judging whether the motion data of the first motion sensor is greater than or equal to a target determination threshold, wherein the target determination threshold is the determination threshold of the target determination area; if the motion data of the first motion sensor is greater than or equal to the target determination threshold, determining that the second input event is a finger joint screen capture; if the motion data of the first motion sensor is less than the target determination threshold, determining that the second input event is not a finger joint screen capture. By adopting this embodiment, by setting different determination thresholds in different determination areas, the electronic device can more accurately distinguish whether the second input event is a finger joint screen capture.
In some embodiments, the second display screen includes at least one invalid input area; the software programs and/or sets of instructions in the memory 2201, when executed by the processor 2202, cause the electronic device to perform the following method steps: if the second prediction result is a target result, judging whether the second input event is located in the invalid input area, wherein the target result includes the second input event being a finger joint screen capture; if the second input event is located in the invalid input area, determining that the second input event is not a finger joint screen capture. By adopting this embodiment, through judging whether the second input event is located in the invalid input area, the electronic device can avoid misrecognizing an accidental touch at the edge of the second display screen as a finger joint screen capture.
The application also provides a chip system. The chip system comprises a processor configured to support the above apparatus or device in implementing the functions involved in the above aspects, for example, generating or processing the information involved in the above methods. In one possible design, the chip system further includes a memory for storing the program instructions and data necessary for the above apparatus or device. The chip system may consist of chips, or may include chips together with other discrete devices.
Embodiments of the present application also provide a computer-readable storage medium having stored therein program instructions that, when executed on a computer, cause the computer to perform the methods of the above aspects and implementations thereof.
Embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to perform the methods of the above aspects and implementations thereof.
The foregoing detailed description of the application is provided by way of illustration only and is not intended to limit the scope of the application.

Claims (16)

1. A method of screen capturing, comprising:
acquiring a first input event of a user on a first display screen, wherein the first display screen is a folding screen, the first display screen comprises a plurality of detection areas, each detection area comprises at least one motion sensor, and each detection area covers one body (housing) of the device;
determining, in response to the first input event, a target area where the first input event is located from the plurality of detection areas;
acquiring motion data of a target sensor, wherein the target sensor comprises at least a motion sensor in the target area;
inputting the capacitance data and coordinate data of the first input event and the motion data of the target sensor into a finger joint screen capture confirmation model to obtain a first prediction result of the finger joint screen capture confirmation model;
and judging whether the first input event is a finger joint screen capture according to the first prediction result.
2. The screen capture method of claim 1, wherein before the determining, in response to the first input event, the target area where the first input event is located from the plurality of detection areas, the method further comprises:
judging whether the first display screen is in a bright screen state;
responding to the first input event if the first display screen is in the bright screen state;
and if the first display screen is not in the bright screen state, not responding to the first input event.
3. The screen capture method of claim 2, wherein the first display screen comprises a first detection area and a second detection area, and the acquiring motion data of a target sensor, the target sensor comprising at least a motion sensor in the target area, comprises:
if the target area is the first detection area, acquiring motion data of a first motion sensor, wherein the first motion sensor is a motion sensor in the first detection area;
and if the target area is the second detection area, acquiring motion data of a second motion sensor, wherein the second motion sensor is a motion sensor in the second detection area.
4. The screen capture method of claim 2, wherein the first display screen comprises a first detection area and a second detection area, and the acquiring motion data of a target sensor, the target sensor comprising at least a motion sensor in the target area, comprises:
acquiring motion data of a first motion sensor and motion data of a second motion sensor;
wherein the first motion sensor is a motion sensor in the first detection region and the second motion sensor is a motion sensor in the second detection region.
5. The screen capture method of claim 3 or 4, wherein the first display screen comprises a plurality of determination regions, the plurality of determination regions being sequentially distributed along a direction away from the first motion sensor and the second motion sensor, and each determination region corresponds to a determination threshold; and the judging whether the first input event is a finger joint screen capture according to the first prediction result comprises:
if the first prediction result is a target result, determining, from the plurality of determination regions, a target determination region in which the first input event is located, wherein the target result comprises that the first input event is a finger joint screen capture;
and judging whether the first input event is a finger joint screen capture according to the motion data of at least one motion sensor and a target determination threshold, wherein the target determination threshold is the determination threshold of the target determination region.
6. The screen capture method of claim 5, wherein the judging whether the first input event is a finger joint screen capture according to the motion data of at least one motion sensor and the target determination threshold comprises:
if the target area is the first detection area, judging whether the motion data of the first motion sensor is greater than or equal to the target determination threshold;
if the motion data of the first motion sensor is greater than or equal to the target determination threshold, determining that the first input event is the finger joint screen capture;
and if the motion data of the first motion sensor is less than the target determination threshold, determining that the first input event is not the finger joint screen capture.
7. The screen capture method of claim 5, wherein the judging whether the first input event is a finger joint screen capture according to the motion data of at least one motion sensor and the target determination threshold comprises:
if the target area is the second detection area, judging whether the motion data of the second motion sensor is greater than or equal to the target determination threshold;
if the motion data of the second motion sensor is greater than or equal to the target determination threshold, determining that the first input event is the finger joint screen capture;
and if the motion data of the second motion sensor is less than the target determination threshold, determining that the first input event is not the finger joint screen capture.
8. The screen capture method of claim 5, wherein the target determination threshold comprises a first sub-threshold and a second sub-threshold, and the judging whether the first input event is a finger joint screen capture according to the motion data of at least one motion sensor and the target determination threshold comprises:
judging whether the motion data of the first motion sensor is greater than or equal to the first sub-threshold, and whether the motion data of the second motion sensor is greater than or equal to the second sub-threshold;
if the motion data of the first motion sensor is greater than or equal to the first sub-threshold and the motion data of the second motion sensor is greater than or equal to the second sub-threshold, determining that the first input event is the finger joint screen capture;
and if the motion data of the first motion sensor is less than the first sub-threshold and/or the motion data of the second motion sensor is less than the second sub-threshold, determining that the first input event is not the finger joint screen capture.
9. The screen capture method of any of claims 6-8, wherein the first display screen comprises at least one invalid input area, and the judging whether the first input event is a finger joint screen capture according to the first prediction result comprises:
if the first prediction result is a target result, judging whether the first input event is located in the invalid input area, wherein the target result comprises that the first input event is a finger joint screen capture;
and if the first input event is located in the invalid input area, determining that the first input event is not the finger joint screen capture.
10. The screen capture method of claim 3, further comprising:
acquiring a second input event of a user on a second display screen, wherein the second display screen corresponds to the first motion sensor;
acquiring motion data of the first motion sensor in response to the second input event;
inputting the capacitance data and coordinate data of the second input event and the motion data of the first motion sensor into the finger joint screen capture confirmation model to obtain a second prediction result of the finger joint screen capture confirmation model;
and judging whether the second input event is the finger joint screen capture according to the second prediction result.
11. The screen capture method of claim 10, wherein before the acquiring motion data of the first motion sensor in response to the second input event, the method further comprises:
judging whether the second display screen is in a bright screen state or not;
responding to the second input event if the second display screen is in the bright screen state;
and if the second display screen is not in the bright screen state, not responding to the second input event.
12. The screen capture method of claim 11, wherein the second display screen comprises a plurality of determination regions, the plurality of determination regions being sequentially distributed along a direction away from the first motion sensor, and each determination region corresponds to a determination threshold; and the judging whether the second input event is the finger joint screen capture according to the second prediction result comprises:
judging whether the motion data of the first motion sensor is greater than or equal to a target determination threshold, wherein the target determination threshold is the determination threshold of a target determination region;
if the motion data of the first motion sensor is greater than or equal to the target determination threshold, determining that the second input event is the finger joint screen capture;
and if the motion data of the first motion sensor is less than the target determination threshold, determining that the second input event is not the finger joint screen capture.
13. The screen capture method of any of claims 10-12, wherein the second display screen comprises at least one invalid input area, and the judging whether the second input event is the finger joint screen capture according to the second prediction result comprises:
if the second prediction result is a target result, judging whether the second input event is located in the invalid input area, wherein the target result comprises that the second input event is a finger joint screen capture;
and if the second input event is located in the invalid input area, determining that the second input event is not the finger joint screen capture.
14. The screen capture method of claim 1, wherein the motion sensor comprises one or more of: an acceleration sensor, a gyroscope, and a geomagnetic sensor.
15. The screen capture method of any of claims 6, 7, 8, and 12, wherein the shape of the determination region comprises one or more of: a circular ring, a square ring, and a rectangle.
16. An electronic device, comprising: a processor and a memory; the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the screen capture method of any of claims 1-15.
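Purely as an illustration outside the claims, the sketch below mirrors the flow of claim 1 (locating the target detection area and collecting motion data from its motion sensor) and the dual sub-threshold check of claim 8; the area boundaries, the sensor dictionary, and all numeric values are assumptions, not claim text:

```python
from typing import Dict, Sequence, Tuple

# Two detection areas of the folding screen, given as assumed x-coordinate ranges.
DETECTION_AREAS: Dict[str, Tuple[int, int]] = {"first": (0, 540), "second": (540, 1080)}

def locate_target_area(x: float) -> str:
    """Determine the target area in which the first input event is located."""
    for name, (lo, hi) in DETECTION_AREAS.items():
        if lo <= x < hi:
            return name
    return "second"  # fall back to the outermost area

def collect_target_motion(area: str,
                          sensors: Dict[str, Sequence[float]]) -> Sequence[float]:
    """The target sensor includes at least the motion sensor of the target area."""
    return sensors[area]

def dual_sub_threshold_check(motion_first: float, motion_second: float,
                             first_sub: float, second_sub: float) -> bool:
    """Claim-8-style check: both sub-thresholds must be reached simultaneously."""
    return motion_first >= first_sub and motion_second >= second_sub

if __name__ == "__main__":
    area = locate_target_area(300.0)                      # -> "first"
    data = collect_target_motion(area, {"first": [2.1], "second": [0.3]})
    print(area, data, dual_sub_threshold_check(2.1, 0.9, 1.5, 0.5))
```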
CN202210806250.9A 2022-07-08 2022-07-08 Screen capturing method and electronic equipment Active CN116048350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210806250.9A CN116048350B (en) 2022-07-08 2022-07-08 Screen capturing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN116048350A (en) 2023-05-02
CN116048350B (en) 2023-09-08

Family

ID=86114042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210806250.9A Active CN116048350B (en) 2022-07-08 2022-07-08 Screen capturing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN116048350B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105320436A (en) * 2015-07-07 2016-02-10 崔景城 Method for triggering screen capturing by tapping screen with finger joint
CN108089808A (en) * 2017-11-29 2018-05-29 努比亚技术有限公司 A kind of screen-picture acquisition methods, terminal and computer readable storage medium
CN109358793A (en) * 2018-09-27 2019-02-19 维沃移动通信有限公司 A kind of screenshotss method and mobile terminal
CN109857306A (en) * 2018-12-27 2019-06-07 维沃移动通信有限公司 Screenshotss method and terminal device
CN110308855A (en) * 2019-06-28 2019-10-08 华为技术有限公司 A kind of interactive operation method and device based on collapsible terminal
CN110413167A (en) * 2019-07-19 2019-11-05 珠海格力电器股份有限公司 A kind of the screenshotss method and terminal device of terminal device
CN110597580A (en) * 2019-07-23 2019-12-20 珠海格力电器股份有限公司 Screen capturing method and device
CN110727489A (en) * 2019-09-16 2020-01-24 咪咕文化科技有限公司 Screenshot image generation method, electronic device and computer-readable storage medium
WO2021018274A1 (en) * 2019-07-31 2021-02-04 华为技术有限公司 Screen projection method and electronic device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473012A (en) * 2013-09-09 2013-12-25 华为技术有限公司 Screen capturing method, device and terminal equipment
CN104978117B (en) * 2014-04-11 2018-11-09 阿里巴巴集团控股有限公司 A kind of method and apparatus for realizing screenshotss
CN105653281B (en) * 2015-12-29 2019-09-17 青岛海信移动通信技术股份有限公司 A kind of method and apparatus carrying out screenshotss in a mobile device
US10783320B2 (en) * 2017-05-16 2020-09-22 Apple Inc. Device, method, and graphical user interface for editing screenshot images
CN108205412B (en) * 2017-11-09 2019-10-11 中兴通讯股份有限公司 A kind of method and apparatus for realizing screenshotss
CN113504866A (en) * 2019-02-22 2021-10-15 华为技术有限公司 Screen control method, electronic device and storage medium
CN110597439B (en) * 2019-08-29 2021-06-15 Oppo广东移动通信有限公司 Screen capture method and device, electronic equipment and computer readable medium
US11188359B2 (en) * 2019-09-20 2021-11-30 Samsung Electronics Co., Ltd. Electronic device and screen capturing method thereof


Similar Documents

Publication Publication Date Title
CN115866121B (en) Application interface interaction method, electronic device and computer readable storage medium
CN113115439B (en) Positioning method and related equipment
EP3819174A1 (en) Business processing method and device
CN113728295B (en) Screen control method, device, equipment and storage medium
CN113641271B (en) Application window management method, terminal device and computer readable storage medium
CN112087649B (en) Equipment searching method and electronic equipment
CN116048358B (en) Method and related device for controlling suspension ball
CN116156417A (en) Equipment positioning method and related equipment thereof
CN116723257A (en) Image display method and electronic equipment
CN114201738B (en) Unlocking method and electronic equipment
CN115914461B (en) Position relation identification method and electronic equipment
CN114172596B (en) Channel noise detection method and related device
CN116048350B (en) Screen capturing method and electronic equipment
CN114812381B (en) Positioning method of electronic equipment and electronic equipment
CN117009005A (en) Display method, automobile and electronic equipment
CN115032640A (en) Gesture recognition method and terminal equipment
CN116232959B (en) Network quality detection method and device
CN116054298B (en) Charging method and electronic equipment
CN115880198B (en) Image processing method and device
CN115175164B (en) Communication control method and related device
CN116095223B (en) Notification display method and terminal device
CN116339510B (en) Eye movement tracking method, eye movement tracking device, electronic equipment and computer readable storage medium
CN115016666B (en) Touch processing method, terminal equipment and storage medium
CN114764300B (en) Window page interaction method and device, electronic equipment and readable storage medium
CN116709023B (en) Video processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant