CN113625865B - Screen state control method and electronic equipment

Screen state control method and electronic equipment

Info

Publication number
CN113625865B
Authority
CN
China
Prior art keywords
touch screen
user
sensing area
preset
opening
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010379242.1A
Other languages
Chinese (zh)
Other versions
CN113625865A (en)
Inventor
涂永峰
付滇
胡燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010379242.1A priority Critical patent/CN113625865B/en
Priority to PCT/CN2021/085662 priority patent/WO2021223560A1/en
Publication of CN113625865A publication Critical patent/CN113625865A/en
Application granted granted Critical
Publication of CN113625865B publication Critical patent/CN113625865B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0412 Digitisers structurally integrated in a display
    • G06F3/0414 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means, using force sensing means to determine a position
    • G06F3/0416 Control or interface arrangements specially adapted for digitisers
    • G06F3/044 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means, by capacitive means
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract

An embodiment of the present application provides a screen state control method and an electronic device. In the method, when the opening and closing state of a foldable touch screen starts to change and a user operation is detected in a preset sensing area, the touch screen is set to a disabled state, in which the touch screen does not respond to the user's touch operations. The disabled state of the touch screen ends when the effective time of the operation ends or the change of the opening and closing state terminates, the effective time of the operation being the duration of the disabled state preset for that operation. In this way, false touch operations on the touch screen can be reduced when the user opens or closes the touch screen while it is in an activated state.

Description

Screen state control method and electronic equipment
Technical Field
The present application relates to the technical field of intelligent terminals, and in particular to a screen state control method and an electronic device.
Background
To give users a better visual experience, folding screen technology has already seen initial adoption. However, as touch screens keep growing in size, bezels keep shrinking. When a user opens or closes the touch screen of an electronic device with a foldable touch screen, such as a mobile phone or a tablet computer, and the touch screen is in an activated state, the contact area between the user's hand and the touch screen is large and contact is frequent, so false touch operations are easily triggered on the touch screen.
Disclosure of Invention
The present application provides a screen state control method and an electronic device, which can reduce false touch operations on a touch screen when the user opens or closes the touch screen while it is in an activated state.
In a first aspect, the present application provides a screen state control method, including:
when the opening and closing state of the foldable touch screen starts to change and a user operation is detected in a preset sensing area, setting the touch screen to a disabled state, the disabled state being a state in which the touch screen does not respond to the user's touch operations;
and ending the disabled state of the touch screen when the effective time of the operation ends or the change of the opening and closing state terminates, where the effective time of the operation is the duration of the disabled state preset for that operation.
The user's operation includes, but is not limited to: a press, a single-finger touch, a two-finger touch, a double tap, a press held for a preset time, a single-finger touch held for a preset time, a two-finger touch held for a preset time, and the like.
With this screen state control method, false touch operations on the touch screen can be reduced when the user opens or closes the touch screen while it is in an activated state.
The method can be applied to an electronic device with a foldable touch screen, where the electronic device may include a mobile terminal (mobile phone), a wearable device, a smart screen, an unmanned aerial vehicle, an intelligent connected vehicle (ICV), a smart car, or an in-vehicle device.
In one possible implementation, the preset sensing area includes: a physical sensing area, and/or a virtual sensing area. The physical sensing area is a physical area, provided outside the touch screen, that can detect a user operation; the virtual sensing area is an area on the touch screen. The physical sensing area may be provided on a side of the electronic device, and/or on its upper edge, and/or on its lower edge, and/or on its back, and/or on its front, and so on. In one possible implementation, the physical sensing area is preferably placed in an area that matches the user's usage or holding habits. In one possible implementation, to make use more convenient for the user, the virtual sensing area may be placed at an edge of the touch screen. It should be noted that the virtual sensing area may be hidden in the normal state and not perceived by the user while the touch screen displays normally; only when the opening and closing state of the touch screen is changing does the virtual sensing area perform its sensing function and detect whether the user performs the preset operation in it.
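Purely as an illustration of the distinction above, the two kinds of sensing area could be modeled as in the following minimal Kotlin sketch; the types, names, and coordinates are assumptions of this example, not part of the claimed method.

    // Sketch only: one type per kind of preset sensing area.
    sealed class SensingArea {
        // Physical area outside the touch screen, e.g. a button or touch strip
        // on a side, the upper or lower edge, the back, or the front bezel.
        data class Physical(val location: String) : SensingArea()

        // Rectangular region on the touch screen itself; hidden during normal
        // display and sensed only while the opening and closing state changes.
        data class Virtual(val left: Int, val top: Int,
                           val right: Int, val bottom: Int) : SensingArea()
    }

    // Example presets (coordinates and location are arbitrary assumptions).
    val presetAreas: List<SensingArea> = listOf(
        SensingArea.Physical(location = "upper side"),
        SensingArea.Virtual(left = 1000, top = 400, right = 1080, bottom = 800),
    )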
In one possible implementation, the preset sensing area includes a virtual sensing area, and after the opening and closing state of the foldable touch screen starts to change, the method further includes:
acquiring preset virtual sensing areas, and acquiring the display scene on the touch screen at the time the opening and closing state of the touch screen starts to change;
accordingly, detecting the user operation in the preset sensing area includes:
selecting, from the acquired virtual sensing areas, a first virtual sensing area that does not overlap any user interface (UI) control in the display scene;
and detecting the user's operation in the first virtual sensing area.
This implementation separates the virtual sensing area from the UI controls, so the electronic device can tell whether a user operation targets the virtual sensing area or a UI control, as sketched below.
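One way to realize "a first virtual sensing area that does not overlap any UI control" is a plain rectangle-intersection test over the candidate areas, as in the following Kotlin sketch; the Rect type and the function name are assumptions made for the example.

    data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int) {
        // Two rectangles overlap unless one lies entirely beside or above the other.
        fun intersects(other: Rect): Boolean =
            left < other.right && other.left < right &&
            top < other.bottom && other.top < bottom
    }

    // Returns the first preset virtual sensing area that collides with none of
    // the UI controls in the current display scene, or null if they all collide.
    fun selectFirstVirtualSensingArea(
        presetAreas: List<Rect>,
        uiControlBounds: List<Rect>,
    ): Rect? = presetAreas.firstOrNull { area ->
        uiControlBounds.none { control -> area.intersects(control) }
    }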
In one possible implementation, the method for presetting the virtual sensing area includes:
displaying a drawing interface to the user;
and acquiring a region drawn by the user in the drawing interface, and setting that region as the virtual sensing area.
In one possible implementation, presenting the drawing interface to the user includes:
sampling the user's touch operations on the touch screen while the opening and closing state of the touch screen is changing;
forming a heat map or a region diagram from the sampled touch operations;
and presenting the heat map or region diagram to the user when the drawing interface is presented. In one possible implementation, the heat map or region diagram may be presented to the user as a background of the drawing interface.
In one possible implementation, the method for presetting the virtual sensing area includes:
dividing the touch screen into a plurality of sub-regions;
sampling the user's touch operations on the touch screen while the opening and closing state of the touch screen is changing, and counting the number of times each of the divided sub-regions is touched by the touch operations;
and selecting a preset number of sub-regions as virtual sensing areas in descending order of the number of times each sub-region is touched, as in the sketch following this list.
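The selection just described amounts to keeping the top-N most-touched cells of a grid. The following Kotlin sketch shows one hypothetical realization; the grid shape, the class name, and the choice of a uniform grid are all assumptions of the example.

    class SubRegionCounter(private val rows: Int, private val cols: Int,
                           private val screenWidth: Int, private val screenHeight: Int) {
        private val touchCounts = IntArray(rows * cols)

        // Record one sampled touch that occurred while the fold state was changing.
        fun recordTouch(x: Int, y: Int) {
            val row = (y * rows / screenHeight).coerceIn(0, rows - 1)
            val col = (x * cols / screenWidth).coerceIn(0, cols - 1)
            touchCounts[row * cols + col]++
        }

        // Indices of the n most-touched sub-regions, in descending touch order.
        fun topRegions(n: Int): List<Int> =
            touchCounts.withIndex()
                .sortedByDescending { it.value }
                .take(n)
                .map { it.index }
    }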
In one possible implementation, the preset sensing area includes a virtual sensing area, and after the opening and closing state of the foldable touch screen starts to change, the method further includes:
acquiring the display scene on the touch screen at the time the opening and closing state of the touch screen starts to change, and acquiring the virtual sensing area corresponding to that display scene, where the virtual sensing area corresponding to each display scene is preset;
accordingly, detecting the user operation in the preset sensing area includes:
detecting the user's operation in the virtual sensing area corresponding to the display scene.
This implementation separates the virtual sensing area from the UI controls in the display scene, so the electronic device can tell whether a user operation targets the virtual sensing area or a UI control. A per-scene lookup is sketched below.
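Since the area for each scene is fixed in advance, the lookup at fold time can be a simple table. The scene identifiers and coordinates in this Kotlin sketch are invented for illustration.

    data class Region(val left: Int, val top: Int, val right: Int, val bottom: Int)

    // Configured in advance: each display scene maps to virtual sensing areas
    // known not to overlap that scene's UI controls.
    val sceneToSensingAreas: Map<String, List<Region>> = mapOf(
        "home" to listOf(Region(980, 300, 1080, 700)),
        "videoPlayer" to listOf(Region(980, 0, 1080, 100),
                                Region(980, 2200, 1080, 2300)),
    )

    // Looked up when the opening and closing state starts to change.
    fun sensingAreasFor(scene: String): List<Region> =
        sceneToSensingAreas[scene] ?: emptyList()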
In one possible implementation, after setting the touch screen to the disabled state, the method further includes:
displaying a preset user interface (UI) effect on the touch screen, where the preset UI effect is used to prompt the user that the touch screen is in the disabled state.
In one possible implementation, after setting the touch screen to the disabled state, the method further includes:
keeping the UI effect displayed on the touch screen for a preset time period after the disabled state of the touch screen ends.
In one possible implementation, the method further includes:
during the change of the opening and closing state, if it is detected that the opening and closing state of the touch screen changes in the direction opposite to the current change direction, ending the process that was triggered by the user during the current change.
In a second aspect, the present application provides an electronic device, including:
a touch screen; one or more processors; a memory; and one or more computer programs, where the one or more computer programs are stored in the memory and include instructions that, when executed by the device, cause the device to perform the following steps:
when the opening and closing state of the foldable touch screen starts to change and a user operation is detected in a preset sensing area, setting the touch screen to a disabled state, the disabled state being a state in which the touch screen does not respond to the user's touch operations;
and ending the disabled state of the touch screen when the effective time of the operation ends or the change of the opening and closing state terminates, where the effective time of the operation is the duration of the disabled state preset for that operation.
In one possible implementation, the preset sensing area includes: a physical sensing area, and/or a virtual sensing area; the physical sensing area is a physical area, provided outside the touch screen, that can detect a user operation, and the virtual sensing area is an area on the touch screen.
In one possible implementation, the preset sensing area includes a virtual sensing area, and when the instructions are executed by the device, after performing the step in which the opening and closing state of the foldable touch screen starts to change, the device further performs the following steps:
acquiring preset virtual sensing areas, and acquiring the display scene on the touch screen at the time the opening and closing state of the touch screen starts to change;
accordingly, when the instructions are executed by the device, the step of detecting the user's operation in the preset sensing area includes:
selecting, from the acquired virtual sensing areas, a first virtual sensing area that does not overlap any user interface (UI) control in the display scene;
and detecting the user's operation in the first virtual sensing area.
In one possible implementation, when the instructions are executed by the device, the device presets the virtual sensing area by:
displaying a drawing interface to the user;
and acquiring a region drawn by the user in the drawing interface, and setting that region as the virtual sensing area.
In one possible implementation, when the instructions are executed by the device, the step of presenting the drawing interface to the user includes:
sampling the user's touch operations on the touch screen while the opening and closing state of the touch screen is changing;
forming a heat map or a region diagram from the sampled touch operations;
and presenting the heat map or region diagram to the user when the drawing interface is presented.
In one possible implementation, when the instructions are executed by the device, the device presets the virtual sensing area by:
dividing the touch screen into a plurality of sub-regions;
sampling the user's touch operations on the touch screen while the opening and closing state of the touch screen is changing, and counting the number of times each of the divided sub-regions is touched by the touch operations;
and selecting a preset number of sub-regions as virtual sensing areas in descending order of the number of times each sub-region is touched.
In one possible implementation, the preset sensing area includes a virtual sensing area, and when the instructions are executed by the device, after performing the step in which the opening and closing state of the foldable touch screen starts to change, the device further performs the following steps:
acquiring the display scene on the touch screen at the time the opening and closing state of the touch screen starts to change, and acquiring the virtual sensing area corresponding to that display scene, where the virtual sensing area corresponding to each display scene is preset;
accordingly, when the instructions are executed by the device, the step of detecting the user's operation in the preset sensing area includes:
detecting the user's operation in the virtual sensing area corresponding to the display scene.
In one possible implementation, when the instructions are executed by the device, after performing the step of setting the touch screen to the disabled state, the device further performs the following step:
displaying a preset user interface (UI) effect on the touch screen, where the preset UI effect is used to prompt the user that the touch screen is in the disabled state.
In one possible implementation, when the instructions are executed by the device, after performing the step of setting the touch screen to the disabled state, the device further performs the following step:
keeping the UI effect displayed on the touch screen for a preset time period after the disabled state of the touch screen ends.
In one possible implementation, when the instructions are executed by the device, the device further performs the following step:
during the change of the opening and closing state, if it is detected that the opening and closing state of the touch screen changes in the direction opposite to the current change direction, ending the process that was triggered by the user during the current change.
In a third aspect, the present application provides a computer readable storage medium having a computer program stored therein, which when run on a computer causes the computer to perform the method of any of the first aspects described above.
In a fourth aspect, the present application provides a computer program for performing the method of any one of the first aspects when the computer program is executed by a computer.
In one possible design, the program in the fourth aspect may be stored in whole or in part on a storage medium packaged with the processor, or in part or in whole on a memory not packaged with the processor.
Drawings
FIG. 1 is an exemplary diagram of an electronic device folded inwardly and outwardly in accordance with an embodiment of the present application;
FIG. 2A is a flow chart of one embodiment of a method for controlling screen status of the present application;
FIG. 2B is a diagram illustrating an example relationship between a foldable touch screen and a rotating mechanism according to an embodiment of the present application;
fig. 2C is an exemplary diagram of an operation of a user during a change of an open/close state of the electronic device according to an embodiment of the present application;
FIG. 3A is a flowchart of another embodiment of a method for controlling screen status of the present application;
FIGS. 3B-3E are diagrams illustrating exemplary placement positions of virtual sensing regions according to embodiments of the present application;
FIG. 3F is an exemplary diagram of possible finger positions during folding in accordance with an embodiment of the present application;
FIGS. 4A to 4E are views illustrating UI effects according to embodiments of the present application;
FIG. 5 is an exemplary diagram of the opening and closing state changing in a direction opposite to the current change direction according to an embodiment of the present application;
FIG. 6A is a flowchart of a control method for screen state according to another embodiment of the present application;
FIGS. 6B-6D are diagrams illustrating the positions of virtual sensing regions according to embodiments of the present application;
FIG. 6E is a diagram illustrating an example virtual sensing area setup interface according to an embodiment of the present application;
FIG. 7A is a flowchart of a control method for screen state according to another embodiment of the present application;
FIG. 7B is a diagram illustrating a user's holding habit according to an embodiment of the present application;
FIG. 7C is a diagram illustrating a region drawn in the drawing interface according to an embodiment of the present application;
FIG. 8A is a flowchart of a control method for screen state according to yet another embodiment of the present application;
FIG. 8B is an exemplary diagram illustrating a heat map in a drawing interface according to an embodiment of the present application;
FIG. 8C is a diagram illustrating division of sub-regions of a touch screen according to an embodiment of the present application;
FIG. 8D is an exemplary diagram illustrating a region diagram in a drawing interface according to an embodiment of the present application;
FIG. 9 is a flowchart of a control method for screen state according to another embodiment of the present application;
FIG. 10A is a flowchart of a control method for screen state according to yet another embodiment of the present application;
FIG. 10B is a diagram illustrating an example of interface UI icon and virtual sensing area locations according to an embodiment of the present application;
FIG. 11 is a flowchart of a control method for screen state according to another embodiment of the present application;
FIG. 12 is a block diagram of one embodiment of a control device for screen state of the present application;
fig. 13 is a schematic structural diagram of an embodiment of an electronic device of the present application.
Detailed Description
The terminology used in the description section of the present application is for the purpose of describing particular embodiments of the present application only and is not intended to be limiting of the present application.
First, related terms in the embodiments of the present application are described by way of example, but not limitation.
An electronic device with a touch screen refers in particular to an electronic device with a touch display screen; the touch screen in such a device can detect a user's finger or another object placed on its surface and can identify where the finger or object is placed.
An electronic device with a foldable touch screen refers specifically to an electronic device whose touch screen can be folded. The foldable touch screen in such a device may be an integrally formed flexible screen; a spliced screen composed of a plurality of flexible screens with a hinge between each two flexible screens; a spliced screen composed of a plurality of rigid screens with a flexible screen between each two rigid screens; a spliced screen composed of a plurality of rigid screens with a hinge between each two rigid screens; and the like.
Referring to FIG. 1, for an electronic device with a foldable touch screen, the folding of the touch screen may take two forms: inward folding, where the touch screen is on the inside after folding, and outward folding, where the touch screen is on the outside after folding. The folding of the electronic device generally follows the folding of the touch screen, as shown in FIG. 1.
An electronic device with a foldable touch screen has different states depending on whether the touch screen is folded: the state in which the touch screen is unfolded is called the unfolded state (see part 11 or part 14 in FIG. 1), and the state in which the touch screen is fully folded is called the fully folded state (see part 13 or part 16 in FIG. 1).
In the embodiments of the present application, a change of the folding angle of the touch screen in the direction from the unfolded state toward the fully folded state is referred to as closing of the touch screen. The closing of the touch screen may run from the unfolded state all the way to the fully folded state, see parts 11 to 13 or parts 14 to 16 in FIG. 1; it may also be any change of the folding angle in that direction, for example from part 11 to part 12, from part 12 to part 13, from part 14 to part 15, or from part 15 to part 16 in FIG. 1.
In the embodiments of the present application, a change of the folding angle of the touch screen in the direction from the fully folded state toward the unfolded state is referred to as opening of the touch screen. Similarly to the closing of the touch screen, the opening may run from the fully folded state all the way to the unfolded state, or may be any change of the folding angle in that direction.
Opening or closing of the touch screen is referred to collectively in the embodiments of the present application as a change of the opening and closing state of the touch screen.
In the prior art, as the touch screens of electronic devices keep growing in size, their bezels keep shrinking. When a user opens or closes an electronic device with a foldable touch screen, whether it folds inward or, especially, outward, it is difficult for the user's hands to avoid contact with the touch screen; the contact area is generally large and the contacts are frequent, so if the touch screen is in an activated state, false touch operations are easily triggered on it. The embodiments of the present application therefore provide a screen state control method and an electronic device that can reduce false touch operations on the touch screen when the user opens or closes an electronic device whose touch screen is in an activated state.
The embodiments of the present application can be applied to an electronic device with a foldable touch screen, which may include, but is not limited to, a mobile phone, a tablet computer, and the like.
Hereinafter, a control method of the screen state in the embodiment of the present application will be described.
FIG. 2A is a flowchart of one embodiment of the screen state control method of the present application. As shown in FIG. 2A, the screen state control method may include:
Step 201: the electronic device detects that the opening and closing state of the touch screen starts to change, detects a user operation in a preset sensing area, and sets the touch screen to a disabled state, where the disabled state is a state in which the touch screen does not respond to the user's touch operations.
Referring to FIG. 2B, whether the touch screen is an integrally formed flexible screen or a spliced screen, a rotating mechanism is generally provided at the foldable portion of the touch screen to support opening and closing; for a spliced screen, the rotating mechanism may be the hinge described above. When the rotating mechanism moves, the opening and closing state of the foldable touch screen is changing, that is, the opening and closing state of the electronic device is changing. Based on this, in one possible implementation, the electronic device may detect whether the opening and closing state of the touch screen changes by detecting whether the rotating mechanism of the touch screen moves; for example, a Hall sensor may be used to detect whether the rotating mechanism moves.
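As a hedged illustration of that idea, the following Kotlin sketch derives "change started" and "change terminated" events from successive fold-angle readings. The HingeSensor interface and the noise threshold are assumptions of the sketch, not a real device API.

    import kotlin.math.abs

    interface HingeSensor {
        // Invokes the callback with the current fold angle, in degrees,
        // whenever the rotating mechanism reports a reading.
        fun onAngleSample(callback: (Float) -> Unit)
    }

    class FoldStateDetector(sensor: HingeSensor,
                            private val onChangeStarted: () -> Unit,
                            private val onChangeTerminated: () -> Unit) {
        private var lastAngle: Float? = null
        private var changing = false

        init {
            sensor.onAngleSample { angle ->
                val previous = lastAngle
                lastAngle = angle
                if (previous != null) {
                    val moving = abs(angle - previous) > 0.5f  // assumed noise threshold
                    when {
                        moving && !changing -> { changing = true; onChangeStarted() }
                        !moving && changing -> { changing = false; onChangeTerminated() }
                    }
                }
            }
        }
    }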
Optionally, the detection of the user operation in the preset sensing area is generally performed while the opening and closing state is changing.
The preset sensing area is an area in which a user operation can be detected. The preset sensing area may include: a physical sensing area, and/or a virtual sensing area; the physical sensing area is a physical area, provided outside the touch screen, that can detect a user operation, and the virtual sensing area is an area on the touch screen.
The user's operation may include, but is not limited to: a press, a single-finger touch, a two-finger touch, a double tap, a press held for a preset time, a single-finger touch held for a preset time, a two-finger touch held for a preset time, and the like.
Step 202: ending the disabled state of the touch screen when the effective time of the user's operation ends or the change of the opening and closing state of the touch screen terminates.
How the disabled state is ended is not limited in the embodiments of the present application; for example, it may depend on the state of the touch screen at the moment the change of the opening and closing state terminates. If, when the change terminates, the touch screen has been folded inward into the fully folded state, as shown in part 16 of FIG. 1, the user cannot use the touch screen at that point, and the touch screen may still be set to the disabled state.
Alternatively, when the effective time of the user's operation ends or the change of the opening and closing state of the touch screen terminates, ending the disabled state of the touch screen may be achieved by setting the touch screen to the activated state or to another state. The activated state is the opposite of the disabled state: in the activated state, the touch screen responds to the user's touch operations.
Ending the disabled state covers the following three cases: in the first case, the effective time of the user's operation ends but the change of the opening and closing state has not terminated, and the disabled state of the touch screen is ended; in the second case, the effective time of the user's operation has not ended but the change of the opening and closing state terminates, and the disabled state of the touch screen is ended; in the third case, the effective time of the user's operation ends and the change of the opening and closing state terminates, and the disabled state of the touch screen is ended.
The effective time of the user's operation is the duration of the disabled state preset for that operation.
In one possible implementation, if the user's operation is an operation without a time limit, such as a press, a single-finger touch, or a two-finger touch, the effective time of the operation may be the time during which the user performs the operation. That is, while the opening and closing state of the touch screen is changing, the electronic device detects the user's operation in the preset sensing area and sets the touch screen to the disabled state; when it detects that the user's operation has ended, it ends the disabled state of the touch screen. From the user's perspective, the touch screen is disabled while the user operates in the preset sensing area, and the disabled state ends when the operation in the preset sensing area ends.
For example, referring to FIG. 2C, assume the user's operation is a press and the touch screen folds outward, with the full change of the opening and closing state running from the unfolded state to the fully folded state. The user presses the preset sensing area 211 while the screen folds from the unfolded state shown in part 21 to the folding angle shown in part 22, and does not press the preset sensing area 211 while it folds from the folding angle shown in part 22 to the fully folded state. The electronic device then detects the pressing operation in the preset sensing area during the folding from part 21 to part 22 and sets the touch screen to the disabled state; during the folding from part 22 to the fully folded state, it no longer detects the pressing operation in the preset sensing area, so it ends the disabled state and sets the touch screen to the activated state.
In another possible implementation, if the user's operation is an operation with a time limit, such as a double tap, a press held for a preset time, a single-finger touch held for a preset time, or a two-finger touch held for a preset time, the effective time of the operation may be a preset time period after the operation ends, or may run from the end of the operation until the change of the opening and closing state terminates, and so on. When the effective time runs from the end of the operation until the change of the opening and closing state terminates, the electronic device detects the user's operation in the preset sensing area while the opening and closing state is changing and keeps the touch screen in the disabled state until the change terminates, after which it sets the touch screen to the activated state. From the user's perspective, the user operates once in the preset sensing area, and the touch screen stays disabled until the change of the opening and closing state ends.
For example, referring to FIG. 2C, assume the user's operation is a press held for 1 s and the full change of the opening and closing state runs from the unfolded state to the fully folded state. The user presses the preset sensing area 211 while the screen folds from the unfolded state shown in part 21 to the folding angle shown in part 22, a process lasting 1.5 s; the user does not press the preset sensing area 211 while the screen folds from part 22 to the fully folded state. The electronic device detects the pressing operation in the preset sensing area during the folding from part 21 to part 22, with a pressing duration of 1.5 s, and sets the touch screen to the disabled state; it keeps the touch screen in the disabled state during the folding from part 22 to the fully folded state even though no pressing operation is detected in the preset sensing area.
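The two examples above can be captured by one small state machine: a plain press disables the screen only while held, while a press held for at least the preset time keeps the screen disabled until the change of the opening and closing state terminates. The following Kotlin sketch is a simplified assumption of such logic (names invented, timers and touch plumbing elided); it is not the patent's reference implementation.

    class ScreenStateController(private val setTouchScreenEnabled: (Boolean) -> Unit,
                                private val holdThresholdMillis: Long = 1000L) {
        private var foldChanging = false

        fun onFoldChangeStarted() { foldChanging = true }

        // Press begins in the preset sensing area during a fold change.
        fun onSensingAreaPressed() {
            if (foldChanging) setTouchScreenEnabled(false)
        }

        // Press ends; holdMillis is how long it was held.
        fun onSensingAreaReleased(holdMillis: Long) {
            if (!foldChanging) return
            if (holdMillis < holdThresholdMillis) {
                // Operation without a time limit: disabled only while held.
                setTouchScreenEnabled(true)
            }
            // Otherwise the press counts as "held for the preset time": the
            // screen stays disabled until the opening and closing change ends.
        }

        fun onFoldChangeTerminated() {
            foldChanging = false
            // Simplification: always re-activate; step 202 notes the screen may
            // instead stay disabled, e.g. when fully folded inward.
            setTouchScreenEnabled(true)
        }
    }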
Optionally, after setting the touch screen to the disabled state, the embodiments of the present application may further include:
displaying a preset user interface (UI) effect on the touch screen, where the preset UI effect is used to prompt the user that the touch screen is in the disabled state.
Optionally, the UI effect may be kept displayed on the touch screen for a preset time period after the disabled state of the touch screen ends.
Optionally, during the change of the opening and closing state of the touch screen, if it is detected that the opening and closing state of the touch screen changes in the direction opposite to the current change direction, the process triggered by the user during the current change is ended.
In the screen state control method shown in FIG. 2A, the electronic device detects that the opening and closing state of the foldable touch screen is changing; if a user operation is detected in the preset sensing area during the change, the touch screen of the electronic device is set to the disabled state, and the disabled state ends when the effective time of the user's operation ends or the change of the opening and closing state terminates. In this way, false touch operations on the touch screen can be reduced when the user opens or closes an electronic device whose touch screen is in an activated state.
FIG. 3A is a flowchart of another embodiment of the screen state control method of the present application. As shown in FIG. 3A, the screen state control method may include:
Step 301: a physical sensing area is set on the electronic device in advance.
The physical sensing area is generally provided on the electronic device at the time of manufacture.
The physical sensing area may be set in a non-touch-screen area of the electronic device. For example, referring to FIG. 3B and FIG. 3C, the physical sensing area may be placed on the left side of the electronic device, and/or its right side, and/or its upper side, and/or its lower side, and/or its back, and/or its front, and so on. Here, the front of the electronic device is the surface on which the touch screen is located. The locations, shapes, and sizes of the physical sensing areas in FIG. 3B and FIG. 3C are only examples and are not intended to limit the physical sensing area of the embodiments of the present application.
The number of physical sensing areas may be one or more. For example, referring to FIG. 3B, the physical sensing area may be placed in any one or more of the 6 areas on the front of the electronic device near the touch screen, and/or in any one or more of the 3 areas on the left side of the electronic device, and/or in any one or more of the 3 areas on the right side, and/or in any one or more of the 3 areas on the upper side, and/or in any one or more of the 3 areas on the lower side, and/or, as shown in FIG. 3C, in any one or more of the 8 areas on the back of the electronic device.
In one possible implementation, considering cost and a clean design style, the number of physical sensing areas may be 1 or 2. For example, referring to FIG. 3D, one physical sensing area 211 is provided on the upper side of the electronic device, which saves cost and keeps the design of the front of the electronic device simple.
In another possible implementation, the physical sensing area may be placed in an area that matches the user's usage or holding habits. For example, referring to FIG. 3E, if the user's right thumb rests on the middle of the right side of the electronic device while the opening and closing state of the touch screen changes, the physical sensing area may be placed on the front of the electronic device, near the middle of the touch screen area, in area 212.
In yet another possible implementation, the placement of the physical sensing area may also take into account whether the electronic device folds inward or outward. For example, if the electronic device folds inward, the physical sensing area may be placed on the back; if it folds outward, on the front. In the case where the touch screen stays disabled only while the physical sensing area is continuously pressed, this prevents the physical sensing area from losing its effect because the fold angle becomes too small during folding; as shown in FIG. 3F, a finger pressing an ill-placed physical sensing area would have to leave it before the folding is complete.
The physical sensing area may be a press switch, a button, or a touch-sensing area. In one possible implementation, if the physical sensing area is a touch-sensing area and there is only one, it may be combined with a fingerprint reader, so that the electronic device can incorporate fingerprint authentication in a subsequent step, increasing the security of the electronic device.
Step 302: the electronic device detects whether the opening and closing state of the touch screen changes; if so, step 303 is performed; otherwise, step 302 continues.
Step 303: while the opening and closing state of the touch screen is changing, the electronic device detects the user's operation in the preset physical sensing area; if a user operation is detected, step 304 is performed; if no user operation is detected, step 305 is performed.
If the number of physical sensing areas is greater than 1, step 304 may be performed as soon as a user operation is detected in any one physical sensing area; in practical applications, however, it may be preset that a user operation must be detected in 2 or more physical sensing areas before the operation is recognized as detected in the physical sensing area, and step 304 is then performed.
Step 304: the electronic device sets the touch screen to the disabled state until the effective time of the user's operation ends or the change of the opening and closing state of the touch screen terminates, and displays a preset user interface (UI) effect on the touch screen until the disabled state of the touch screen ends, or until a preset time period after the disabled state ends; this branch of the flow then ends.
After the effective time of the user's operation ends or the change of the opening and closing state of the touch screen terminates, the touch screen may be set to the activated state.
The preset UI effect is used to prompt the user that the touch screen is in the disabled state.
The limitation "until the disabled state of the touch screen ends, or until a preset time period after the disabled state ends" means the following: the display of the UI effect may disappear immediately when the disabled state of the touch screen ends, or may disappear after a delay of a preset time period; the specific value of the preset time period is not limited in the embodiments of the present application.
The UI effect may include, but is not limited to:
popping up a notification message and a corresponding notification-message disappearing effect; for example, as shown in FIG. 4A, a notification message pops up in the middle of the touch screen and disappears after being displayed for a certain period of time;
or, a water-surface ripple effect and a corresponding ripple disappearing effect; for example, as shown in FIG. 4B, the ripple effect gradually spreads from the middle of the touch screen toward the edges and, after being displayed for a certain period of time, gradually contracts back to the middle and disappears;
or, a lock icon appearing effect and a corresponding lock icon disappearing effect. The lock icon effect may be: displaying a closed lock at one or more positions of the touch screen. The lock icon disappearing effect may be: the lock icon simply disappearing, or an opened lock being displayed at one or more positions of the touch screen. The positions where the lock appears may include, but are not limited to: a side, a corner, or the center of the touch screen, and/or a position near where the user's finger touches the touch screen. For example, as shown in FIG. 4C, a lock icon appears in the middle of the touch screen and disappears after being displayed for a certain period of time; or, as shown in FIG. 4D, closed locks are displayed at the lower-left and upper-right corners of the touch screen and, after a certain period of time, opened locks are displayed and then disappear;
or, a simulated drawer pull-out effect and a corresponding simulated drawer closing effect; for example, as shown in FIG. 4E, the simulated drawer pull-out effect may be: a semitransparent bar frame pulled out from each of the two side edges of the touch screen; correspondingly, the simulated drawer closing effect may be: the semitransparent bar frames retracting to the two side edges. The semitransparent bar frames are shown in FIG. 4E as diagonally filled bar frames.
When the UI effect is displayed on the touch screen, it may be displayed on the entire touch screen, for example as shown in FIG. 4B and FIG. 4E, or only in a partial area of the touch screen, for example as shown in FIG. 4A, FIG. 4C, and FIG. 4D.
Step 305: while the opening and closing state of the touch screen is changing, if the electronic device detects that the opening and closing state changes in the direction opposite to the current change direction, the process triggered by the user during the current change of the opening and closing state is ended.
Based on the descriptions of opening and closing of the touch screen above, opening and closing are two processes with opposite change directions. The electronic device detecting that the opening and closing state changes in the direction opposite to the current change direction means detecting that the touch screen switches from opening to closing, or from closing to opening. For example, referring to FIG. 5, while the touch screen is folding from the unfolded state of part 51 to the folding angle of part 52, if the touch screen is then unfolded from the folding angle of part 52 to the unfolded state of part 53, the electronic device detects, in the change from part 52 onward, that the opening and closing state is changing in the direction opposite to the current change, namely from closing to opening. If the opening and closing state is detected by detecting whether the rotating mechanism of the touch screen moves, detecting the opposite-direction change may include: the electronic device detects that the rotating mechanism of the touch screen moves in the opposite direction.
Through this step, if the user did not operate the preset sensing area, or the operation did not fall within the preset sensing area, and during folding a link on the touch screen was falsely touched and a new process was triggered, the user can cancel the new process by changing the opening and closing state in the opposite direction, as sketched below.
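A hedged sketch of this cancellation follows: the sign of successive fold-angle deltas gives the change direction, and a sign flip ends the process triggered during the current change. The cancelTriggeredProcess hook is an assumption standing in for whatever the system would actually do.

    class ReversalWatcher(private val cancelTriggeredProcess: () -> Unit) {
        private var lastAngle: Float? = null
        private var direction = 0  // +1 opening, -1 closing, 0 unknown

        // Feed fold-angle samples taken while the opening and closing
        // state is changing.
        fun onAngleSample(angle: Float) {
            val previous = lastAngle
            lastAngle = angle
            if (previous == null) return
            val newDirection = when {
                angle > previous -> 1    // opening
                angle < previous -> -1   // closing
                else -> return           // no movement, direction unchanged
            }
            if (direction != 0 && newDirection != direction) {
                cancelTriggeredProcess() // direction reversed mid-change
            }
            direction = newDirection
        }
    }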
Optionally, when the number of physical sensing areas in step 301 is 1 and the preset operation in step 303 is a touch operation, so that the fingerprint reader can identify the user's fingerprint, step 303 may further include: if the electronic device detects the user's operation, acquiring the user's fingerprint in the physical sensing area and comparing it with the pre-stored fingerprint. Correspondingly, if the electronic device detects the user's operation and the comparison result is a match, step 304 is performed; otherwise, step 305 is performed. How the fingerprint is acquired and how fingerprints are compared are not described in detail in the present application. Fingerprint comparison ensures that the folding operation and the disabling operation on the physical sensing area are performed by the owner of the electronic device, improving the security of the electronic device.
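The gating just described might look like the following Kotlin sketch. The FingerprintReader interface and its methods are assumptions made for illustration; real fingerprint APIs differ by platform, and the matching itself is out of scope here, as in the text above.

    interface FingerprintReader {
        fun capture(): ByteArray                       // print sensed in the physical sensing area
        fun matchesEnrolled(print: ByteArray): Boolean // compare with the pre-stored fingerprint
    }

    fun onSensingAreaTouched(reader: FingerprintReader,
                             performStep304: () -> Unit,
                             performStep305: () -> Unit) {
        val print = reader.capture()
        if (reader.matchesEnrolled(print)) {
            performStep304()  // owner confirmed: disable the touch screen
        } else {
            performStep305()  // not the owner: treat as no valid operation
        }
    }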
Unlike the embodiment of FIG. 3A, in which a physical sensing area is preset, the screen state control method of the embodiment of FIG. 6A presets a virtual sensing area. The specific steps of the embodiment shown in FIG. 6A follow FIG. 3A and its description and are not repeated here; the main difference is that the physical sensing area is replaced by the virtual sensing area.
Hereinafter, a virtual sensing area in the embodiment of the present application is described.
The virtual sensing area in the embodiments of the present application is located on the touch screen. The virtual sensing area may be placed in any area of the touch screen; the size, shape, and number of virtual sensing areas are not limited in the embodiments of the present application.
It should be noted that the virtual sensing area is hidden in the normal state and may not be perceived by the user while the touch screen displays normally, and the electronic device does not detect user operations related to disabling the touch screen in the virtual sensing area at that time. Only when the electronic device detects that the opening and closing state of the touch screen is changing does it perform the sensing function in the virtual sensing area and detect whether the user performs, in the virtual sensing area, the operation that triggers disabling of the touch screen.
In one possible implementation, to make use more convenient for the user, the virtual sensing area may be placed at an edge of the touch screen. For example, referring to FIG. 6B, the virtual sensing area may be placed in one or more of the areas at the 4 corners of the touch screen; and/or, referring to FIG. 6C, in one or both of the two edge areas near the left and right edges of the touch screen, shown as the diagonally filled areas; and/or, as shown in FIG. 6D, in one or more of the 6 edge areas near the left and right edges of the touch screen. The positions, shapes, and sizes of the virtual sensing areas in FIG. 6B to FIG. 6D are only examples and are not intended to limit the virtual sensing area of the embodiments of the present application.
In one possible implementation, 1 or more virtual sensing areas may be preset in the electronic device by its designer and manufacturer; taking FIG. 6D as an example, the 6 areas in FIG. 6D are preset in the electronic device as virtual sensing areas. In another possible implementation, the designer and manufacturer of the electronic device may preset a number of virtual sensing area options in the electronic device; as shown in FIG. 6E, 8 virtual sensing area options are offered, the user selects one or more options matching his or her own usage habits on the virtual sensing area setting interface of the electronic device, and after the user selects and confirms, the electronic device sets the virtual sensing areas according to the options the user selected. It should be noted that the user may modify the preset virtual sensing areas in the electronic device on the virtual sensing area setting interface; the embodiments of the present application are not limited in this respect.
In the embodiment of the present application, when the UI effect is displayed on the touch screen, the UI effect may be displayed on the entire touch screen or a part of the touch screen, except that the UI effect may be similar to the embodiment shown in fig. 3A; in this embodiment of the present application, the UI effect may also be displayed only in a preset virtual sensing area, and even only in a virtual sensing area in which the operation of the user is detected. For example, if the preset virtual sensing area on the electronic device is area 1 to area 6 in fig. 6D, and the user operation is detected in area 1 and area 5, in the embodiment of the present application, UI effects may be displayed in all areas 1 to 6, or UI effects may be displayed only in area 1 and area 5, which is not limited in the embodiment of the present application.
When the UI effect is displayed in the virtual sensing area, it may follow the description in the embodiment shown in fig. 3A. In addition, the UI effect displayed in the virtual sensing area may further include, but is not limited to: a special display of the virtual sensing area, such as a change of its color, brightness, transparency, or contrast, or a dynamic effect superimposed on the virtual sensing area, such as the simulated drawer pull-out effect or the water-surface ripple effect described in the embodiment of fig. 3A.
Unlike the embodiment shown in fig. 6A, in which the virtual sensing area is preset by the electronic device or the user selects from preset virtual sensing area options, the embodiment of the present application shown in fig. 7A allows the user to hand-draw the virtual sensing area autonomously, so as to better match and satisfy the user's usage habits. As shown in fig. 7A, on the basis of the embodiment shown in fig. 6A, the method may further include step 701 before step 601:
step 701: the electronic device displays a drawing interface to the user and acquires the region drawn by the user in the drawing interface;
accordingly, step 601 may include: the electronic device presets the virtual sensing area according to the region drawn by the user.
The user may draw 1 or more regions in the drawing interface; the embodiments of the present application are not limited in this respect.
Referring to fig. 7B, suppose that when opening and closing the touch screen the user habitually touches the lower right portion of the touch screen with the right hand and the left edge of the touch screen with the left hand. The electronic device may then provide a virtual sensing area drawing interface, on which the user can draw virtual sensing areas similar to those shown in fig. 7C according to his or her own usage habits.
After the virtual sensing area has been set, the user can redraw it on the drawing interface provided by the electronic device, or modify a previously drawn area, thereby updating the virtual sensing area pre-stored in the electronic device so that it better matches the user's usage habits.
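One plausible way to turn the drawing gesture of step 701 into a stored virtual sensing area is to accumulate the sampled touch points of the gesture into a bounding rectangle, as in the sketch below. The bounding-box simplification and all identifiers are assumptions for illustration; an actual implementation could equally store the full drawn path. The Rect class is the one from the earlier sketch.

    // Sketch: collect the points of a drawing gesture and store their
    // bounding box as a user-defined virtual sensing area.
    class RegionDrawingSession {
        private val xs = mutableListOf<Int>()
        private val ys = mutableListOf<Int>()

        fun onTouchPoint(x: Int, y: Int) { xs += x; ys += y }

        // Returns null if the user has not drawn anything yet.
        fun finish(): Rect? =
            if (xs.isEmpty()) null
            else Rect(xs.minOrNull()!!, ys.minOrNull()!!,
                      xs.maxOrNull()!! + 1, ys.maxOrNull()!! + 1)
    }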
To enable the user to draw virtual sensing areas that match his or her habits more accurately and effectively, the embodiment of the present application shown in fig. 8A builds on the drawing interface of the embodiment shown in fig. 7A by further showing the user a heat map or region map of touch operations in the drawing interface. As shown in fig. 8A, before step 701 shown in fig. 7A, the method may further include the following steps:
step 801: the electronic device samples the user's touch operations on the touch screen while the user opens or closes the electronic device (that is, while the opening and closing state of the touch screen is changing).
The electronic device could sample the user's touch operations every time the touch screen is opened or closed, but this consumes a relatively large amount of power; it is therefore preferable to limit the sampling period or the sampling count so as to reduce the power consumed by the sampling operation.
In one possible implementation, the sampling period may be preset in the electronic device, for example, within the first month of use of the electronic device, or some fixed period of time each day; alternatively, the number of samplings may be preset in the electronic device, for example 1000 times.
In another possible implementation, the sampling period or sampling count may be set by the user. For example, the user may set the electronic device to sample for one month from a certain point in time, to sample during a certain fixed period each day, or to sample 100 times per day, and so on.
Step 802: the electronic device forms a heat map or a region map from the sampled touch operations.
A heat map generally describes the heat level of a continuous area by overlaying regions of different colors. In the example shown in fig. 8B, regions with different heat values are displayed with different fill effects: the region touched most often by the user, that is, the region with the highest heat, is shown with a diagonal-stripe fill; the region touched the second most often, with the next-highest heat, is shown with a different fill; and the region touched the third most often, with the lowest heat, is shown with a vertical-stripe fill.
When the region map is formed, the touch screen may be divided into a plurality of sub-areas, and the sub-areas whose touch count reaches a preset threshold are drawn to form the region map; the specific threshold can be chosen freely in practical applications. For example, as shown in fig. 8C, if the touch screen is divided into 16 sub-areas and the sub-areas whose touch counts reach the preset threshold are sub-areas 5, 8, 9 and 12 (4 areas in total), the region map is drawn as shown in fig. 8D.
Accordingly, step 701 may further include: when the electronic device displays the drawing interface to the user, it also displays the heat map or region map to the user.
In one possible implementation, the heat map or region map may be presented to the user as the background of the drawing interface, as shown in fig. 8B and fig. 8D. By displaying the heat map or region map, the user can understand his or her own habits more intuitively and thus draw, in step 701, virtual sensing areas that match those habits more accurately and effectively.
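Steps 801 and 802 can be illustrated with the following sketch, which counts sampled touch points per grid cell and derives a region map as the cells whose count reaches a threshold. The grid size, the sampling budget, and all identifiers are assumptions introduced for explanation only.

    // Sketch of steps 801-802: per-cell touch counting with a sampling
    // budget, and a region map of cells reaching a touch-count threshold.
    class TouchSampler(
        private val cols: Int, private val rows: Int,
        private val screenW: Int, private val screenH: Int,
        private val maxSamples: Int = 1000   // limits power cost, cf. step 801
    ) {
        private val counts = IntArray(cols * rows)
        private var samples = 0

        // Called for touch points sampled while the fold state is changing.
        fun sample(x: Int, y: Int) {
            if (samples >= maxSamples) return
            val c = (x * cols / screenW).coerceIn(0, cols - 1)
            val r = (y * rows / screenH).coerceIn(0, rows - 1)
            counts[r * cols + c]++
            samples++
        }

        // Region map: indices of sub-areas touched at least `threshold` times.
        fun regionMap(threshold: Int): List<Int> =
            counts.withIndex().filter { it.value >= threshold }.map { it.index }
    }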
Unlike the embodiment shown in fig. 6A, in which the virtual sensing area is preset by the electronic device or the user selects a preset virtual sensing area option, the embodiment shown in fig. 9 provides another method for the electronic device to set the virtual sensing area. As shown in fig. 9, on the basis of the embodiment shown in fig. 6A, the following steps may be further included before step 601:
Step 901: the electronic device divides the touch screen into a plurality of sub-areas.
Step 902: the electronic device samples the user's touch operations on the touch screen while the electronic device is being opened or closed, and counts the number of times each of the divided sub-areas is touched by the sampled touch operations.
For the setting of the sampling timing, refer to the related description in step 801, which is not repeated here.
Step 903: the electronic device selects a preset number of sub-areas in descending order of the number of times each sub-area was touched.
For example, referring to fig. 8C, the electronic device divides the touch screen into 16 sub-areas and samples the user's touch operations on the touch screen while the device is opened or closed, over a period of time or for a preset number of times; for example, it counts how many times each sub-area is touched over 100 opening or closing operations. Suppose the counts, from high to low, are: sub-area 5, 80 times; sub-area 12, 59 times; sub-area 9, 40 times; sub-area 8, 15 times; and so on. If the preset number of virtual sensing areas is, for example, 2, the electronic device selects sub-areas 5 and 12 as the virtual sensing areas.
In another possible implementation, the centrally located sub-areas 7 to 12 may be excluded from the count, and only the edge sub-areas 1 to 6 counted; this is not limited here. The above division of the touch screen into 16 areas is merely an example; the number of sub-areas and the manner of division in practical applications are not limited in the embodiments of the present application.
Alternatively, a machine learning method may be used to identify the areas touched most often by the user while the electronic device is opened and closed.
Accordingly, step 601 in this embodiment of the present application may include: the electronic device presets the virtual sensing area according to the selected sub-areas.
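For illustration, step 903 amounts to a top-N selection over the per-sub-area touch counts, as sketched below; counts here are indexed from 0, whereas the worked example above numbers sub-areas from 1. The function name and signature are assumptions introduced for explanation.

    // Sketch of step 903: choose the n most-touched sub-areas as the
    // virtual sensing areas; counts[i] is the touch count of sub-area i.
    fun selectTopAreas(counts: IntArray, n: Int): List<Int> =
        counts.withIndex()
            .sortedByDescending { it.value }
            .take(n)
            .map { it.index }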
In practical applications, the three setting methods may be combined: the preset virtual sensing area of fig. 6A, the user-drawn virtual sensing areas of the embodiments shown in figs. 7A and 8A, and the sampling-based setting of the embodiment shown in fig. 9. The user may decide autonomously which setting methods to use, and may choose to delete the virtual sensing areas set by some of them.
Unlike the embodiment shown in fig. 6A, in which the virtual sensing area is preset by the electronic device, the embodiment shown in fig. 10A sets the virtual sensing area according to the display scene on the touch screen, in order to prevent a preset virtual sensing area from overlapping a UI control in that scene. As shown in fig. 10A, step 601 is replaced by step 1001, and step 1002 is added between step 602 and step 603. Specifically:
Step 1001: the electronic device presets the virtual sensing area corresponding to each display scene.
The virtual sensing area of each scene may be set by any of the methods of figs. 6A to 9; the main difference is that in the embodiment shown in fig. 10A the virtual sensing area corresponds to a display scene, and different display scenes may correspond to the same or different virtual sensing areas.
Because the user may open or close the electronic device under any application interface, such as video or picture playback, web browsing, or a game interface, different virtual sensing areas may be set for different display scenes. The virtual sensing areas in each scene should be kept distinct from all elements that carry display or hidden control instructions in that scene, such as icons, images, and buttons, so that the electronic device can distinguish whether an operation performed by the user while the touch screen is being opened or closed is a disabling operation on a virtual sensing area or an operation on a UI control in the display scene.
For example, in the display scene shown in fig. 10B, the interface shown in fig. 10B is being displayed and the square portions of the interface are UI icons. If the preset virtual sensing areas are areas 1 to 4 shown in fig. 10B, areas 1 and 4 obviously overlap the UI icons; in this case the virtual sensing areas of this display scene may be set to areas 2 and 3.
If the determination result in step 602 is yes, step 1002 is executed: while the opening and closing state of the touch screen is changing, acquire the display scene shown on the touch screen when the opening and closing state started to change, and acquire the virtual sensing area corresponding to that display scene; then execute step 603;
accordingly, in step 603 the user's operation is detected in the virtual sensing area acquired in step 1002.
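For illustration, the per-scene lookup of steps 1001 and 1002 can be sketched as a preset table from display scene to virtual sensing areas, consulted when the opening and closing state starts to change. Scene identifiers, the fallback behavior, and all names are assumptions; the Rect class is the one from the earlier sketch.

    // Sketch of steps 1001-1002: preset mapping from display scene to the
    // virtual sensing areas to use in that scene.
    class SceneAreaTable(
        private val table: Map<String, List<Rect>>,
        private val default: List<Rect>
    ) {
        fun areasFor(scene: String): List<Rect> = table[scene] ?: default
    }

    // Hypothetical usage: a video-playback scene uses areas 2 and 3 only
    // (cf. fig. 10B); other scenes fall back to a default corner area.
    val sceneTable = SceneAreaTable(
        mapOf("videoPlayback" to listOf(Rect(0, 1000, 100, 1400),
                                        Rect(980, 1000, 1080, 1400))),
        default = listOf(Rect(0, 0, 200, 200)))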
In the embodiment shown in fig. 11, in order to prevent a preset virtual sensing area from overlapping a UI control in the display scene on the touch screen, the virtual sensing area used for detecting the user's operation is selected according to the display scene. Specifically, as shown in fig. 11, step 1101 is further included between step 602 and step 603 of the embodiments shown in figs. 6A to 9:
step 1101: while the opening and closing state of the touch screen is changing, the electronic device acquires the preset virtual sensing areas and the display scene shown on the touch screen when the opening and closing state started to change, and selects from the preset virtual sensing areas those that do not coincide with the UI controls in the display scene.
In the embodiment of the present application shown in fig. 11, only steps 601 to 605 of the flow shown in fig. 6A are shown; other possible implementation steps may refer to figs. 7A to 9 and are not repeated here.
In one possible implementation, the selected virtual sensing area may be highlighted with a UI effect to show the user where the virtual sensing area is in the current scene; for the UI effects, refer to the related descriptions in fig. 3A and fig. 6A, which are not repeated here.
The number of virtual sensing areas selected according to the display scene in this step may be 1 or more.
Accordingly, in step 603, the electronic device detects the user's operation in the virtual sensing areas selected according to the display scene.
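The selection of step 1101 can be illustrated by filtering the preset areas against the bounds of the UI controls currently on screen, as in the sketch below. The rectangle intersection test and all names are assumptions; the Rect class is the one from the earlier sketch.

    // Sketch of step 1101: keep only the preset virtual sensing areas that
    // do not coincide with any UI control in the current display scene.
    fun Rect.intersects(o: Rect): Boolean =
        left < o.right && o.left < right && top < o.bottom && o.top < bottom

    fun selectNonOverlapping(preset: List<Rect>, uiControls: List<Rect>): List<Rect> =
        preset.filter { area -> uiControls.none { area.intersects(it) } }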
It should be noted that in the above embodiments the electronic device sets either a physical sensing area or a virtual sensing area; in fact, the electronic device may set both at the same time, and the specific implementation may refer to the above embodiments and is not repeated here.
It is to be understood that some or all of the steps or operations in the above embodiments are merely examples; embodiments of the present application may also perform other operations or variations of the operations. Furthermore, the steps may be performed in an order different from that presented in the above embodiments, and it is possible that not all of the operations in the above embodiments need be performed.
Fig. 12 is a block diagram of an embodiment of a screen state control device of the present application. As shown in fig. 12, the device 1200 may include:
the detecting unit 1210, configured to detect the user's operation in a preset sensing area after the opening and closing state of the foldable touch screen starts to change;
the state setting unit 1220, configured to set the touch screen to the disabled state when the user's operation is detected in the preset sensing area, the disabled state being a state in which the touch screen does not respond to the user's touch operations; and to end the disabled state of the touch screen when the valid time of the operation ends or the change of the opening and closing state ends, where the valid time of the operation is the duration of the disabled state preset for the operation.
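For illustration only, the timing behavior of the state setting unit, namely that the disabled state ends either when the valid time of the operation expires or when the change of the opening and closing state ends, whichever happens first, might be sketched as follows. The timer mechanism and all identifiers (DisableController, onTriggerDetected, and so on) are assumptions introduced here, not the claimed implementation.

    import java.util.Timer
    import kotlin.concurrent.schedule

    // Sketch: disable touch when a trigger operation is detected; re-enable
    // when the operation's valid time expires or the fold-state change ends.
    class DisableController(private val validTimeMs: Long) {
        @Volatile var touchDisabled = false
            private set
        private var timer: Timer? = null

        fun onTriggerDetected() {
            touchDisabled = true
            timer?.cancel()
            timer = Timer().apply {
                schedule(validTimeMs) { endDisabledState() }   // valid time over
            }
        }

        fun onFoldStateChangeEnded() = endDisabledState()      // change finished

        @Synchronized private fun endDisabledState() {
            timer?.cancel()
            timer = null
            touchDisabled = false
        }
    }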
Optionally, the preset sensing area includes: a physical sensing area and/or a virtual sensing area; the physical sensing area is a physical area disposed outside the touch screen in which the user's operation can be detected, and the virtual sensing area is an area on the touch screen.
Optionally, the preset sensing area includes: the apparatus 1200 may further include a virtual sensing region:
the first obtaining unit 1220 is configured to obtain a preset virtual sensing area after the opening and closing state of the foldable touch screen starts to change, and obtain a display scene on the touch screen when the opening and closing state of the touch screen starts to change;
Accordingly, the detection unit 1210 may be specifically configured to: select, from the acquired virtual sensing areas, a first virtual sensing area that does not coincide with a user interface (UI) control in the display scene; and detect the user's operation in the first virtual sensing area.
Optionally, the apparatus 1200 may further include:
the first area setting unit, configured to display a drawing interface to the user, acquire the region drawn by the user in the drawing interface, and set that region as a virtual sensing area.
Optionally, the first area setting unit may be specifically configured to: sample the user's touch operations on the touch screen while the opening and closing state of the touch screen is changing; form a heat map or region map from the sampled touch operations; and present the heat map or region map to the user when presenting the drawing interface.
Optionally, the apparatus 1200 may further include:
the second region setting unit is used for dividing the region of the touch screen to obtain a plurality of sub-regions; sampling touch operation of a user on the touch screen in the process of changing the opening and closing state of the touch screen, and counting the number of times that each sub-area obtained by dividing is touched by the touch operation; and selecting a preset number of sub-areas as virtual sensing areas according to the number of times each sub-area is touched and in the order from high to low.
Optionally, the preset sensing area includes: the apparatus 1200 may further include a virtual sensing region:
the second acquisition unit is used for acquiring a display scene on the touch screen when the opening and closing state of the foldable touch screen starts to change after the opening and closing state of the foldable touch screen starts to change, and acquiring a virtual sensing area corresponding to the display scene; wherein, the virtual sensing area corresponding to the display scene is preset;
accordingly, the detection unit 1210 may specifically be configured to: and detecting the operation of the user in the virtual sensing area corresponding to the display scene.
Optionally, the apparatus 1200 may further include: and the effect display unit is used for displaying a preset User Interface (UI) effect on the touch screen after the touch screen is set to be in the disabled state, wherein the preset UI effect is used for prompting a user that the touch screen is in the disabled state.
Optionally, the effect display unit may further be configured to: and in a preset time period after the disabled state of the touch screen is ended, maintaining the UI effect on the touch screen.
Optionally, the apparatus 1200 may further include:
the process control unit, configured to end the process triggered by the user during the current change if, while the opening and closing state is changing, a change of the opening and closing state of the touch screen in the direction opposite to the current change direction is detected.
The screen state control device 1200 provided in the embodiment shown in fig. 12 may be used to implement the technical solutions of the method embodiments shown in figs. 2A to 11 of the present application; for the implementation principles and technical effects, refer to the related descriptions in the method embodiments.
It should be understood that the above division of the units of the screen state control device shown in fig. 12 is merely a division of logical functions; in actual implementation, the units may be fully or partially integrated into one physical entity or physically separated. These units may all be implemented in the form of software invoked by a processing element, or all in hardware; or some units may be implemented in software invoked by a processing element and others in hardware. For example, the state setting unit may be a separately established processing element, or may be integrated in a chip of the electronic device; the implementation of the other units is similar. Furthermore, all or part of these units may be integrated together or implemented independently. In implementation, each step of the above method, or each of the above units, can be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the above units may be one or more integrated circuits configured to implement the above methods, such as one or more application specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field programmable gate arrays (FPGA), etc. For another example, the units may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Fig. 13 is a schematic structural diagram of an embodiment of an electronic device of the present application. As shown in fig. 13, the electronic device may include: a foldable touch screen; one or more processors; a memory; and one or more computer programs.
Wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions that, when executed by the device, cause the device to perform the steps of:
the opening and closing state of the foldable touch screen starts to change, the user's operation is detected in a preset sensing area, and the touch screen is set to the disabled state, where the disabled state means that the touch screen does not respond to the user's touch operations;
and when the valid time of the operation ends or the change of the opening and closing state ends, the disabled state of the touch screen is ended, where the valid time of the operation is the duration of the disabled state preset for the operation.
In one possible implementation, the preset sensing area includes: a physical sensing area and/or a virtual sensing area; the physical sensing area is a physical area disposed outside the touch screen in which the user's operation can be detected, and the virtual sensing area is an area on the touch screen.
In one possible implementation, the preset sensing area includes a virtual sensing area, and when the instructions are executed by the device, after the device performs the step in which the opening and closing state of the foldable touch screen starts to change, the device further performs the following steps:
acquiring a preset virtual sensing area, and acquiring a display scene on the touch screen when the opening and closing state of the touch screen starts to change;
accordingly, when the instructions are executed by the device, the step of detecting the user's operation in the preset sensing area includes:
selecting, from the acquired virtual sensing areas, a first virtual sensing area that does not coincide with a user interface (UI) control in the display scene;
The user's operation is detected in the first virtual sensing area.
In one possible implementation, the instructions, when executed by the device, cause the device to preset the virtual sensing region by:
displaying a drawing interface to a user;
and acquiring a region drawn in the drawing interface by a user, and setting the region as a virtual sensing region.
In one possible implementation, when the instructions are executed by the device, the step of displaying the drawing interface to the user includes:
sampling touch operation of a user on the touch screen in the process of changing the opening and closing state of the touch screen;
forming a heat map or a region map from the sampled touch operations;
when the drawing interface is presented to the user, the heat map or region map is also presented to the user.
In one possible implementation, the instructions, when executed by the device, cause the device to preset the virtual sensing region by:
dividing the touch screen into a plurality of subareas;
sampling touch operation of a user on the touch screen in the process of changing the opening and closing state of the touch screen, and counting the number of times that each sub-area obtained by dividing is touched by the touch operation;
and selecting a preset number of sub-areas as virtual sensing areas according to the number of times each sub-area is touched and in the order from high to low.
In one possible implementation, the preset sensing area includes a virtual sensing area, and when the instructions are executed by the device, after the device performs the step in which the opening and closing state of the foldable touch screen starts to change, the device further performs the following steps:
acquiring a display scene on the touch screen when the opening and closing state of the touch screen starts to change, and acquiring a virtual sensing area corresponding to the display scene; wherein, the virtual sensing area corresponding to the display scene is preset;
accordingly, when the instructions are executed by the device, the step of detecting the user's operation in the preset sensing area includes:
and detecting the operation of the user in the virtual sensing area corresponding to the display scene.
In one possible implementation, when the instructions are executed by the device, after the device performs the step of setting the touch screen to the disabled state, the device further performs the following step:
and displaying a preset User Interface (UI) effect on the touch screen, wherein the preset UI effect is used for prompting a user that the touch screen is in a disabled state.
In one possible implementation, when the instructions are executed by the device, after the device performs the step of setting the touch screen to the disabled state, the device further performs the following step:
And in a preset time period after the disabled state of the touch screen is ended, maintaining the UI effect on the touch screen.
In one possible implementation, the instructions, when executed by the device, cause the device to further perform the steps of:
and in the process of changing the opening and closing state, if a change of the opening and closing state of the touch screen in the direction opposite to the current change direction is detected, ending the process triggered by the user during the current change.
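For illustration, the reversal detection described above could be sketched by watching the direction of successive fold-angle readings; when the direction flips, a callback ends the process triggered during the current change. The angle-based mechanism and all identifiers are assumptions introduced here and are not taken from the embodiments.

    // Sketch: detect that the opening/closing direction has reversed and
    // invoke a callback that ends the user-triggered process.
    class FoldDirectionMonitor(private val onReversal: () -> Unit) {
        private var lastAngle: Float? = null
        private var direction = 0   // +1 opening, -1 closing, 0 unknown

        fun onAngle(angle: Float) {
            val prev = lastAngle
            lastAngle = angle
            if (prev == null) return
            val d = when {
                angle > prev -> 1
                angle < prev -> -1
                else -> 0
            }
            if (d != 0 && direction != 0 && d != direction) onReversal()
            if (d != 0) direction = d
        }
    }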
The electronic device shown in fig. 13 may be a terminal device or a circuit device built in the terminal device. The apparatus may be used to perform the functions/steps in the methods provided by the embodiments of fig. 2A-11 of the present application.
The electronic device 1300 may include a processor 1310, an external memory interface 1320, an internal memory 1321, a universal serial bus (universal serial bus, USB) interface 1330, a charge management module 1340, a power management module 1341, a battery 1342, an antenna 1, an antenna 2, a mobile communication module 1350, a wireless communication module 1360, an audio module 1370, a speaker 1370A, a receiver 1370B, a microphone 1370C, an earphone interface 1370D, a sensor module 1380, keys 1390, a motor 1391, an indicator 1392, a camera 1393, a display 1394, and a subscriber identification module (subscriber identification module, SIM) card interface 1395, etc. The sensor module 1380 may include, among other things, a pressure sensor 1380A, a gyroscope sensor 1380B, a barometric sensor 1380C, a magnetic sensor 1380D, an acceleration sensor 1380E, a distance sensor 1380F, a proximity light sensor 1380G, a fingerprint sensor 1380H, a temperature sensor 1380J, a touch sensor 1380K, an ambient light sensor 1380L, a bone conduction sensor 1380M, and the like.
It should be understood that the structure illustrated in the embodiments of the present invention does not constitute a specific limitation on the electronic device 1300. In other embodiments of the present application, the electronic device 1300 may include more or fewer components than those illustrated, may combine certain components, may split certain components, or may arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 1310 may include one or more processing units, such as: the processor 1310 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in processor 1310 for storing instructions and data. In some embodiments, the memory in processor 1310 is a cache memory. The memory may hold instructions or data that the processor 1310 has just used or recycled. If the processor 1310 needs to reuse the instruction or data, it may be called directly from the memory. Repeated accesses are avoided, reducing the latency of the processor 1310, and thus improving the efficiency of the system.
In some embodiments, the processor 1310 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 1310 may contain multiple sets of I2C buses. The processor 1310 may be coupled to the touch sensor 1380K, the charger, the flash, the camera 1393, etc., respectively, through different I2C bus interfaces. For example: the processor 1310 may be coupled to the touch sensor 1380K through an I2C interface, such that the processor 1310 communicates with the touch sensor 1380K through the I2C bus interface to implement the touch function of the electronic device 1300.
The I2S interface may be used for audio communication. In some embodiments, the processor 1310 may contain multiple sets of I2S buses. The processor 1310 may be coupled to the audio module 1370 through an I2S bus to enable communication between the processor 1310 and the audio module 1370. In some embodiments, the audio module 1370 may transmit an audio signal to the wireless communication module 1360 through the I2S interface to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 1370 and the wireless communication module 1360 may be coupled through a PCM bus interface. In some embodiments, the audio module 1370 may also transmit an audio signal to the wireless communication module 1360 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 1310 with the wireless communication module 1360. For example: the processor 1310 communicates with a bluetooth module in the wireless communication module 1360 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 1370 may transmit an audio signal to the wireless communication module 1360 through a UART interface to realize a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect processor 1310 to peripheral devices such as display 1394, camera 1393, etc. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 1310 and camera 1393 communicate via a CSI interface, implementing the photographing functions of electronic device 1300. The processor 1310 communicates with the display screen 1394 via a DSI interface to implement the display functions of the electronic device 1300.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect processor 1310 with camera 1393, display 1394, wireless communication module 1360, audio module 1370, sensor module 1380, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 1330 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 1330 may be used to connect a charger to charge the electronic device 1300, or may be used to transfer data between the electronic device 1300 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
It should be understood that the connection between the modules illustrated in the embodiments of the present invention is merely illustrative, and is not meant to limit the structure of the electronic device 1300. In other embodiments of the present application, the electronic device 1300 may also employ different interfaces in the above embodiments, or a combination of multiple interfaces.
The charge management module 1340 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 1340 may receive charging inputs of a wired charger through the USB interface 1330. In some wireless charging embodiments, the charge management module 1340 may receive wireless charging inputs through a wireless charging coil of the electronic device 1300. The charging management module 1340 charges the battery 1342 and can also supply power to the electronic device through the power management module 1341.
The power management module 1341 is used to connect the battery 1342, the charge management module 1340 and the processor 1310. The power management module 1341 receives input from the battery 1342 and/or the charge management module 1340, and provides power to the processor 1310, the internal memory 1321, the display 1394, the camera 1393, the wireless communication module 1360, and so forth. The power management module 1341 may also be used to monitor battery capacity, battery cycle times, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 1341 may also be provided in the processor 1310. In other embodiments, the power management module 1341 and the charge management module 1340 may be provided in the same device.
The wireless communication functions of the electronic device 1300 may be implemented by the antenna 1, the antenna 2, the mobile communication module 1350, the wireless communication module 1360, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 1300 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 1350 may provide a solution for wireless communications, including 2G/3G/4G/5G, as applied to the electronic device 1300. The mobile communication module 1350 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 1350 may receive electromagnetic waves from the antenna 1, filter, amplify the received electromagnetic waves, and transmit the electromagnetic waves to a modem processor for demodulation. The mobile communication module 1350 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 for radiation. In some embodiments, at least some of the functional modules of the mobile communication module 1350 may be disposed in the processor 1310. In some embodiments, at least some of the functional modules of the mobile communication module 1350 may be provided in the same device as at least some of the modules of the processor 1310.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 1370A, the receiver 1370B, and the like), or displays images or videos through the display screen 1394. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 1350 or other functional modules, independent of the processor 1310.
The wireless communication module 1360 may provide solutions for wireless communication including wireless local area networks (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) networks), bluetooth (BT), global navigation satellite systems (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 1300. The wireless communication module 1360 may be one or more devices integrating at least one communication processing module. The wireless communication module 1360 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 1310. The wireless communication module 1360 may also receive signals to be transmitted from the processor 1310, frequency modulate them, amplify them, and convert them to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 1350 of electronic device 1300 are coupled, and antenna 2 and wireless communication module 1360 are coupled, such that electronic device 1300 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 1300 implements display functions through a GPU, a display screen 1394, an application processor, and the like. The GPU is a microprocessor for processing images and is connected with the display screen 1394 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 1310 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 1394 is used for displaying images, videos, and the like. The display screen 1394 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device 1300 may include 1 or N display screens 1394, where N is a positive integer greater than 1.
The electronic device 1300 can realize a photographing function through an ISP, a camera 1393, a video codec, a GPU, a display screen 1394, an application processor, and the like.
The ISP is used to process the data fed back by camera 1393. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 1393.
Camera 1393 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 1300 may include 1 or N cameras 1393, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals; besides digital image signals, it can also process other digital signals. For example, when the electronic device 1300 selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, and so on.
Video codecs are used to compress or decompress digital video. The electronic device 1300 may support one or more video codecs, so that the electronic device 1300 can play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 1300 may be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 1320 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 1300. The external memory card communicates with the processor 1310 via an external memory interface 1320 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 1321 may be used to store computer-executable program code that includes instructions. The internal memory 1321 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 1300 (e.g., audio data, phonebook, etc.), and so forth. In addition, the internal memory 1321 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 1310 performs various functional applications of the electronic device 1300, as well as data processing, by executing instructions stored in the internal memory 1321, and/or instructions stored in a memory provided in the processor.
The electronic device 1300 may implement audio functions through an audio module 1370, a speaker 1370A, a receiver 1370B, a microphone 1370C, an earphone interface 1370D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 1370 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 1370 may also be used to encode and decode audio signals. In some embodiments, the audio module 1370 may be provided in the processor 1310, or a part of functional modules of the audio module 1370 may be provided in the processor 1310.
Speakers 1370A, also known as "horns," are used to convert audio electrical signals into sound signals. The electronic device 1300 may listen to music, or to hands-free conversations, through the speaker 1370A.
The receiver 1370B, also referred to as an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 1300 is answering a call or a voice message, the voice can be heard by placing the receiver 1370B close to the ear.
The microphone 1370C, also called a "mike" or "mic", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can speak close to the microphone 1370C, inputting the sound signal into the microphone 1370C. The electronic device 1300 may be provided with at least one microphone 1370C. In other embodiments, the electronic device 1300 may be provided with two microphones 1370C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 1300 may also be provided with three, four, or more microphones 1370C to implement sound signal collection, noise reduction, sound source identification, directional recording, and the like.
The earphone interface 1370D is used to connect a wired earphone. Headset interface 1370D may be USB interface 1330 or a 3.5mm open mobile electronic device platform (open mobile terminal platform, OMTP) standard interface, american cellular telecommunications industry Association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 1380A is configured to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 1380A may be disposed on the display screen 1394. There are many types of pressure sensors 1380A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates with conductive material. When a force is applied to the pressure sensor 1380A, the capacitance between the electrodes changes, and the electronic device 1300 determines the pressure intensity from the change in capacitance. When a touch operation is applied to the display screen 1394, the electronic device 1300 detects the intensity of the touch operation through the pressure sensor 1380A. The electronic device 1300 may also calculate the location of the touch based on the detection signal of the pressure sensor 1380A. In some embodiments, touch operations that act on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
The gyro sensor 1380B may be used to determine the motion posture of the electronic device 1300. In some embodiments, the angular velocity of the electronic device 1300 about three axes (i.e., the x, y, and z axes) may be determined through the gyro sensor 1380B. The gyro sensor 1380B may be used for image stabilization during photographing. For example, when the shutter is pressed, the gyro sensor 1380B detects the shake angle of the electronic device 1300, calculates the distance the lens module needs to compensate according to the angle, and lets the lens counteract the shake of the electronic device 1300 through reverse motion, thereby realizing image stabilization. The gyro sensor 1380B may also be used in navigation and somatosensory game scenarios.
The air pressure sensor 1380C is used to measure air pressure. In some embodiments, the electronic device 1300 calculates altitude from barometric pressure values measured by the barometric pressure sensor 1380C, aiding in positioning and navigation.
The magnetic sensor 1380D includes a Hall sensor. The electronic device 1300 may detect the opening and closing of a flip holster using the magnetic sensor 1380D. In some embodiments, when the electronic device 1300 is a flip machine, the electronic device 1300 may detect the opening and closing of the flip according to the magnetic sensor 1380D. Features such as automatic unlocking upon flip opening are then set according to the detected opening or closing state of the holster or of the flip.
The acceleration sensor 1380E may detect the magnitude of acceleration of the electronic device 1300 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 1300 is stationary. It may also be used to recognize the posture of the electronic device, and is applied in landscape/portrait switching, pedometers, and other applications.
A distance sensor 1380F for measuring distance. The electronic device 1300 may measure the distance by infrared or laser. In some embodiments, the electronic device 1300 may range using the distance sensor 1380F to achieve fast focus.
The proximity light sensor 1380G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 1300 emits infrared light outward through the light emitting diode. The electronic device 1300 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it may be determined that an object is in the vicinity of the electronic device 1300. When insufficient reflected light is detected, the electronic device 1300 may determine that there is no object in the vicinity of the electronic device 1300. The electronic device 1300 may detect that the user holds the electronic device 1300 near the ear to talk using the proximity light sensor 1380G, so as to automatically extinguish the screen for power saving purposes. The proximity light sensor 1380G may also be used in holster mode, pocket mode to automatically unlock and lock the screen.
The ambient light sensor 1380L is used to sense ambient light levels. The electronic device 1300 can adaptively adjust the display 1394 brightness based on perceived ambient light levels. The ambient light sensor 1380L may also be used to automatically adjust white balance during photographing. The ambient light sensor 1380L may also cooperate with the proximity light sensor 1380G to detect if the electronic device 1300 is in a pocket to prevent false touches.
The fingerprint sensor 1380H is used to collect a fingerprint. The electronic device 1300 may utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer the incoming call, etc.
The temperature sensor 1380J is used to detect temperature. In some embodiments, the electronic device 1300 executes a temperature processing strategy using the temperature detected by the temperature sensor 1380J. For example, when the temperature reported by the temperature sensor 1380J exceeds a threshold, the electronic device 1300 reduces the performance of a processor located near the temperature sensor 1380J, in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 1300 heats the battery 1342 to avoid an abnormal shutdown of the electronic device 1300 caused by low temperature. In other embodiments, when the temperature is below a further threshold, the electronic device 1300 boosts the output voltage of the battery 1342 to avoid an abnormal shutdown caused by low temperature.
The touch sensor 1380K is also referred to as a "touch device". The touch sensor 1380K may be disposed on the display screen 1394, and the touch sensor 1380K and the display screen 1394 form a touch screen, which is also referred to as a "touch screen". The touch sensor 1380K is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation can be provided through the display screen 1394. In other embodiments, touch sensor 1380K may also be disposed on a surface of electronic device 1300 other than where display 1394 is located.
The bone conduction sensor 1380M may acquire a vibration signal. In some embodiments, the bone conduction sensor 1380M may acquire the vibration signal of the bone that vibrates when a person speaks. The bone conduction sensor 1380M may also contact the pulse of a human body to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 1380M may also be provided in a headset, combined into a bone conduction headset. The audio module 1370 may parse out a voice signal based on the vibration signal of the vibrating voice-part bone acquired by the bone conduction sensor 1380M, so as to implement a voice function. The application processor may parse heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 1380M, so as to implement a heart rate detection function.
The keys 1390 include a power key, volume keys, and the like. The keys 1390 may be mechanical keys or touch keys. The electronic device 1300 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 1300.
The motor 1391 may generate vibration alerts. The motor 1391 may be used for incoming-call vibration alerts as well as touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects, and touch operations acting on different areas of the display screen 1394 may likewise correspond to different vibration feedback effects. Different application scenarios (e.g., time reminders, received messages, alarm clocks, games) may also correspond to different vibration feedback effects. The touch vibration feedback effect may additionally support customization.
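A small sketch of scenario-dependent vibration feedback, assuming a lookup table of waveform tuples; none of the scenario names or timings come from the patent.

```python
# Illustrative sketch: scenario names and (vibrate_ms, pause_ms) tuples are hypothetical.
VIBRATION_EFFECTS = {
    "incoming_call": [(500, 200)] * 3,  # long pulses, repeated
    "alarm":         [(800, 400)] * 2,
    "touch_camera":  [(20, 0)],         # short tick for shutter feedback
    "touch_default": [(10, 0)],
}

def feedback_for(scenario: str) -> list[tuple[int, int]]:
    """Look up the vibration waveform for a scenario, with a default tick."""
    return VIBRATION_EFFECTS.get(scenario, VIBRATION_EFFECTS["touch_default"])

print(feedback_for("incoming_call"))  # [(500, 200), (500, 200), (500, 200)]
print(feedback_for("unknown"))        # [(10, 0)]
```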
The indicator 1392 may be an indicator light, which may be used to indicate a charging state or a change in battery level, or to indicate a message, a missed call, a notification, and the like.
The SIM card interface 1395 is used to connect a SIM card. A SIM card may be inserted into or removed from the SIM card interface 1395 to bring it into or out of contact with the electronic device 1300. The electronic device 1300 may support one or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 1395 may support Nano-SIM cards, Micro-SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 1395 at the same time; the cards may be of the same type or of different types. The SIM card interface 1395 may also be compatible with different types of SIM cards and with external memory cards. The electronic device 1300 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the electronic device 1300 employs an eSIM, i.e., an embedded SIM card, which can be embedded in the electronic device 1300 and cannot be separated from it.
It should be appreciated that the electronic device 1300 shown in fig. 13 is capable of implementing the processes of the methods provided by the embodiments shown in fig. 2A to 11 of the present application. The operations and/or functions of the modules in the electronic device 1300 respectively implement the corresponding flows in the above method embodiments; for details, refer to the descriptions in the method embodiments shown in fig. 2A to 11, which are not repeated here.
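For readers without access to figs. 2A to 11, the following Python sketch illustrates the disabled-state lifecycle that the claims below describe: the touch screen stops responding to touches when the fold state starts to change and the preset sensing area is touched, and the disabled state ends once the operation's effective time elapses or the state change terminates. All names and the timer structure are illustrative assumptions, not the patent's implementation.

```python
import time

class FoldAntiTouchController:
    """Minimal sketch of the claimed disabled-state lifecycle."""

    def __init__(self, effective_time_s: float = 2.0):
        # Preset duration of the disabled state for the triggering operation.
        self.effective_time_s = effective_time_s
        self.disabled_since = None

    def on_fold_state_change_start(self, touched_sensing_area: bool) -> None:
        # The opening/closing state has started to change and the user touched
        # the preset sensing area: enter the disabled state.
        if touched_sensing_area and self.disabled_since is None:
            self.disabled_since = time.monotonic()

    def is_touch_disabled(self, fold_change_terminated: bool) -> bool:
        """Return True while the touch screen should ignore touch operations."""
        if self.disabled_since is None:
            return False
        expired = time.monotonic() - self.disabled_since >= self.effective_time_s
        # The three conditions in claim 1 reduce to: end the disabled state
        # once the effective time is over or the state change has terminated.
        if expired or fold_change_terminated:
            self.disabled_since = None
            return False
        return True

ctrl = FoldAntiTouchController(effective_time_s=0.1)
ctrl.on_fold_state_change_start(touched_sensing_area=True)
print(ctrl.is_touch_disabled(fold_change_terminated=False))  # True
time.sleep(0.15)
print(ctrl.is_touch_disabled(fold_change_terminated=False))  # False: time elapsed
```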
The present application also provides an electronic device, which includes a storage medium and a central processing unit. The storage medium may be a nonvolatile storage medium storing a computer executable program. The central processing unit is connected to the nonvolatile storage medium and executes the computer executable program to implement the methods provided by the embodiments shown in fig. 2A to 11 of the present application.
Embodiments of the present application also provide a computer-readable storage medium having a computer program stored therein, which when run on a computer, causes the computer to perform the methods provided by the embodiments shown in fig. 2A to 11 of the present application.
Embodiments of the present application also provide a computer program product comprising a computer program which, when run on a computer, causes the computer to perform the methods provided by the embodiments of fig. 2A-11 of the present application.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between objects and indicates that three relationships are possible: for example, "A and/or B" may mean A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the objects before and after it. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, "at least one of a, b and c" may represent: a; b; c; a and b; a and c; b and c; or a, b and c, where a, b and c may each be singular or plural.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided herein, any of the functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific embodiments of the present application; any change or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall fall within the protection scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (19)

1. A method for controlling a screen state, applied to an electronic device, characterized in that the method comprises:
when an opening and closing state of a foldable touch screen starts to change and an operation of a user is detected in a preset sensing area, setting the touch screen to a disabled state, the disabled state being a state in which the touch screen does not respond to touch operations of the user;
ending the disabled state of the touch screen when the effective time of the operation is over and the change of the opening and closing state has not terminated, or when the effective time of the operation is not over and the change of the opening and closing state has terminated, or when the effective time of the operation is over and the change of the opening and closing state has terminated, wherein the effective time of the operation is a duration of the disabled state preset for the operation;
wherein the preset sensing area comprises a physical sensing area and/or a virtual sensing area; the physical sensing area is a physical area disposed outside the touch screen and capable of detecting the operation of the user, the virtual sensing area is a partial area on the touch screen, and the physical sensing area comprises a push switch, a button, or a tactile area.
2. The method according to claim 1, wherein the preset sensing area comprises the virtual sensing area, and after the opening and closing state of the foldable touch screen starts to change, the method further comprises:
acquiring preset virtual sensing areas, and acquiring the display scene on the touch screen when the opening and closing state of the touch screen starts to change;
correspondingly, the detecting the operation of the user in the preset sensing area comprises:
selecting, from the acquired virtual sensing areas, a first virtual sensing area that does not overlap any user interface (UI) control in the display scene;
detecting the operation of the user in the first virtual sensing area.
3. The method according to claim 1 or 2, wherein the method of presetting the virtual sensing area comprises:
displaying a drawing interface to the user;
acquiring an area drawn by the user in the drawing interface, and setting the area as the virtual sensing area.
4. The method according to claim 3, wherein the displaying a drawing interface to the user comprises:
sampling touch operations of the user on the touch screen during changes of the opening and closing state of the touch screen;
forming a heat map or a region map from the sampled touch operations;
presenting the heat map or the region map to the user when the drawing interface is displayed.
5. The method according to claim 1 or 2, wherein the method of presetting the virtual sensing area comprises:
dividing the touch screen into a plurality of sub-areas;
sampling touch operations of the user on the touch screen during changes of the opening and closing state of the touch screen, and counting the number of times each of the divided sub-areas is touched by the touch operations;
selecting a preset number of sub-areas as the virtual sensing areas in descending order of the number of times each sub-area is touched.
6. The method according to claim 1, wherein the preset sensing area comprises the virtual sensing area, and after the opening and closing state of the foldable touch screen starts to change, the method further comprises:
acquiring the display scene on the touch screen when the opening and closing state of the touch screen starts to change, and acquiring the virtual sensing area corresponding to the display scene, wherein the virtual sensing area corresponding to the display scene is preset;
correspondingly, the detecting the operation of the user in the preset sensing area comprises:
detecting the operation of the user in the virtual sensing area corresponding to the display scene.
7. The method according to claim 1, 2 or 6, further comprising, after the setting the touch screen to the disabled state:
displaying a preset user interface (UI) effect on the touch screen, the preset UI effect being used to prompt the user that the touch screen is in the disabled state.
8. The method according to claim 7, further comprising, after the setting the touch screen to the disabled state:
keeping the UI effect displayed on the touch screen for a preset time period after the disabled state of the touch screen ends.
9. The method according to claim 1, 2 or 6, further comprising:
during the change of the opening and closing state, if it is detected that the opening and closing state of the touch screen changes in a direction opposite to the current change direction, ending the process triggered by the user in the current change.
10. An electronic device, comprising:
a touch screen; one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory and comprise instructions which, when executed by the device, cause the device to perform the following steps:
when an opening and closing state of a foldable touch screen starts to change and an operation of a user is detected in a preset sensing area, setting the touch screen to a disabled state, the disabled state being a state in which the touch screen does not respond to touch operations of the user;
ending the disabled state of the touch screen when the effective time of the operation is over and the change of the opening and closing state has not terminated, or when the effective time of the operation is not over and the change of the opening and closing state has terminated, or when the effective time of the operation is over and the change of the opening and closing state has terminated, wherein the effective time of the operation is a duration of the disabled state preset for the operation;
wherein the preset sensing area comprises a physical sensing area and/or a virtual sensing area; the physical sensing area is a physical area disposed outside the touch screen and capable of detecting the operation of the user, the virtual sensing area is a partial area on the touch screen, and the physical sensing area comprises a push switch, a button, or a tactile area.
11. The electronic device according to claim 10, wherein the preset sensing area comprises the virtual sensing area, and the instructions, when executed by the device, cause the device to perform the following steps after the opening and closing state of the foldable touch screen starts to change:
acquiring preset virtual sensing areas, and acquiring the display scene on the touch screen when the opening and closing state of the touch screen starts to change;
correspondingly, the instructions, when executed by the device, cause the device to perform the step of detecting the operation of the user in the preset sensing area by:
selecting, from the acquired virtual sensing areas, a first virtual sensing area that does not overlap any user interface (UI) control in the display scene;
detecting the operation of the user in the first virtual sensing area.
12. The electronic device according to claim 10 or 11, wherein the instructions, when executed by the device, cause the device to perform the method of presetting the virtual sensing area, comprising the steps of:
displaying a drawing interface to the user;
acquiring an area drawn by the user in the drawing interface, and setting the area as the virtual sensing area.
13. The electronic device according to claim 12, wherein the instructions, when executed by the device, cause the device to perform the step of displaying a drawing interface to the user by:
sampling touch operations of the user on the touch screen during changes of the opening and closing state of the touch screen;
forming a heat map or a region map from the sampled touch operations;
presenting the heat map or the region map to the user when the drawing interface is displayed.
14. The electronic device according to claim 10 or 11, wherein the instructions, when executed by the device, cause the device to perform the method of presetting the virtual sensing area, comprising the steps of:
dividing the touch screen into a plurality of sub-areas;
sampling touch operations of the user on the touch screen during changes of the opening and closing state of the touch screen, and counting the number of times each of the divided sub-areas is touched by the touch operations;
selecting a preset number of sub-areas as the virtual sensing areas in descending order of the number of times each sub-area is touched.
15. The electronic device according to claim 10, wherein the preset sensing area comprises the virtual sensing area, and the instructions, when executed by the device, cause the device to perform the following steps after the opening and closing state of the foldable touch screen starts to change:
acquiring the display scene on the touch screen when the opening and closing state of the touch screen starts to change, and acquiring the virtual sensing area corresponding to the display scene, wherein the virtual sensing area corresponding to the display scene is preset;
correspondingly, the instructions, when executed by the device, cause the device to perform the step of detecting the operation of the user in the preset sensing area by:
detecting the operation of the user in the virtual sensing area corresponding to the display scene.
16. The electronic device according to any one of claims 10, 11 or 15, wherein the instructions, when executed by the device, cause the device to further perform, after the step of setting the touch screen to the disabled state:
displaying a preset user interface (UI) effect on the touch screen, the preset UI effect being used to prompt the user that the touch screen is in the disabled state.
17. The electronic device according to claim 16, wherein the instructions, when executed by the device, cause the device to further perform, after the step of setting the touch screen to the disabled state:
keeping the UI effect displayed on the touch screen for a preset time period after the disabled state of the touch screen ends.
18. The electronic device according to any one of claims 10, 11 or 15, wherein the instructions, when executed by the device, cause the device to further perform the following step:
during the change of the opening and closing state, if it is detected that the opening and closing state of the touch screen changes in a direction opposite to the current change direction, ending the process triggered by the user in the current change.
19. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 9.
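Claims 5 and 14 describe presetting virtual sensing areas by counting, per sub-area, how often the user touches the screen while the opening and closing state changes. The following sketch illustrates that counting step under assumed grid and screen dimensions; every name and constant here is hypothetical, not taken from the patent.

```python
# Illustrative sketch of the sub-area counting in claims 5 and 14:
# grid size, screen size, and sample format are all assumptions.
from collections import Counter

def preset_virtual_sensing_areas(touch_points, cols=4, rows=8,
                                 screen_w=1080, screen_h=2340, top_n=2):
    """Divide the touch screen into cols x rows sub-areas, count how often
    each sub-area is touched during fold-state changes, and return the
    top_n most-touched sub-areas as virtual sensing areas."""
    counts = Counter()
    for x, y in touch_points:  # touches sampled during opening/closing changes
        col = min(int(x * cols / screen_w), cols - 1)
        row = min(int(y * rows / screen_h), rows - 1)
        counts[(col, row)] += 1
    return [cell for cell, _ in counts.most_common(top_n)]

samples = [(100, 2200), (120, 2250), (900, 2100), (110, 2190), (950, 2150)]
print(preset_virtual_sensing_areas(samples))  # [(0, 7), (3, 7)]
```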
CN202010379242.1A 2020-05-07 2020-05-07 Screen state control method and electronic equipment Active CN113625865B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010379242.1A CN113625865B (en) 2020-05-07 2020-05-07 Screen state control method and electronic equipment
PCT/CN2021/085662 WO2021223560A1 (en) 2020-05-07 2021-04-06 Screen state control method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010379242.1A CN113625865B (en) 2020-05-07 2020-05-07 Screen state control method and electronic equipment

Publications (2)

Publication Number Publication Date
CN113625865A CN113625865A (en) 2021-11-09
CN113625865B true CN113625865B (en) 2023-06-06

Family

ID=78376944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010379242.1A Active CN113625865B (en) 2020-05-07 2020-05-07 Screen state control method and electronic equipment

Country Status (2)

Country Link
CN (1) CN113625865B (en)
WO (1) WO2021223560A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116301424B (en) * 2023-03-02 2023-10-31 瑞态常州高分子科技有限公司 Touch recognition system based on pressure touch sensor

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104007911A (en) * 2013-02-25 2014-08-27 三星电子株式会社 Electronic apparatus, method of controlling the same, and computer-readable recording medium
CN107636573A (en) * 2016-07-27 2018-01-26 深圳市柔宇科技有限公司 Prevent display interface control method, device and the terminal of maloperation
CN109710111A (en) * 2018-12-30 2019-05-03 联想(北京)有限公司 A kind of false-touch prevention method and electronic equipment
CN109871147A (en) * 2019-02-22 2019-06-11 华为技术有限公司 A kind of response method and electronic equipment of touch screen

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5351006B2 (en) * 2009-12-24 2013-11-27 京セラ株式会社 Portable terminal and display control program
CN105700709B (en) * 2016-02-25 2019-03-01 努比亚技术有限公司 A kind of mobile terminal and control mobile terminal can not touch area method
CN106527799B (en) * 2016-10-26 2020-01-24 南通奥拓自控设备有限公司 Method and device for preventing key from being misoperated
CN106527818B (en) * 2016-12-16 2019-07-02 Oppo广东移动通信有限公司 Control method, device and the mobile terminal of touch operation on a kind of mobile terminal
CN110658936A (en) * 2018-06-29 2020-01-07 中兴通讯股份有限公司 Edge suppression area control method and device, mobile terminal and storage medium
CN109840061A (en) * 2019-01-31 2019-06-04 华为技术有限公司 The method and electronic equipment that control screen is shown
CN109992189B (en) * 2019-02-22 2021-05-11 华为技术有限公司 Screen control method, electronic device and storage medium
WO2020257979A1 (en) * 2019-06-24 2020-12-30 深圳市柔宇科技有限公司 Method for preventing accidental touch, electronic apparatus, and non-volatile computer-readable storage medium
CN110837318B (en) * 2019-10-29 2023-09-19 捷开通讯(深圳)有限公司 Anti-false touch method and device for folding screen of mobile terminal and storage medium

Also Published As

Publication number Publication date
CN113625865A (en) 2021-11-09
WO2021223560A1 (en) 2021-11-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant