CN112783406B - Operation execution method and device and electronic equipment - Google Patents

Operation execution method and device and electronic equipment

Info

Publication number
CN112783406B
Authority
CN
China
Prior art keywords
target
split screen
input
position information
split
Prior art date
Legal status
Active
Application number
CN202110106657.6A
Other languages
Chinese (zh)
Other versions
CN112783406A (en)
Inventor
潘宣宇 (Pan Xuanyu)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202110106657.6A
Publication of CN112783406A
Application granted
Publication of CN112783406B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces

Abstract

The application discloses an operation execution method, an operation execution apparatus, and an electronic device, belonging to the field of communication technology, and addresses the problem of poor convenience when an electronic device executes operations. The operation execution method includes the following steps: receiving a first input of a user in the case that a display screen of the electronic device includes N split-screen areas, N being a positive integer greater than 1; in response to the first input, acquiring target position information corresponding to the first input; and determining a target split-screen area from the N split-screen areas according to the target position information and the position information of the N split-screen areas, and executing a target operation corresponding to the first input on the target split-screen area. The operation execution method provided by the embodiments of the application can be applied to the process in which the electronic device executes, according to a gesture input of the user, the operation corresponding to that gesture input.

Description

Operation execution method and device and electronic equipment
Technical Field
The application belongs to the field of communication technology, and particularly relates to an operation execution method, an operation execution apparatus, and an electronic device.
Background
Generally, an electronic device can quickly perform the operation corresponding to a user's gesture input, simplifying the user's interaction. For example, a user may perform a screen capture gesture on the display screen of the electronic device, and the electronic device then directly performs a screen capture operation on the entire display area of the display screen according to that gesture.
However, when the electronic device is in a split-screen display state, the user may want to trigger a screen capture of only part of the display area, while the electronic device still captures the entire display area according to the screen capture gesture. The accuracy with which the electronic device performs the operation the user actually wants is therefore low.
Thus, such electronic devices offer poor convenience of operation.
Disclosure of Invention
Embodiments of the application aim to provide an operation execution method, an operation execution apparatus, and an electronic device that solve the problem of poor convenience when an electronic device executes operations.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an operation execution method, where the method includes: receiving a first input of a user under the condition that a display screen of the electronic equipment comprises N split screen areas; n is a positive integer greater than 1; responding to the first input, and acquiring target position information corresponding to the first input; and determining a target split screen area from the N split screen areas according to the target position information and the position information of the N split screen areas, and executing a target operation corresponding to the first input on the target split screen area.
In a second aspect, an embodiment of the present application provides an operation execution apparatus, including: a receiving module, an obtaining module, a determining module, and an executing module. The receiving module is configured to receive a first input of a user in the case that a display screen of the operation execution apparatus includes N split-screen areas, N being a positive integer greater than 1. The obtaining module is configured to obtain, in response to the first input received by the receiving module, target position information corresponding to the first input. The determining module is configured to determine a target split-screen area from the N split-screen areas according to the target position information obtained by the obtaining module and the position information of the N split-screen areas. The executing module is configured to execute a target operation corresponding to the first input on the target split-screen area determined by the determining module.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, in the case that the display screen of the electronic device includes N split-screen areas, the electronic device may obtain, according to a first input of a user, target position information corresponding to the first input, and determine, according to the target position information and the position information of the N split-screen areas, a target split-screen area from the N split-screen areas, so as to perform the operation corresponding to the first input on that target split-screen area. When the electronic device is in the split-screen display state, it can acquire the position information corresponding to a user's gesture input, determine from the N split-screen areas the split-screen area the user intends, and execute the operation corresponding to the gesture input on that area only, rather than on all display areas of the display screen. The accuracy with which the electronic device performs the operation the user wants can therefore be improved, and with it the convenience with which the electronic device executes operations.
Drawings
FIG. 1 is the first schematic diagram of an operation execution method provided by an embodiment of the present application;
FIG. 2 is the first schematic diagram of an example of a mobile phone interface according to an embodiment of the present application;
FIG. 3 is the second schematic diagram of an operation execution method according to an embodiment of the present application;
FIG. 4 is the second schematic diagram of an example of a mobile phone interface according to an embodiment of the present application;
FIG. 5 is the third schematic diagram of an operation execution method according to an embodiment of the present application;
FIG. 6 is the third schematic diagram of an example of a mobile phone interface according to an embodiment of the present application;
FIG. 7 is the fourth schematic diagram of an operation execution method according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an operation execution apparatus according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
FIG. 10 is a hardware schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. Moreover, the terms "first", "second", etc. are used in a generic sense and do not limit the number of objects; for example, a first object can be one object or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects before and after it.
The following describes in detail an operation execution method provided by the embodiments of the present application through specific embodiments and application scenarios thereof with reference to the accompanying drawings.
The operation execution method provided by the embodiment of the application can be applied to a scene that a user triggers the electronic equipment to execute the screen capture operation, and can also be applied to a scene that the user triggers the electronic equipment to display an application interface.
For a scenario in which a user triggers the electronic device to perform a screen capture operation: assume that, while the electronic device is in a split-screen display state, the user wants to trigger a screen capture through a gesture input. The user may perform a sliding input on the display screen of the electronic device (for example, a sliding input whose input trajectory is a circle), so that the electronic device determines the operation corresponding to that trajectory (i.e., a screen capture operation) and performs it on all split-screen areas of the display screen. However, the user may have wanted the screen capture to cover only some of the split-screen areas, while the electronic device captures all of them according to the sliding input, so the accuracy with which the electronic device performs the operation intended by the user is low.
However, in the embodiment of the present application, the electronic device may obtain the position information corresponding to the sliding input according to the sliding input of the user, determine the screen splitting area required by the user from all the screen splitting areas according to the position information corresponding to the sliding input and the position information of all the screen splitting areas, and perform the screen capturing operation on the screen splitting area required by the user, so that the accuracy of the electronic device in performing the operation intended by the user may be improved.
For a scenario in which a user triggers the electronic device to display an application interface, assuming that when the electronic device is in a split-screen display state, the user wants to trigger the electronic device to perform an operation of displaying a preset application interface (for example, the application interface 1) through a gesture input, the user may perform a sliding input (for example, an input trajectory of the sliding input is a rectangle) on the display screen of the electronic device, so that the electronic device may determine an operation (that is, an operation of displaying the application interface 1) corresponding to the input trajectory according to the input trajectory of the sliding input, and display the application interface 1 in each split-screen area in the display screen of the electronic device. However, since the user may want to trigger the electronic device to display the application interface 1 in a partial split screen area, and the electronic device may display the application interface 1 in each split screen area according to the sliding input, the accuracy of the electronic device performing the operation intended by the user is low.
However, in the embodiment of the application, the electronic device may determine the split screen area required by the user according to the position information corresponding to the sliding input of the user and the position information of all the split screen areas, and display the application interface 1 in the split screen area required by the user, so that the accuracy of the electronic device in executing the operation desired by the user may be improved.
Fig. 1 is a flowchart illustrating an operation execution method according to an embodiment of the present application. As shown in fig. 1, the operation execution method provided by the embodiment of the present application may include steps 101 to 103 described below.
Step 101, in the case that the display screen of the operation execution device includes N split screen areas, the operation execution device receives a first input of a user.
In the embodiment of the application, N is a positive integer greater than 1.
Optionally, in this embodiment of the application, when the user uses the operation execution device, the user may perform a screen splitting gesture on the display screen of the operation execution device, so that the operation execution device may perform a screen splitting operation corresponding to the screen splitting gesture according to the screen splitting gesture, that is, divide the display area of the display screen into N screen splitting areas, so that the user may perform a first input on the operation execution device.
In an embodiment of the present application, the first input is used to trigger the operation executing device to execute an operation.
Optionally, in this embodiment of the application, the first input may specifically be: a slide input by the user on the display screen, or a press input by the user on the display screen (e.g., a long press input), or a press input by the user on the back cover of the operation performing device (e.g., a double-click input), or the like. Of course, the first input may also be other types of inputs, which are not described herein in detail in this embodiment of the application.
Step 102, the operation execution device responds to the first input and acquires target position information corresponding to the first input.
Optionally, in this embodiment of the present application, while the target position information is obtained, the operation execution device may further obtain position information of M dividing lines of the N split screen areas.
Optionally, in this embodiment of the present application, M is a positive integer, and M < N.
Optionally, in this embodiment of the application, in response to the first input, the operation execution device acquires target position information corresponding to the first input and acquires position information of M dividing lines of the N split-screen areas when an operation (for example, a target operation in an embodiment described below) corresponding to the first input satisfies a first preset condition.
Further optionally, in this embodiment of the application, the first preset condition may be: the operation corresponding to the first input is not a first operation, i.e., not a global unified operation. The first operation may include at least one of the following: starting a camera, turning on a flashlight, entering split screen, exiting split screen, and the like; that is, the first operation may be an operation preset in the operation execution device.
It should be noted that the above "global unified operation" may be understood as: an operation that the operation execution device cannot execute for only part of the split-screen areas.
It can be understood that if the operation corresponding to the first input is not the first operation, the user may be considered to want to trigger the operation execution device to execute the operation on part of the split-screen areas; therefore, the operation execution device may acquire the target position information corresponding to the first input and the position information of the M dividing lines of the N split-screen areas.
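As an illustration, the check against the first preset condition might look like the following minimal Python sketch; the operation names and the helper function are hypothetical, not taken from the patent.

```python
# Hypothetical sketch: deciding whether a recognized gesture operation is a
# "global unified operation", i.e. one that cannot target a single area.
GLOBAL_OPERATIONS = {"start_camera", "start_flashlight",
                     "enter_split_screen", "exit_split_screen"}

def needs_position_lookup(operation: str) -> bool:
    # Only non-global operations require resolving which split-screen
    # area the first input fell in.
    return operation not in GLOBAL_OPERATIONS
```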
Optionally, in this embodiment of the application, the target position information may specifically be: coordinate information of the input position of the first input in a preset coordinate system, and the target position information may include one piece of position information or a plurality of pieces of position information.
It should be noted that the "preset coordinate system" may be: a plane rectangular coordinate system established by the operation execution device with an intersection point of two edge lines of the display screen as the origin, one of the two edge lines (for example, the lower edge line) as the X-axis, and the other (for example, the left edge line) as the Y-axis.
Further optionally, in this embodiment of the application, in a case that the first input is a press input, the target location information includes one piece of location information.
Further optionally, in this embodiment of the application, in a case that the first input is a press input of the user on the display screen, the operation performing device may detect an input position of the press input to acquire position information of the input position to acquire the target position information.
Further optionally, in this embodiment of the application, in a case that the first input is a pressing input of the user on the rear cover of the operation execution device, the operation execution device may detect at least two sensors of the operation execution device to determine a sensor corresponding to the pressing input, so that the operation execution device may determine, according to position information of the sensor, target position information to acquire the target position information.
Further optionally, in this embodiment of the application, in a case that the first input is a slide input, the target position information includes a plurality of position information.
Further optionally, in this embodiment of the application, the operation execution device may obtain an input trajectory of the sliding input, where the input trajectory includes at least one track point, and detect the at least one track point respectively to obtain the position information of each track point, so as to obtain the target position information.
Optionally, in this embodiment of the present application, each of the M dividing lines may be: a dividing line displayed by the operation execution device in a preset display area of the display screen according to the number of split-screen areas; or a dividing line displayed by the operation execution device, according to a user input, in the display area of the display screen corresponding to that input.
Optionally, in this embodiment of the application, the position information of each of the M dividing lines may specifically be its coordinate information, which is determined from the coordinate information of the two intersections of that dividing line with the edge lines of the display screen.
Optionally, in this embodiment of the application, the operation performing device may detect two intersection points of each boundary line respectively to obtain coordinate information of the two intersection points of each boundary line, and determine the coordinate information of each boundary line according to the coordinate information of the two intersection points of each boundary line respectively to obtain position information of M boundary lines.
For example, suppose the intersection of dividing line 1 with edge line 1 of the display screen is intersection 1, with coordinate information (X1, Y1), and the intersection of dividing line 1 with edge line 2 of the display screen is intersection 2, with coordinate information (X2, Y1). The operation execution device can then determine the coordinate information of dividing line 1 from the coordinate information (X1, Y1) of intersection 1 and (X2, Y1) of intersection 2, namely Y = Y1.
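As an illustration of this step, the following minimal Python sketch derives a dividing line's equation from its two edge intersections; the function name is hypothetical and the lines are assumed to be axis-aligned:

```python
def dividing_line_equation(p1, p2):
    """Derive a dividing line's equation from its two intersections with
    the screen edge lines, e.g. (X1, Y1) and (X2, Y1) give y = Y1."""
    (x1, y1), (x2, y2) = p1, p2
    if y1 == y2:   # horizontal dividing line
        return ("y", y1)
    if x1 == x2:   # vertical dividing line
        return ("x", x1)
    raise ValueError("a dividing line is expected to be axis-aligned")

print(dividing_line_equation((0.0, 1200.0), (1080.0, 1200.0)))  # ('y', 1200.0)
```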
Step 103, the operation execution device determines a target split-screen area from the N split-screen areas according to the target position information and the position information of the N split-screen areas, and executes a target operation corresponding to the first input on the target split-screen area.
Optionally, in this embodiment of the application, the operation executing apparatus may determine, according to the position information of the M dividing lines, the position information of the N split-screen areas, and then determine, according to the target position information and the position information (for example, a position range in an embodiment described below) of the N split-screen areas, a target split-screen area corresponding to the target position information.
Optionally, in this embodiment of the application, the operation executing device may determine the position ranges of the N split screen areas according to the position information of the M dividing lines, and then determine the target split screen area according to the target position information and the position ranges of the N split screen areas.
Further optionally, in this embodiment of the application, for each of the M dividing lines, the operation performing device may determine a position range of one split-screen area according to position information of the dividing line and position information of a target object to determine position ranges of the N split-screen areas, where the target object includes at least one of the following: a boundary line adjacent to the one boundary line, and an edge line of the display screen adjacent to the one boundary line.
Specifically, for each of the M dividing lines, the operation performing means may determine the position range of one split screen region based on one coordinate value (i.e., an abscissa value or an ordinate value) in the coordinate information of the one dividing line and one coordinate value (i.e., an abscissa value or an ordinate value) in the coordinate information of the target object, to determine the position ranges of the N split screen regions.
It is understood that the position range of each split screen region may include two coordinate value ranges, i.e., an abscissa value range, and an ordinate value range, respectively.
Further alternatively, in the embodiment of the present application, the operation execution device may determine the position range of the first split-screen area based on the coordinate information of the first of the M dividing lines and the edge lines of the display screen adjacent to it. It may then determine the position range of the second split-screen area based on the coordinate information of the second dividing line, the coordinate information of the dividing line adjacent to it (i.e., the first dividing line), and the edge lines of the display screen adjacent to the second dividing line; determine the position range of the third split-screen area based on the coordinate information of the third dividing line, the coordinate information of the dividing line adjacent to it (i.e., the second dividing line), and the edge lines of the display screen adjacent to the third dividing line; and so on, until the position range of the last split-screen area is determined based on the coordinate information of the last of the M dividing lines and the edge lines of the display screen adjacent to it.
For example, take a mobile phone as the operation execution device. As shown in fig. 2, the mobile phone is in the split-screen display state, and the display screen 10 includes four edge lines (for example, edge line 11, edge line 12, edge line 13, and edge line 14), whose coordinate information is Y = Yz, X = Xz, Y = 0, and X = 0, respectively. The display screen 10 includes N split-screen areas (e.g., split-screen area 15 and split-screen area 16), separated by M dividing lines (e.g., dividing line 17), and the coordinate information of dividing line 17 is Y = Y1. The mobile phone can then determine the position range of split-screen area 15 based on dividing line 17 and the edge lines of the display screen 10 adjacent to it (i.e., edge line 11, edge line 12, and edge line 14), namely the abscissa value range [0, Xz] and the ordinate value range [Y1, Yz]; and determine the position range of split-screen area 16 based on dividing line 17 and the edge lines of the display screen 10 adjacent to it (i.e., edge line 12, edge line 13, and edge line 14), namely the abscissa value range [0, Xz] and the ordinate value range [0, Y1].
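A minimal Python sketch of this range computation, assuming horizontal dividing lines only and hypothetical names and values:

```python
def split_screen_ranges(dividing_ys, x_max, y_max):
    # dividing_ys: Y coordinates of the M horizontal dividing lines.
    # The screen spans X in [0, x_max] and Y in [0, y_max].
    ys = [0] + sorted(dividing_ys) + [y_max]
    # Each adjacent pair of lines (or a line and a screen edge) bounds one
    # area: ((x_lo, x_hi), (y_lo, y_hi)) per split-screen area, bottom to top.
    return [((0, x_max), (ys[i], ys[i + 1])) for i in range(len(ys) - 1)]

# Fig. 2: one dividing line Y = Y1 yields area 16 with ordinate range [0, Y1]
# and area 15 with ordinate range [Y1, Yz], both with abscissa range [0, Xz].
print(split_screen_ranges([1200], 1080, 2400))
```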
Optionally, in this embodiment of the present application, the target position information is located within a position range of the target split screen area.
Further optionally, in this embodiment of the application, when the target position information includes one piece of position information, and the target position information is coordinate information of the first input position in the preset coordinate system, the operation performing device may determine, according to the one piece of coordinate information, a split screen area where the one piece of coordinate information is located from position ranges of the N split screen areas, so as to determine the target split screen area.
Further optionally, in this embodiment of the application, when the target location information includes multiple pieces of location information, and the target location information is coordinate information of a first input location in a preset coordinate system, the operation performing device may determine, according to first coordinate information in the multiple pieces of coordinate information, a split screen area where the first coordinate information is located from position ranges of N split screen areas, and determine, according to second coordinate information in the multiple pieces of coordinate information, a split screen area where the second coordinate information is located from position ranges of N split screen areas, and so on, until a split screen area where the last coordinate information is located is determined, to determine the target split screen area.
It is understood that the operation executing device may traverse each of the plurality of location information to determine the split screen area where each of the location information is located; the split screen area where each coordinate information is located may be the same or different.
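The traversal described above can be sketched as follows (hypothetical names, assuming the range representation from the earlier sketch; a sketch, not the patent's implementation):

```python
def find_target_areas(positions, area_ranges):
    # positions: the target position information, one or several (x, y) points.
    # area_ranges: per-area ((x_lo, x_hi), (y_lo, y_hi)), e.g. from
    # split_screen_ranges above.
    targets = set()
    for (px, py) in positions:  # traverse every piece of position information
        for idx, ((x_lo, x_hi), (y_lo, y_hi)) in enumerate(area_ranges):
            if x_lo <= px <= x_hi and y_lo <= py <= y_hi:
                targets.add(idx)  # the area whose range contains this point
                break
    return targets  # the target split-screen area(s)
```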
In the embodiment of the application, the operation executing device can respectively determine the abscissa value range and the ordinate value range of each split screen region according to the position information of the M dividing lines, so that the operation executing device can determine the split screen region corresponding to the gesture input of the user (i.e., the split screen region required by the user) according to the target position information and the abscissa value range and the ordinate value range of each split screen region, and execute the operation corresponding to the gesture input on the split screen region required by the user.
In the embodiment of the application, the operation execution device can determine the position ranges of the N split screen areas according to the position information of the M dividing lines, so as to accurately determine the split screen areas corresponding to the gesture input of the user (i.e., the split screen areas required by the user) according to the target position information and the position ranges of the N split screen areas, and thus, the operation execution device can accurately execute the operation on the split screen areas required by the user, and therefore, the accuracy of the operation execution device executing the operation desired by the user can be improved.
Optionally, in this embodiment of the application, the target split screen area may include: a split screen area or multiple split screen areas.
Optionally, in this embodiment of the present application, the target operation may include any one of: screen capture operation, operation of displaying a preset application interface, operation of adjusting display parameters and the like.
For example, in a case where the target operation includes a screen capture operation, the operation performing means may capture a screen of the target split screen area of the display screen according to the position information of the target split screen area to perform the screen capture operation.
For example, in a case where the target operation includes an operation of displaying a preset application interface, the operation execution device may start the preset application and display the preset application interface in the target split screen area.
For example, in a case that the target operation includes an operation of adjusting a display parameter, the operation execution device may display a parameter adjustment control in the target split-screen area, so that the user may input the parameter adjustment control to adjust the display parameter (e.g., display brightness, display contrast, etc.) of the target split-screen area.
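The dispatch of the target operation onto the resolved split-screen area(s) might look like the following Python sketch; the operation keys and handler functions are hypothetical, not from the patent.

```python
# Hypothetical dispatch of the target operation onto the resolved area(s).
def capture_area(area_id):
    print(f"screen capture of split-screen area {area_id}")

def show_preset_app(area_id):
    print(f"preset application interface displayed in area {area_id}")

def show_param_control(area_id):
    print(f"display-parameter control shown in area {area_id}")

HANDLERS = {
    "screen_capture": capture_area,
    "show_preset_app": show_preset_app,
    "adjust_display": show_param_control,
}

def execute_target_operation(operation, target_areas):
    for area_id in target_areas:  # the target may be one or several areas
        HANDLERS[operation](area_id)

execute_target_operation("screen_capture", {18})
```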
In the embodiment of the application, when the operation execution device is in a split-screen display state, if the user wants to trigger the operation execution device to execute an operation on part of the split-screen areas of the display screen, the user can directly perform a gesture input in those split-screen areas. Provided the operation corresponding to the gesture input is not a global unified operation, the operation execution device acquires the position information corresponding to the gesture input and can thereby execute the operation on exactly the split-screen areas the user intends.
According to the operation execution method provided by the embodiment of the application, in the case that the display screen of the operation execution device includes N split-screen areas, the operation execution device can acquire target position information corresponding to a first input of a user, and determine the target split-screen area from the N split-screen areas according to the target position information and the position information of the N split-screen areas, so as to execute the operation corresponding to the first input on the target split-screen area. When the operation execution device is in the split-screen display state, it acquires the position information corresponding to the user's gesture input, determines from the N split-screen areas the split-screen area the user intends, and executes the operation corresponding to the gesture input on that area only, rather than on all display areas of the display screen. The accuracy with which the operation execution device performs the operation the user wants can therefore be improved, and with it the convenience with which the operation execution device executes operations.
Optionally, in this embodiment of the application, when the target operation includes an operation of displaying a preset application interface and the target split screen area includes one split screen area, if a preset application interface is already displayed in a certain split screen area of the N split screen areas, the operation execution device may replace, according to a gesture input of a user, an interface displayed in the target split screen area with the preset application interface displayed in the certain split screen area.
The following will exemplify how the operation execution device executes the target operation on the target split screen area by taking the target operation including an operation of displaying a preset application interface as an example.
Optionally, in an embodiment of the present application, the target operation includes: an operation of displaying a preset application interface; the N split-screen areas further include a first split-screen area; and, before the first input of the user is received, the preset application interface is displayed in the first split-screen area. Specifically, as shown in fig. 3 in conjunction with fig. 1, step 103 can be realized by step 103a described below.
Step 103a, the operation execution device determines a target split-screen area from the N split-screen areas according to the target position information and the position information of the N split-screen areas, displays the preset application interface in the target split-screen area, and displays a first interface in the first split-screen area.
In the embodiment of the present application, the first interface is: the interface that was displayed in the target split-screen area before the operation of displaying the preset application interface was executed.
Further optionally, in this embodiment of the application, the preset application interface may specifically be: the first application can be an application preset in the operation execution device by a user.
For example, as shown in fig. 4 (A), the mobile phone includes N split-screen areas (e.g., split-screen area 18 and split-screen area 19); an interface of application a is displayed in split-screen area 18, and the preset application interface (e.g., an interface of application b) is displayed in split-screen area 19. The user can then perform a first input on the mobile phone (e.g., a sliding input 20 whose input track is a circle, in split-screen area 18). As shown in fig. 4 (B), after the user performs the sliding input 20, the mobile phone may determine the target split-screen area (i.e., split-screen area 18) from split-screen areas 18 and 19, display the interface of application b in split-screen area 18, and display the first interface (i.e., the interface of application a) in the first split-screen area (i.e., split-screen area 19).
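Step 103a effectively swaps the preset interface into the target area; the following is a sketch under assumed data structures (the dictionary representation and names are hypothetical):

```python
def show_preset_interface(area_contents, target_area, preset="app_b"):
    # area_contents: dict mapping split-screen area id -> displayed interface.
    holder = next((a for a, ui in area_contents.items() if ui == preset), None)
    if holder is not None and holder != target_area:
        # the target area's previous interface (the "first interface")
        # moves to the area that held the preset interface
        area_contents[holder] = area_contents[target_area]
    area_contents[target_area] = preset
    return area_contents

# Fig. 4: {18: "app_a", 19: "app_b"} with target area 18
# becomes {18: "app_b", 19: "app_a"}.
print(show_preset_interface({18: "app_a", 19: "app_b"}, 18))
```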
In the embodiment of the application, in the process that a user uses an operation execution device, a situation that the user wants to trigger the operation execution device to display a preset application interface and the preset application interface is already displayed in a certain split screen area of the operation execution device may occur, so that the operation execution device may determine a target split screen area according to position information input by a gesture of the user and display the preset application interface in the target split screen area, so that the user can quickly view content in the preset application interface.
In the embodiment of the application, the operation execution device can display the preset application interface in the split-screen area the user intends, and display the first interface in the first split-screen area, i.e., the area in which the preset application interface was displayed before the operation was executed. The user therefore does not need to search for the preset application interface one by one among the N split-screen areas, which simplifies the user's operation and improves the flexibility of the operation execution device's display interface.
Optionally, in this embodiment of the application, when the target operation includes an operation of displaying a preset application interface and the target split-screen area includes a plurality of split-screen areas, the operation execution device may display the preset interface in the plurality of split-screen areas.
In the following, how the operation execution device executes the target operation on the target split screen area will be described by taking the example that the target operation includes an operation of displaying a preset application interface, and the target split screen area includes all split screen areas.
Optionally, in this embodiment of the present application, the target operation includes: displaying the operation of a preset application interface; the target split screen area comprises N split screen areas. Specifically, as shown in fig. 5 in conjunction with fig. 1, the step 103 can be specifically realized by the step 103b described below.
Step 103b, the operation execution device controls itself to exit the split-screen mode and displays the preset application interface on the display screen.
For example, as shown in fig. 6 (A), the mobile phone includes N split-screen areas (for example, split-screen area 20 and split-screen area 21), where an interface of application c is displayed in split-screen area 20 and an interface of application d is displayed in split-screen area 21. The user may perform a first input on the mobile phone (for example, a sliding input 22 whose input track is a circle spanning split-screen areas 20 and 21). As shown in fig. 6 (B), after the user performs the sliding input 22, the mobile phone may determine the target split-screen areas (i.e., split-screen areas 20 and 21), exit the split-screen mode, and display the preset application interface (e.g., an interface of application b) on the display screen.
In the embodiment of the application, in the process that the user uses the operation execution device, the situation that the user wants to trigger the operation execution device to display the preset application interface in all the split screen areas may occur, so that the operation execution device can exit the split screen mode and directly display the preset application interface in the display screen.
In the embodiment of the application, because the operation execution device can exit the split screen mode under the condition that the target split screen area comprises all the split screen areas, the preset application interface is displayed in all the display areas of the display screen, and the user does not need to perform multiple operations, the operation of the user can be simplified, and thus, the efficiency of the display interface of the operation execution device can be improved.
In the following, how the operation execution device obtains the target position information corresponding to the first input will be described by taking the first input as an example of a pressing input of the user on the rear cover of the operation execution device.
Optionally, in this embodiment of the present application, the operation execution device includes: at least two sensors disposed on a first side of the rear cover of the operation execution device, the first side being the side close to the body of the operation execution device; and the first input is: a user input on the rear cover. Specifically, as shown in fig. 7 in conjunction with fig. 1, step 102 may be implemented by the following steps 102a to 102c.
Step 102a, the operation execution device responds to the first input and acquires at least two sensing parameters respectively collected by the at least two sensors.
In the embodiment of the present application, each of the at least two sensors corresponds to a sensing parameter.
It is understood that at least two sensors may be disposed on the first side of the rear cover (i.e., the side close to the body of the operation performing device), so that when a user performs a pressing input to the rear cover, the at least two sensors can respectively detect the pressing input to obtain at least two sensing parameters.
Further optionally, in this embodiment of the application, each sensor of the at least two sensors may specifically be a pressure sensor.
Further optionally, in this embodiment of the present application, each of the at least two sensing parameters may include any one of the following: pressure value parameter, time parameter of detected pressure value.
Optionally, in an embodiment of the present application, the at least two sensors include: at least one first sensor and at least one second sensor; each of the at least one first sensor is disposed on a first edge line, and each of the at least one second sensor is disposed on a second edge line, the second edge line being adjacent to the first edge line.
Further alternatively, in this embodiment of the application, the first edge line may be specifically one edge line (for example, a lower edge line) corresponding to the X axis of the preset coordinate system, and the second edge line may be specifically another edge line (for example, a left edge line) corresponding to the Y axis of the preset coordinate system.
Further optionally, in this embodiment of the application, the at least one first sensor is uniformly disposed on the first edge line; the at least one second sensor is uniformly disposed on the second edge line.
Step 102b, the operation execution device determines a target sensing parameter from the at least two sensing parameters.
In the embodiment of the present application, the target sensing parameters are: a maximum sensing parameter, or a minimum sensing parameter, of the at least two sensing parameters.
Further optionally, in this embodiment of the application, when each sensing parameter is a pressure value parameter, the target sensing parameter is a maximum sensing parameter of the at least two sensing parameters; or, in the case that each sensing parameter is a time parameter for detecting a pressure value, the target sensing parameter is a minimum sensing parameter of the at least two sensing parameters.
In the embodiment of the present application, under the condition that each sensing parameter is a pressure value parameter, if a pressure value parameter acquired by one sensor is larger, the position of the one sensor may be considered to be closer to an input position of a first input (i.e., a pressing input) of a user, and therefore, the operation executing device may determine, from at least two sensing parameters, a maximum sensing parameter so as to determine a sensor closest to the input position of the pressing input.
In the embodiment of the present application, when each sensing parameter is a time parameter for detecting a pressure value, if a time parameter for detecting a pressure value by one sensor is smaller, it may be considered that the position of the one sensor is closer to an input position of a first input (i.e., a pressing input) of a user, and therefore, the operation execution device may determine, from at least two sensing parameters, a minimum sensing parameter to determine a sensor closest to the input position of the pressing input.
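A minimal Python sketch of step 102b under an assumed reading format (the tuple layout and names are hypothetical):

```python
def closest_sensor(readings, by="pressure"):
    # readings: list of (sensor_position, pressure_value, detection_time);
    # returns the reading of the sensor judged nearest to the press input.
    if by == "pressure":
        # a larger pressure value means the sensor is nearer to the press
        return max(readings, key=lambda r: r[1])
    # an earlier detection time means the sensor is nearer to the press
    return min(readings, key=lambda r: r[2])
```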
Step 102c, the operation execution device determines the target position information according to the position information of the sensor corresponding to the target sensing parameter, so as to acquire the target position information.
Optionally, in this embodiment of the present application, the target sensing parameters include: a first target sensing parameter and a second target sensing parameter; the first target sensing parameter is: the maximum sensing parameter or the minimum sensing parameter in the at least one sensing parameter respectively acquired by the at least one first sensor; the second target sensing parameter is: the at least one second sensor respectively collects the maximum sensing parameter or the minimum sensing parameter in the at least one sensing parameter.
Further optionally, in this embodiment of the application, the position information of the sensor corresponding to the target sensing parameter may specifically be: and coordinate information of the sensor corresponding to the target sensing parameter.
Further alternatively, in this embodiment of the application, the operation execution device may determine an abscissa value in the coordinate information of the sensor corresponding to the first target sensing parameter as an abscissa value of the target position information, and determine an ordinate value in the coordinate information of the sensor corresponding to the second target sensing parameter as an ordinate value of the target position information, so as to determine the target position information.
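Combining the two selected sensors into the target position information can be sketched as follows (hypothetical reading format; assumes pressure-value parameters):

```python
def target_position(first_edge_readings, second_edge_readings):
    # Each reading: ((x, y), pressure). The strongest reading on each edge
    # gives the first / second target sensing parameter.
    (fx, _fy), _ = max(first_edge_readings, key=lambda r: r[1])
    (_sx, sy), _ = max(second_edge_readings, key=lambda r: r[1])
    # abscissa from the first-edge sensor, ordinate from the second-edge sensor
    return (fx, sy)

# e.g. sensors along the bottom edge (y = 0) and the left edge (x = 0)
print(target_position([((100, 0), 0.2), ((500, 0), 0.9)],
                      [((0, 300), 0.8), ((0, 900), 0.1)]))  # (500, 300)
```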
In the embodiment of the application, at least one first sensor may be disposed on the first edge line and at least one second sensor on the second edge line. Sensing parameters can thus be collected along different edge lines of the rear cover by multiple sensors, and the first and second target sensing parameters can be determined from the parameters collected along each edge line. This improves the accuracy of determining the target position information, and thus the accuracy with which the operation execution device performs the operation the user wants.
In the embodiment of the application, while using the operation execution device, the user can perform a gesture input (i.e., a press input) on its rear cover. The operation execution device can collect, through the plurality of sensors disposed on the first side of the rear cover, the sensing parameters corresponding to the gesture input, and determine from them the sensor closest to the input position, so that it can determine the position information corresponding to the gesture input from the position information of that sensor. The operation execution device can thus determine the split-screen area the user intends from the position information corresponding to the gesture input, and execute the operation corresponding to the gesture input on that split-screen area.
In the embodiment of the application, because at least two sensors can be disposed on the first side of the rear cover, the operation execution device can acquire the sensing parameters they respectively collect, determine the sensor closest to the input position of the user's gesture input, determine from that sensor's position information the position information corresponding to the gesture input, and then determine the split-screen area the user intends and execute the corresponding operation on it. In other words, the user can perform the gesture input on the rear cover rather than only on the display screen, and the operation execution device still executes the operation on the intended split-screen area. This avoids the user's fingers accidentally operating on content in the display screen during the gesture input, and therefore improves the accuracy with which the operation execution device performs the operation the user wants.
How the operation execution device obtains the target position information corresponding to the first input will be described below by taking the first input as an example of a sliding input of the user on the display screen of the operation execution device.
Optionally, in this embodiment of the application, the operation execution device, in response to the first input, may detect an input track of the sliding input to obtain position information of at least one track point of the input track, so as to obtain the target position information.
It should be noted that, in the operation execution method provided in the embodiments of the present application, the execution subject may be the operation execution device in the foregoing embodiments, or a control module in the operation execution device for executing the operation execution method. In the embodiments of the present application, the operation execution apparatus provided herein is described by taking as an example an operation execution device that executes the operation execution method.
Fig. 8 shows a schematic diagram of a possible structure of the operation execution device involved in the embodiment of the present application. As shown in fig. 8, the operation performing device 60 may include: a receiving module 61, an obtaining module 62, a determining module 63 and an executing module 64.
The receiving module 61 is configured to receive a first input of a user in the case that the display screen of the operation execution apparatus includes N split-screen areas, N being a positive integer greater than 1. The obtaining module 62 is configured to obtain, in response to the first input received by the receiving module 61, target position information corresponding to the first input. The determining module 63 is configured to determine the target split-screen area from the N split-screen areas according to the target position information obtained by the obtaining module 62 and the position information of the N split-screen areas. The executing module 64 is configured to execute a target operation corresponding to the first input on the target split-screen area determined by the determining module 63.
In one possible implementation manner, the operation execution device includes: at least two sensors disposed on a first side of a back cover of the operation performing device; the first side is: close to one side of the body of the operation execution device; the first input is: user input to the rear cover. The obtaining module 62 is specifically configured to obtain at least two sensing parameters respectively acquired by at least two sensors; each sensor corresponds to a sensing parameter. The determining module 63 is further configured to determine a target sensing parameter from the at least two sensing parameters; the target sensing parameters are: a maximum sensing parameter, or a minimum sensing parameter, of the at least two sensing parameters; and determining the target position information according to the position information of the sensor corresponding to the target sensing parameter so as to acquire the target position information.
In one possible implementation, the at least two sensors include at least one first sensor and at least one second sensor, where each first sensor is arranged on a first edge line, each second sensor is arranged on a second edge line, and the second edge line is adjacent to the first edge line. The target sensing parameters include a first target sensing parameter and a second target sensing parameter. The first target sensing parameter is the maximum sensing parameter or the minimum sensing parameter among the at least one sensing parameter respectively acquired by the at least one first sensor, and the second target sensing parameter is the maximum sensing parameter or the minimum sensing parameter among the at least one sensing parameter respectively acquired by the at least one second sensor.
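One hedged reading of this arrangement is that the sensors on the first edge line resolve one coordinate of the input position and the sensors on the adjacent second edge line resolve the other; under that assumption, and with illustrative names only, a sketch might be:

```kotlin
// Each edge sensor carries its coordinate along its own edge line plus a reading.
data class EdgeSensor(val coordinate: Float, val reading: Float)

fun inputPositionFromEdges(
    firstEdge: List<EdgeSensor>,   // at least one sensor on the first edge line
    secondEdge: List<EdgeSensor>,  // at least one sensor on the adjacent second edge line
    takeMaximum: Boolean
): Pair<Float, Float> {
    require(firstEdge.isNotEmpty() && secondEdge.isNotEmpty())
    fun pick(sensors: List<EdgeSensor>): Float {
        // First (or second) target sensing parameter: max or min along that edge.
        val target = if (takeMaximum) sensors.maxByOrNull { it.reading }!!
                     else sensors.minByOrNull { it.reading }!!
        return target.coordinate
    }
    // One edge resolves one axis, the adjacent edge the other.
    return pick(firstEdge) to pick(secondEdge)
}
```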
In a possible implementation manner, the determining module 63 is specifically configured to determine the position ranges of the N split screen areas according to the position information of the M dividing lines, and to determine the target split screen area according to the target position information and the position ranges of the N split screen areas, where the target position information lies within the position range of the target split screen area.
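As a sketch of this range computation, assuming for illustration horizontal dividing lines so that each line is characterized by a single y coordinate (a vertical split would be symmetric in x; function names are hypothetical):

```kotlin
// M dividing lines at the given y coordinates split a display of height
// screenHeight into N = M + 1 position ranges.
fun splitScreenRanges(dividerYs: List<Float>, screenHeight: Float): List<ClosedFloatingPointRange<Float>> {
    val bounds = listOf(0f) + dividerYs.sorted() + listOf(screenHeight)
    return bounds.zipWithNext { top, bottom -> top..bottom }
}

// The target split screen area is the one whose range contains the target position.
fun targetAreaIndex(targetY: Float, ranges: List<ClosedFloatingPointRange<Float>>): Int =
    ranges.indexOfFirst { targetY in it }
```

For example, with two dividing lines at y = 600 and y = 1200 on a display 1800 pixels high, splitScreenRanges(listOf(600f, 1200f), 1800f) yields three ranges, and targetAreaIndex(900f, ranges) returns 1, the middle split screen area.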
In one possible implementation, the target operation includes an operation of displaying a preset application interface; the N split screen areas further include a first split screen area, and the preset application interface is displayed in the first split screen area before the first input of the user is received. The executing module 64 is specifically configured to display the preset application interface in the target split screen area and display a first interface in the first split screen area, where the first interface is the interface that was displayed in the target split screen area before the operation of displaying the preset application interface was executed.
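A sketch of this swap, modelling the mapping from split screen area to displayed interface as a mutable map (a purely illustrative representation, not the patent's data structure):

```kotlin
// Show the preset application interface in the target area and move the
// interface previously shown there into the first area.
fun swapPresetInterface(
    displayed: MutableMap<Int, String>, // area index -> interface name (illustrative)
    firstArea: Int,                     // currently shows the preset application interface
    targetArea: Int                     // area selected by the first input
) {
    val firstInterface = displayed.getValue(targetArea)    // what the target area showed before
    displayed[targetArea] = displayed.getValue(firstArea)  // preset application interface
    displayed[firstArea] = firstInterface                  // first interface
}
```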
In one possible implementation, the target operation includes an operation of displaying a preset application interface, and the target split screen area includes all of the N split screen areas. The executing module 64 is specifically configured to control the operation execution device to exit the split screen mode and display the preset application interface in the display screen.
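A sketch of this branch, with hypothetical names, in which covering all N areas causes the device to leave split-screen mode:

```kotlin
// If the target split screen area set covers all N areas, exit split-screen
// mode and display the preset application interface across the whole screen.
fun executeDisplayOperation(targetAreas: Set<Int>, allAreas: Set<Int>) {
    if (targetAreas == allAreas) {
        println("exit split screen mode; display preset application interface full screen")
    } else {
        // Otherwise the interface is shown only in the selected areas.
        targetAreas.forEach { println("display preset application interface in split screen area $it") }
    }
}
```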
According to the operation execution device provided by the embodiment of the application, when the operation execution device is in the split-screen display state, it can obtain, from a gesture input of a user, the position information corresponding to that gesture input; determine, according to that position information and the position information of the N split screen areas, the split screen area required by the user from the N split screen areas; and execute the operation corresponding to the gesture input on the split screen area required by the user, instead of executing it on all display areas of the display screen.
The operation execution device in the embodiment of the present application may be a device, and may also be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiment of the present application is not particularly limited.
The operation execution device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The operation execution device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 7, and is not described here again to avoid repetition.
Optionally, as shown in fig. 9, an electronic device 70 is further provided in this embodiment of the present application, and includes a processor 72, a memory 71, and a program or an instruction stored in the memory 71 and executable on the processor 72. The program or the instruction, when executed by the processor 72, implements each process of the operation execution method embodiment and can achieve the same technical effect; details are not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, and details are omitted here.
The user input unit 107 is configured to receive a first input of a user in a case that a display screen of the electronic device includes N split screen areas, where N is a positive integer greater than 1.
The processor 110 is configured to, in response to a first input, obtain target location information corresponding to the first input; and determining a target split screen area from the N split screen areas according to the target position information and the position information of the N split screen areas, and executing a target operation corresponding to the first input on the target split screen area.
According to the electronic device provided by the embodiment of the application, when the electronic device is in the split-screen display state, it can obtain, from a gesture input of a user, the position information corresponding to that gesture input; determine, according to that position information and the position information of the N split screen areas, the split screen area required by the user from the N split screen areas; and execute the operation corresponding to the gesture input on the split screen area required by the user, instead of executing it on all display areas of the display screen. The accuracy with which the electronic device executes the operation desired by the user can therefore be improved, and the convenience of operating the electronic device can be improved.
Optionally, in an embodiment of the present application, the electronic device includes: at least two sensors disposed on a first side of a back cover of an electronic device; the first side is: proximate to a side of the body of the electronic device; the first input is: user input to the rear cover.
The processor 110 is specifically configured to obtain at least two sensing parameters respectively acquired by the at least two sensors, where each sensor corresponds to one sensing parameter; determine a target sensing parameter from the at least two sensing parameters, where the target sensing parameter is the maximum sensing parameter or the minimum sensing parameter of the at least two sensing parameters; and determine the target position information according to the position information of the sensor corresponding to the target sensing parameter, so as to obtain the target position information.
In the embodiment of the application, since at least two sensors are arranged on the first side of the rear cover of the electronic device, the electronic device can acquire the at least two sensing parameters respectively collected by the at least two sensors, determine the sensor closest to the input position of the user's gesture input, determine the position information corresponding to the gesture input according to the position information of that sensor, determine the split screen area required by the user according to that position information, and execute the operation corresponding to the gesture input on that split screen area. In other words, the user can perform the gesture input on the rear cover instead of only on the display screen. The situation in which the user's fingers trigger a misoperation on content in the display screen during the gesture input can therefore be avoided, and the accuracy with which the electronic device executes the operation desired by the user can be improved.
Optionally, in this embodiment of the present application, the processor 110 is further configured to determine the position ranges of the N split screen areas according to the position information of the M dividing lines, and to determine the target split screen area according to the target position information and the position ranges of the N split screen areas.
The target position information lies within the position range of the target split screen area.
In the embodiment of the application, the electronic device can determine the position ranges of the N split screen areas according to the position information of the M dividing lines, so that the split screen area corresponding to the user's gesture input (that is, the split screen area required by the user) can be accurately determined according to the target position information and the position ranges of the N split screen areas. The electronic device can thus accurately execute the operation on the split screen area required by the user, and the accuracy with which it executes the operation desired by the user can be improved.
Optionally, in this embodiment of the present application, the target operation includes an operation of displaying a preset application interface; the N split screen areas further include a first split screen area; and before the first input of the user is received, the preset application interface is displayed in the first split screen area.
The display unit 106 is configured to display the preset application interface in the target split screen area, and display a first interface in the first split screen area. The first interface is the interface that was displayed in the target split screen area before the operation of displaying the preset application interface was executed.
In the embodiment of the application, the electronic device can display the preset application interface in the split screen area required by the user, and display, in the first split screen area (that is, the area in which the preset application interface was displayed before the operation was executed), the first interface previously shown in the target split screen area. The user therefore does not need to look for the preset application interface one by one among the N split screen areas, so the user's operations can be simplified and the flexibility of the display interface of the electronic device can be improved.
Optionally, in this embodiment of the present application, the target operation includes an operation of displaying a preset application interface, and the target split screen area includes all of the N split screen areas.
The processor 110 is further configured to control the electronic device to exit the split-screen mode.
The display unit 106 is further configured to display a preset application interface in the display screen.
In the embodiment of the application, since the electronic device can exit the split screen mode in the case that the target split screen area includes all of the split screen areas, the preset application interface is displayed in all display areas of the display screen without requiring multiple operations by the user. The user's operations can thus be simplified, and the efficiency of the display interface of the electronic device can be improved.
It should be understood that, in the embodiment of the present application, the input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the graphics processing unit 1041 processes image data of a still picture or a video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction implements the processes of the foregoing operation execution method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above operation execution method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, or a system-on-a-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in a process, method, article, or apparatus comprising the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions recited, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An operation execution method, characterized in that the method comprises:
receiving a first input of a user in a case that a display screen of the electronic device comprises N split screen areas, wherein N is a positive integer greater than 1;
in response to the first input, acquiring target position information corresponding to the first input;
determining a target split screen area from the N split screen areas according to the target position information and the position information of the N split screen areas, and executing a target operation corresponding to the first input on the target split screen area;
wherein the position information of the N split screen areas is position information of dividing lines of the N split screen areas, the target split screen area comprises at least one split screen area, and the target operation comprises at least one of the following: an operation of displaying a preset application interface, and an operation of adjusting a display parameter;
wherein the N split screen areas further comprise a first split screen area, and before the first input of the user is received, the preset application interface is displayed in the first split screen area;
wherein the executing a target operation corresponding to the first input on the target split screen area comprises:
displaying the preset application interface in the target split screen area, and displaying a first interface in the first split screen area;
wherein the first interface is: an interface displayed in the target split screen area before the operation of displaying the preset application interface is executed.
2. The method of claim 1, wherein the electronic device comprises: at least two sensors disposed on a first side of a back cover of the electronic device; the first side is: a side close to a body of the electronic device; and the first input is: a user input to the back cover;
the obtaining of the target position information corresponding to the first input includes:
acquiring at least two sensing parameters respectively acquired by the at least two sensors; each sensor corresponds to a sensing parameter;
determining a target sensing parameter from the at least two sensing parameters; the target sensing parameter is: a maximum sensing parameter, or a minimum sensing parameter, of the at least two sensing parameters;
and determining the target position information according to the position information of the sensor corresponding to the target sensing parameter so as to acquire the target position information.
3. The method of claim 2, wherein the at least two sensors comprise: at least one first sensor and at least one second sensor; each first sensor is arranged on a first edge line, each second sensor is arranged on a second edge line, and the second edge line is adjacent to the first edge line;
the target sensing parameters include: a first target sensing parameter and a second target sensing parameter; the first target sensing parameter is: a maximum sensing parameter or a minimum sensing parameter among the at least one sensing parameter respectively acquired by the at least one first sensor; and the second target sensing parameter is: a maximum sensing parameter or a minimum sensing parameter among the at least one sensing parameter respectively acquired by the at least one second sensor.
4. The method of claim 1, wherein the target operation comprises: an operation of displaying a preset application interface; and the target split screen area comprises the N split screen areas;
wherein the executing the target operation corresponding to the first input on the target split screen area comprises:
controlling the electronic device to exit a split screen mode, and displaying the preset application interface in the display screen.
5. An operation execution apparatus, characterized in that the operation execution apparatus comprises: the device comprises a receiving module, an obtaining module, a determining module and an executing module;
the receiving module is configured to receive a first input of a user in a case that a display screen of the operation execution apparatus comprises N split screen areas, wherein N is a positive integer greater than 1;
the obtaining module is configured to obtain, in response to the first input received by the receiving module, target location information corresponding to the first input;
the determining module is configured to determine a target split-screen area from the N split-screen areas according to the target position information acquired by the acquiring module and the position information of the N split-screen areas;
the execution module is configured to execute a target operation corresponding to the first input on the target split-screen area determined by the determination module;
wherein the position information of the N split screen areas is position information of dividing lines of the N split screen areas, the target split screen area comprises at least one split screen area, and the target operation comprises at least one of the following: an operation of displaying a preset application interface, and an operation of adjusting a display parameter;
wherein the N split screen areas further comprise a first split screen area, and before the first input of the user is received, the preset application interface is displayed in the first split screen area;
the execution module is specifically configured to display the preset application interface in the target split-screen area, and display a first interface in the first split-screen area;
wherein the first interface is: an interface displayed in the target split screen area before the operation of displaying the preset application interface is executed.
6. The operation execution apparatus according to claim 5, wherein the operation execution apparatus comprises: at least two sensors disposed on a first side of a back cover of the operation execution apparatus; the first side is: a side close to a body of the operation execution apparatus; and the first input is: a user input to the back cover;
the acquisition module is specifically used for acquiring at least two sensing parameters respectively acquired by the at least two sensors; each sensor corresponds to a sensing parameter;
the determining module is further configured to determine a target sensing parameter from the at least two sensing parameters; the target sensing parameters are as follows: a maximum sensing parameter, or a minimum sensing parameter, of the at least two sensing parameters; and determining the target position information according to the position information of the sensor corresponding to the target sensing parameter so as to acquire the target position information.
7. The operation execution apparatus according to claim 6, wherein the at least two sensors include: at least one first sensor and at least one second sensor; each first sensor is arranged on a first edge line, each second sensor is arranged on a second edge line, and the second edge line is adjacent to the first edge line;
the target sensing parameters include: a first target sensing parameter and a second target sensing parameter; the first target sensing parameter is: a maximum sensing parameter or a minimum sensing parameter among the at least one sensing parameter respectively acquired by the at least one first sensor; and the second target sensing parameter is: a maximum sensing parameter or a minimum sensing parameter among the at least one sensing parameter respectively acquired by the at least one second sensor.
8. The operation execution apparatus according to claim 5, wherein the target operation includes: an operation of displaying a preset application interface; and the target split screen area comprises the N split screen areas;
the execution module is specifically configured to control the operation execution device to exit a split-screen mode, and display the preset application interface in the display screen.
9. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the operation execution method of any one of claims 1 to 4.
10. A readable storage medium, on which a program or instructions are stored, which when executed by a processor, implement the steps of the operation execution method of any one of claims 1 to 4.
CN202110106657.6A 2021-01-26 2021-01-26 Operation execution method and device and electronic equipment Active CN112783406B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110106657.6A CN112783406B (en) 2021-01-26 2021-01-26 Operation execution method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112783406A CN112783406A (en) 2021-05-11
CN112783406B true CN112783406B (en) 2023-02-03

Family

ID=75757459


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113778279A (en) * 2021-08-31 2021-12-10 维沃移动通信有限公司 Screenshot method and device and electronic equipment
CN113760169A (en) * 2021-09-08 2021-12-07 联想(北京)有限公司 Control method and control device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201203075A (en) * 2010-07-06 2012-01-16 Compal Electronics Inc Method for opening and arranging window
US9026935B1 (en) * 2010-05-28 2015-05-05 Google Inc. Application user interface with an interactive overlay
CN110308860A (en) * 2019-07-11 2019-10-08 Oppo广东移动通信有限公司 Screenshotss method and relevant apparatus

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110087963A1 (en) * 2009-10-09 2011-04-14 At&T Mobility Ii Llc User Interface Control with Edge Finger and Motion Sensing
CN105549856A (en) * 2015-07-25 2016-05-04 宇龙计算机通信科技(深圳)有限公司 Mobile terminal based instruction input method and apparatus
CN105183363B (en) * 2015-09-29 2019-01-01 努比亚技术有限公司 A kind of terminal and the touch control method based on pressure sensor
CN105389111B (en) * 2015-10-28 2019-05-17 维沃移动通信有限公司 A kind of operating method and electronic equipment of split screen display available
CN106569715A (en) * 2016-10-31 2017-04-19 努比亚技术有限公司 Terminal split screen control device and method and terminal
CN106886382A (en) * 2017-01-23 2017-06-23 努比亚技术有限公司 A kind of method and terminal for realizing split screen treatment
CN108632462A (en) * 2018-04-19 2018-10-09 Oppo广东移动通信有限公司 Processing method, device, storage medium and the electronic equipment of split screen display available
CN109634508B (en) * 2018-12-12 2020-11-06 维沃移动通信有限公司 User information loading method and device
CN110737386A (en) * 2019-09-06 2020-01-31 华为技术有限公司 screen capturing method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant