CN111258480A - Operation execution method and device based on display area and storage medium


Info

Publication number
CN111258480A
CN111258480A (Application CN201811456594.1A)
Authority
CN
China
Prior art keywords
display area, target, determining, user, trigger event
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811456594.1A
Other languages
Chinese (zh)
Inventor
李松
杜慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201811456594.1A
Publication of CN111258480A
Legal status: Pending

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
              • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
                • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
                  • G06F3/04842 Selection of displayed objects or displayed text elements
                  • G06F3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
                • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
                  • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device using a touch-screen or digitiser, e.g. input of commands through traced gestures
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
              • G06V40/12 Fingerprints or palmprints
                • G06V40/13 Sensors therefor
              • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
                • G06V40/161 Detection; Localisation; Normalisation
                  • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements

Abstract

The disclosure relates to an operation execution method and device based on a display area, and to a storage medium, and belongs to the field of electronic technologies. The method is applied to a foldable device whose display screen includes a first display area and a second display area separated by a rotating shaft, and includes the following steps: when the display screen is in a folded state, acquiring a target trigger event; determining a target display area in the first display area and the second display area according to the target trigger event, wherein the target display area is the display area the user is expected to use; and performing a specific control operation based on the target display area. When the display screen is in the folded state, the foldable device provided by the embodiments of the disclosure can automatically determine, from the acquired trigger event, which display area the user currently intends to use and automatically perform the related control operation on that display area, which noticeably improves the user experience, enriches the functions of the foldable device, and achieves a better effect.

Description

Operation execution method and device based on display area and storage medium
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to an operation execution method and apparatus based on a display area, and a storage medium.
Background
With the rapid development of electronic technology and the rise of diversified devices, foldable devices have become a development trend. A foldable device is equipped with both a foldable housing and a foldable flexible display screen: it occupies less space when folded, and when unfolded it offers the user a larger usable screen area and a better viewing experience.
In the related art, a foldable device is configured with two housings, a flexible display screen, and a rotating shaft, where both housings are movably connected to the rotating shaft and can rotate around it. When the two housings are rotated to opposite sides of the rotating shaft, the display screen is in an unfolded state and the complete screen can be presented to the user at once. When the two housings are rotated onto the same side of the rotating shaft, the display screen is in a folded state, which realizes the folding effect of the device.
In actual use, when the display screen is in the folded state, the user usually needs only the display area on one side of the screen. How to determine which display area the user currently needs, so that the related control operation can be performed, has therefore become an urgent problem for those skilled in the art.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a method, an apparatus, and a storage medium for performing an operation based on a display area.
According to a first aspect of embodiments of the present disclosure, there is provided a display region-based operation execution method applied to a foldable device whose display screen includes a first display region and a second display region that are demarcated by a rotation axis, the method including:
when the display screen is in a folded state, acquiring a target trigger event;
determining a target display area in the first display area and the second display area according to the target trigger event, wherein the target display area is a display area expected to be used by a user;
performing a specific control operation based on the target display area.
In a possible implementation manner, the obtaining a target trigger event includes:
respectively acquiring distance values between the first display area and the user and between the second display area and the user;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
according to the obtained distance value, determining a display area close to the user in the first display area and the second display area, and determining the display area as the target display area.
In a possible implementation manner, the obtaining a target trigger event includes:
acquiring attitude information of the foldable equipment;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
and according to the posture information, determining a display area facing the user in the first display area and the second display area, and determining the display area as the target display area.
In a possible implementation manner, the obtaining a target trigger event includes:
receiving the region selection information input by the user;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
and determining a display area indicated by the area selection information in the first display area and the second display area, and determining the display area as the target display area.
In a possible implementation manner, the obtaining a target trigger event includes:
acquiring an illuminance value on the side of the first display area and an illuminance value on the side of the second display area;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
according to the acquired illuminance values, determining a display area with the higher illuminance in the first display area and the second display area, and determining the display area as the target display area.
In a possible implementation manner, the obtaining a target trigger event includes:
acquiring holding state information of the foldable equipment held by the user;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
and determining the target display area in the first display area and the second display area according to the holding state information.
In a possible implementation manner, the determining the target display area in the first display area and the second display area according to the holding state information includes:
determining a display area pressed by a thumb in the first display area and the second display area, and determining the display area as the target display area; or,
determining a display area with the larger pressing pressure in the first display area and the second display area, and determining the display area as the target display area; or,
determining, in the first display area and the second display area, a display area in which a thumb is pressed at a specified position and the pressing pressure is greater than a threshold value, and determining the display area as the target display area; or, determining, in the first display area and the second display area, the display area with the thumb pressed on the side close to the user and the remaining fingers pressed on the side far from the user, and determining the display area as the target display area.
In a possible implementation manner, the obtaining a target trigger event includes:
acquiring the number of human faces positioned on one side of the first display area and the number of human faces positioned on one side of the second display area;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
when the number of human faces on one side is zero and the number of human faces on the other side is not zero, determining the display area on the side where the number of human faces is not zero as the target display area; or,
when the numbers of human faces on both sides are not zero and the number of human faces on one side is smaller than that on the other side, determining the display area on the side with the smaller number of human faces as the target display area.
In one possible implementation, the performing a specific control operation based on the target display area includes:
when the display area on the side where the number of human faces is not zero is determined as the target display area, adjusting the current shooting parameters to front shooting parameters;
and when the display area on the side with the smaller number of human faces is determined as the target display area, adjusting the current shooting parameters to rear shooting parameters.
In one possible implementation, the performing a specific control operation based on the target display area includes:
lighting up the target display area; or,
lighting up the target display area, and performing a lighting cancellation operation on the other display area other than the target display area.
According to a second aspect of the embodiments of the present disclosure, there is provided an operation execution apparatus based on a display region, the apparatus being applied to a foldable device whose display screen includes a first display region and a second display region that are demarcated by a rotation axis, the apparatus including:
an acquisition module configured to acquire a target trigger event when the display screen is in a folded state;
a determining module configured to determine a target display area in the first display area and the second display area according to the target trigger event, wherein the target display area is a display area desired to be used by a user;
an execution module configured to execute a specific control operation based on the target display area.
In a possible implementation manner, the obtaining module is further configured to obtain distance values between the first display area and the user and between the second display area and the user;
the determining module is further configured to determine, according to the obtained distance value, a display area close to the user in the first display area and the second display area, and determine the display area as the target display area.
In a possible implementation manner, the obtaining module is further configured to obtain posture information of the foldable device;
the determination module is further configured to determine a display area facing the user in the first display area and the second display area according to the posture information, and determine the display area as the target display area.
In a possible implementation manner, the obtaining module is further configured to receive the area selection information input by the user;
the determination module is further configured to determine, in the first display area and the second display area, a display area indicated by the area selection information, and determine the display area as the target display area.
In a possible implementation manner, the obtaining module is further configured to obtain an illuminance value on the side of the first display area and an illuminance value on the side of the second display area;
the determining module is further configured to determine, according to the acquired illuminance values, a display area with the higher illuminance in the first display area and the second display area, and determine the display area as the target display area.
In a possible implementation manner, the obtaining module is further configured to obtain holding state information of the user holding the foldable device;
the determining module is further configured to determine the target display area in the first display area and the second display area according to the holding state information.
In a possible implementation manner, the determining module is further configured to determine a display area pressed by a thumb in the first display area and the second display area, and determine the display area as the target display area; or determine a display area with the larger pressing pressure in the first display area and the second display area, and determine the display area as the target display area; or determine, in the first display area and the second display area, a display area in which a thumb is pressed at a specified position and the pressing pressure is greater than a threshold value, and determine the display area as the target display area; or determine, in the first display area and the second display area, the display area with the thumb pressed on the side close to the user and the remaining fingers pressed on the side far from the user, and determine the display area as the target display area.
In a possible implementation manner, the obtaining module is further configured to obtain the number of faces located on one side of the first display area, and obtain the number of faces located on one side of the second display area;
the determining module is further configured to determine the display area on the side where the number of faces is not zero as the target display area when the number of faces on one side is zero and the number of faces on the other side is not zero; or determine the display area on the side with the smaller number of human faces as the target display area when the numbers of human faces on both sides are not zero and the number of human faces on one side is smaller than that on the other side.
In a possible implementation manner, the execution module is further configured to adjust the current shooting parameters to front shooting parameters when the display area on the side where the number of faces is not zero is determined as the target display area; and adjust the current shooting parameters to rear shooting parameters when the display area on the side with the smaller number of faces is determined as the target display area.
In one possible implementation, the execution module is further configured to illuminate the target display area; or, the target display area is lighted, and a lighting cancellation operation is performed on another display area except the target display area.
According to a third aspect of the embodiments of the present disclosure, there is provided an operation execution apparatus based on a display region, the apparatus being applied to a foldable device whose display screen includes a first display region and a second display region that are demarcated by a rotation axis, the apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: when the display screen is in a folded state, acquiring a target trigger event; determining a target display area in the first display area and the second display area according to the target trigger event, wherein the target display area is a display area expected to be used by a user; performing a specific control operation based on the target display area.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium having stored thereon computer program instructions, which when executed by a processor, implement the display area based operation execution method of the first aspect described above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
when the display screen is in a folded state, the foldable device provided by the embodiments of the disclosure can automatically determine, from the acquired trigger event, which display area the user currently intends to use and automatically perform the related control operation on that display area, which noticeably improves the user experience, enriches the functions of the foldable device, and achieves a better effect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic diagram illustrating the structure of a foldable device according to an exemplary embodiment.
FIG. 2 is a schematic diagram illustrating the construction of another foldable device according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a display area-based operation control method according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating another operation control method based on a display area according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating a display area-based operation control apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating another display area based operation control apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
Before explaining the embodiments of the present disclosure in detail, the structure of the foldable device will be explained.
Fig. 1 is a schematic structural diagram illustrating a foldable device according to an exemplary embodiment, where the foldable device may be a mobile phone, a tablet computer, or the like, and referring to fig. 1, the foldable device includes: a housing 11 and a display screen 12. One side of the display screen 12 is attached to the housing 11, and the other side is used for displaying.
The housing 11 is a foldable housing, and the display screen 12 is a flexible display screen, which is made of a flexible material, such as plastic, metal foil, or other materials, and is a bendable and deformable display component, and the housing 11 and the display screen 12 can be unfolded or folded, and the housing 11 and the display screen 12 can form a foldable device. For example, the middle of the housing 11 is provided with a rotating shaft, and the housings on both sides of the middle can rotate around the rotating shaft, so as to control the folding or unfolding of the housing 11 and drive the display screen 12 to fold or unfold.
Fig. 2 shows a schematic structural view of another foldable device in an unfolded state. Referring to fig. 2, the display screen 12 includes a first display region and a second display region that are demarcated by a rotation axis. Illustratively, the first display area and the second display area are of the same size, one display area being above and the other display area being below when the display screen 12 is in the folded state. The camera, the flash and other functional modules may be disposed on the second display area according to a folding direction, which is not specifically limited in the embodiments of the present disclosure.
Fig. 3 is a flowchart illustrating a method for performing a display area based operation, as shown in fig. 3, for use in a foldable device, according to an exemplary embodiment, including the following steps.
In step 301, a target trigger event is acquired when the display screen is in a folded state.
In step 302, according to a target trigger event, determining a target display area in a first display area and a second display area, wherein the target display area is a display area expected to be used by a user;
in step 303, a specific control operation is performed based on the target display area.
According to the method provided by the embodiments of the disclosure, when the display screen is in the folded state, the foldable device can automatically determine, from the acquired trigger event, which display area the user currently intends to use and automatically perform the related control operation based on that determination, which noticeably improves the user experience, enriches the functions of the foldable device, and achieves a better effect.
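As an illustration only, the three steps above can be organized as in the following Kotlin sketch. Every name here (DisplayArea, TargetTriggerEvent, AreaResolver, FoldableController) is hypothetical and chosen for this sketch; the disclosure does not prescribe any particular API.

```kotlin
// Hypothetical sketch of the three-step flow of Fig. 3; the names are illustrative
// and do not come from the disclosure or any real SDK.
enum class DisplayArea { FIRST, SECOND }

// A target trigger event is whatever signal the device samples while folded
// (distance values, posture information, a user selection, illuminance, grip state, face counts).
interface TargetTriggerEvent

fun interface AreaResolver {
    // Step 302: map the trigger event to the display area the user is expected to use.
    fun resolve(event: TargetTriggerEvent): DisplayArea
}

class FoldableController(
    private val resolver: AreaResolver,
    private val execute: (DisplayArea) -> Unit  // step 303: the specific control operation
) {
    // Step 301 happens outside this class: a trigger event is sampled while the screen is folded.
    fun onTriggerWhileFolded(event: TargetTriggerEvent) {
        val target = resolver.resolve(event)  // step 302: decide which area the user wants
        execute(target)                       // step 303: perform the specific control operation
    }
}
```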
In a possible implementation manner, the obtaining a target trigger event includes:
respectively acquiring distance values between the first display area and the user and between the second display area and the user;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
according to the obtained distance value, determining a display area close to the user in the first display area and the second display area, and determining the display area as the target display area.
In a possible implementation manner, the obtaining a target trigger event includes:
acquiring attitude information of the foldable equipment;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
and according to the posture information, determining a display area facing the user in the first display area and the second display area, and determining the display area as the target display area.
In a possible implementation manner, the obtaining a target trigger event includes:
receiving the region selection information input by the user;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
and determining a display area indicated by the area selection information in the first display area and the second display area, and determining the display area as the target display area.
In a possible implementation manner, the obtaining a target trigger event includes:
acquiring an illuminance value on the side of the first display area and an illuminance value on the side of the second display area;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
according to the acquired illuminance values, determining a display area with the higher illuminance in the first display area and the second display area, and determining the display area as the target display area.
In a possible implementation manner, the obtaining a target trigger event includes:
acquiring holding state information of the foldable equipment held by a user;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
and determining the target display area in the first display area and the second display area according to the holding state information.
In a possible implementation manner, the determining the target display area in the first display area and the second display area according to the holding state information includes:
determining a display area pressed by a thumb in the first display area and the second display area, and determining the display area as the target display area; or,
determining a display area with the larger pressing pressure in the first display area and the second display area, and determining the display area as the target display area; or,
determining, in the first display area and the second display area, a display area in which a thumb is pressed at a specified position and the pressing pressure is greater than a threshold value, and determining the display area as the target display area; or, determining, in the first display area and the second display area, the display area with the thumb pressed on the side close to the user and the remaining fingers pressed on the side far from the user, and determining the display area as the target display area.
In a possible implementation manner, the obtaining a target trigger event includes:
acquiring the number of human faces positioned on one side of the first display area and the number of human faces positioned on one side of the second display area;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
when the number of human faces on one side is zero and the number of human faces on the other side is not zero, determining the display area on the side where the number of human faces is not zero as the target display area; or,
when the numbers of human faces on both sides are not zero and the number of human faces on one side is smaller than that on the other side, determining the display area on the side with the smaller number of human faces as the target display area.
In one possible implementation, the performing a specific control operation based on the target display area includes:
when the display area on the side where the number of human faces is not zero is determined as the target display area, shooting based on the front shooting parameters;
and when the display area on the side with the smaller number of human faces is determined as the target display area, shooting based on the rear shooting parameters.
In one possible implementation, the performing a specific control operation based on the target display area includes:
lighting up the target display area; or,
the target display area is lighted, and a lighting cancellation operation is performed on another display area other than the target display area.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 4 is a flowchart illustrating a method for performing a display area based operation, as shown in fig. 4, for use in a foldable device, according to an exemplary embodiment.
In step 401, when the display screen is in a folded state, the foldable device acquires a target trigger event, and determines a target display area in the first display area and the second display area according to the target trigger event.
When the foldable device is in the folded state, the user currently has no need for a large screen and usually needs only the display area on one side. Based on this, the embodiments of the present disclosure acquire a target trigger event, determine from that event which display area the user currently desires to use, and perform a related control operation according to the determination result. The display area that the user desires to use is also referred to herein as the target display area.
In the embodiment of the present disclosure, the types of the target trigger event include a plurality of types, and accordingly, the determination of the target display area in the first display area and the second display area includes, but is not limited to, the following forms:
first, determining a target display area based on distance information
For this manner, one distance sensor may be disposed inside each of the housings below the first display area and the second display area. Taking the housing 11 shown in fig. 1 as including a first housing and a second housing, where the first housing is located below the first display area and the second housing is located below the second display area, one distance sensor is disposed inside the first housing and another distance sensor inside the second housing.
When the display screen is in the folded state, for this way, the obtaining of the target trigger event may be: the foldable device respectively obtains the distance values between the first display area and the user and the distance values between the second display area and the user.
Accordingly, determining the target display area in the first display area and the second display area according to the target trigger event may be: and according to the acquired distance value, determining a display area close to the user in the first display area and the second display area, and determining the display area as a target display area.
That is, the foldable device obtains a first distance value between the first display region and the user and a second distance value between the second display region and the user, then takes the smaller of the two, and thereby determines the display region closer to the user as the one the user currently desires to use, as sketched below.
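Assuming each housing's distance sensor yields one reading in a common unit, the comparison above reduces to taking the smaller value. A minimal Kotlin sketch, reusing the hypothetical DisplayArea enum from the earlier sketch:

```kotlin
// Distance rule: the display area closer to the user is the target.
// The two readings are assumed to come from one distance sensor under each housing,
// expressed in the same unit; sensor access itself is outside this sketch.
fun targetByDistance(firstAreaDistance: Float, secondAreaDistance: Float): DisplayArea =
    if (firstAreaDistance <= secondAreaDistance) DisplayArea.FIRST else DisplayArea.SECOND
```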
Second, determining a target display area based on the pose information
In the embodiment of the present disclosure, the attitude information of the foldable device may be measured using a gyroscope, an accelerometer, or the like built in the foldable device, which is not particularly limited in the embodiment of the present disclosure.
When the display screen is in the folded state, for this way, the obtaining of the target trigger event may be: acquiring attitude information of the foldable device. The attitude information may include acceleration values or position coordinate values of the foldable device along its horizontal, longitudinal, and vertical axes, which is not particularly limited in the embodiments of the present disclosure.
Accordingly, determining the target display area in the first display area and the second display area according to the target trigger event may be: and according to the acquired gesture information, determining a display area facing the user in the first display area and the second display area, and determining the display area as a target display area.
For example, assuming that the foldable device is placed on a desktop in the folded state, the one of the first display region and the second display region that faces or is closer to the user is determined from the acquired posture information and taken as the display region the user desires to use, as sketched below.
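A possible reading of this rule, for a device lying folded on a desk, is to compare how strongly each panel's outward normal points toward the user (approximated as straight up). The Kotlin sketch below assumes the per-panel normals have already been derived from the gyroscope and accelerometer data, which is outside the sketch; Vec3 and the function name are hypothetical, and DisplayArea is reused from the earlier sketch.

```kotlin
// Posture rule for a folded device lying on a desk: the panel whose outward normal
// points most strongly toward the user (approximated here as straight up) is the target.
data class Vec3(val x: Float, val y: Float, val z: Float) {
    infix fun dot(other: Vec3): Float = x * other.x + y * other.y + z * other.z
}

fun targetByPosture(
    firstAreaNormal: Vec3,
    secondAreaNormal: Vec3,
    towardUser: Vec3 = Vec3(0f, 0f, 1f)  // "up" as a stand-in for the user's direction
): DisplayArea =
    if ((firstAreaNormal dot towardUser) >= (secondAreaNormal dot towardUser)) DisplayArea.FIRST
    else DisplayArea.SECOND
```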
Third, determining a target display area based on user selection
When the display screen is in the folded state, for this way, the obtaining of the target trigger event may be: region selection information input by a user is received.
The user may input the region selection information in several ways. For example, the user may preset a selection gesture operation for each of the first display region and the second display region; when the foldable device detects that the user has performed the corresponding selection gesture on a display area, this indicates that the user currently wishes to use that display area. Other manners of inputting the area selection information may also be adopted, which is not particularly limited in the embodiments of the present disclosure. For example, one physical key may be provided for each of the two display areas: when the physical key of one display area is detected to be triggered, a trigger signal of the user is received and it is determined that the user has selected that display area. As another example, a single physical key may be provided for the two display areas: when one display area is lit and the physical key is detected to be triggered, a trigger signal of the user is received and it is determined that the user currently desires to use the other, unlit display area.
Accordingly, determining the target display area in the first display area and the second display area according to the target trigger event may be: and determining the display area indicated by the area selection information in the first display area and the second display area, and determining the display area as a target display area.
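A minimal Kotlin sketch of this selection path, covering the gesture and physical-key examples above; the SelectionEvent type and its cases are hypothetical, and DisplayArea is reused from the earlier sketch.

```kotlin
// User-selection rule: a preset gesture or a per-area physical key names the target directly;
// a key press while one area is lit selects the other, unlit area.
sealed interface SelectionEvent {
    data class GestureOn(val area: DisplayArea) : SelectionEvent
    data class KeyPressedFor(val area: DisplayArea) : SelectionEvent
    data class KeyWhileLit(val litArea: DisplayArea) : SelectionEvent
}

fun targetBySelection(event: SelectionEvent): DisplayArea = when (event) {
    is SelectionEvent.GestureOn -> event.area
    is SelectionEvent.KeyPressedFor -> event.area
    // the user wants the area that is currently not lit
    is SelectionEvent.KeyWhileLit ->
        if (event.litArea == DisplayArea.FIRST) DisplayArea.SECOND else DisplayArea.FIRST
}
```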
Fourth, determining a target display area based on the illuminance value
In this way, one ambient light sensor may be disposed inside each of the housings below the first display area and the second display area. Continuing with the example in which the housing 11 shown in fig. 1 includes the first housing and the second housing, one ambient light sensor is disposed inside the first housing and another inside the second housing.
When the display screen is in the folded state, for this way, the obtaining of the target trigger event may be: acquiring an illuminance value on the side of the first display area and an illuminance value on the side of the second display area.
Accordingly, determining the target display area in the first display area and the second display area according to the target trigger event may be: according to the acquired illuminance values, determining the display area with the higher illuminance in the first display area and the second display area, and determining it as the target display area.
That is, the foldable device acquires a first illuminance value on the side of the first display region and a second illuminance value on the side of the second display region, then takes the greater of the two, and thereby determines the display region whose ambient light is brighter as the one the user currently desires to use, as sketched below.
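Under the assumption that each ambient light sensor reports one lux value, the rule is again a simple comparison; a Kotlin sketch reusing the hypothetical DisplayArea enum:

```kotlin
// Illuminance rule: the side seeing brighter ambient light (larger lux reading from the
// ambient light sensor under that housing) is treated as the side the user faces.
fun targetByIlluminance(firstSideLux: Float, secondSideLux: Float): DisplayArea =
    if (firstSideLux >= secondSideLux) DisplayArea.FIRST else DisplayArea.SECOND
```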
Fifth, determining a target display area based on user's grip posture information
When the display screen is in the folded state, the embodiments of the present disclosure also support determining the region from the user's handheld state. For this way, the foldable device may acquire the target trigger event by acquiring holding state information while the user holds the foldable device; accordingly, determining the target display area in the first display area and the second display area according to the target trigger event may be: determining the target display area in the first display area and the second display area according to the holding state information.
The holding state information comes in several types, and the way the target display area is determined in the first display area and the second display area differs accordingly, as detailed below:
a. and determining a display area pressed by the thumb in the first display area and the second display area, and determining the display area as a target display area.
A user typically holds the foldable device with the thumb placed on the display area desired for use and the remaining four fingers placed on the back of that display area, i.e. on the other display area. The embodiments of the present disclosure therefore determine the display area pressed by the thumb as the target display area.
b. A display area with a large pressing pressure is determined in the first display area and the second display area, and the display area is determined as a target display area.
In this step, one pressure sensor is disposed in each of the casings below the first display area and the second display area. Continuing with the example in which the casing 11 shown in fig. 1 includes the first casing and the second casing, one pressure sensor is disposed in the first casing and another in the second casing.
Based on this structural arrangement, the embodiments of the present disclosure also support determining the target display area by detecting pressing pressure. Because the pressing pressure borne by the display area directly pressed by the thumb is greater than that borne by the display area on the other side, the two pressure sensors can respectively detect the pressing pressures borne by the two display areas, and the display area bearing the greater pressing pressure is determined as the target display area.
c. And determining a display area, in which the thumb is pressed at the designated position and the pressing pressure is greater than the threshold value, in the first display area and the second display area, and determining the display area as a target display area.
In addition to the above two ways, the embodiments of the present disclosure also support determining the target display area by combining multiple factors, such as pressure, position, and touch shape. For example, the determination rule may be that the thumb presses at a specified position and the pressing pressure is greater than a threshold value. The specified position may be the geometric center point of the display area, which is not specifically limited in this disclosure.
d. And determining a display area, which is pressed by a thumb on the side close to the user and the rest fingers on the side far from the user, in the first display area and the second display area, and determining the display area as a target display area.
This step may be implemented with a distance sensor and a pressure sensor in combination. As noted above, the user typically places the thumb on the display area desired for use while the remaining four fingers rest on the back of that display area, i.e. on the other display area, so this combination can also be used to determine the target display area; see the sketch below.
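The four holding-state rules (a) through (d) can be expressed together as one decision function. Combining them in a single priority order is a choice made for this sketch only; the disclosure presents them as alternatives. GripReading and the threshold parameter are hypothetical summaries of what the touch and pressure sensors would report, and DisplayArea is reused from the earlier sketch.

```kotlin
// Hypothetical per-side summary of the touch/pressure readings while the device is held.
data class GripReading(
    val thumbPressed: Boolean,        // thumb detected on this display area
    val thumbAtSpecifiedSpot: Boolean,// thumb at the specified position (e.g. geometric center)
    val pressure: Float,              // pressing pressure borne by this side
    val otherFingersOnBack: Boolean   // remaining fingers detected on the opposite side
)

fun targetByGrip(first: GripReading, second: GripReading, pressureThreshold: Float): DisplayArea? = when {
    // (a) the area pressed by the thumb
    first.thumbPressed && !second.thumbPressed -> DisplayArea.FIRST
    second.thumbPressed && !first.thumbPressed -> DisplayArea.SECOND
    // (c) thumb at the specified position and pressure above the threshold
    first.thumbAtSpecifiedSpot && first.pressure > pressureThreshold -> DisplayArea.FIRST
    second.thumbAtSpecifiedSpot && second.pressure > pressureThreshold -> DisplayArea.SECOND
    // (d) thumb on the near side, remaining fingers on the far side
    first.thumbPressed && first.otherFingersOnBack -> DisplayArea.FIRST
    second.thumbPressed && second.otherFingersOnBack -> DisplayArea.SECOND
    // (b) fall back to the side bearing the larger pressing pressure
    first.pressure > second.pressure -> DisplayArea.FIRST
    second.pressure > first.pressure -> DisplayArea.SECOND
    else -> null // ambiguous grip; no target determined
}
```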
Sixthly, determining a target display area based on face detection
The embodiment of the present disclosure also supports determining a target display area based on face detection, and for this way, acquiring a target trigger event may be: the number of faces on one side of the first display area is acquired, and the number of faces on one side of the second display area is acquired. In implementation, the number of faces on the side of the first display area and the number of faces on the side of the second display area may be acquired based on a camera, which is generally disposed on the second housing, as shown in fig. 1 and 2.
Accordingly, determining the target display area in the first display area and the second display area according to the target trigger event may be: when the number of the human faces on one side is zero and the number of the human faces on the other side is not zero, determining a display area on one side with the number of the human faces not being zero as a target display area; or when the number of the human faces on the two sides is not zero and the number of the human faces on one side is smaller than that of the human faces on the other side, determining the display area on one side with the small number of the human faces as the target display area.
The first situation, in which the number of faces on one side is zero and the number on the other side is not zero, generally corresponds to a self-shooting scene. To enable the self-shot, the foldable device automatically determines the display area on the side where faces are detected as the target display area, so that the user can complete the self-shot conveniently. The second situation, in which faces are detected on both sides and one side has fewer faces than the other, usually corresponds to shooting other people, for example one face on the photographer's side and several faces on the other side; a sketch of this rule follows.
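A Kotlin sketch of the face-count rule, assuming the per-side counts have already been obtained from the camera; equal non-zero counts and the all-zero case are not covered by the stated rule, so the sketch returns null for them. DisplayArea is reused from the earlier sketch.

```kotlin
// Face-count rule: the target is the side where the (fewer) faces are, per the two
// situations described above; null means the rule does not decide.
fun targetByFaceCount(facesOnFirstSide: Int, facesOnSecondSide: Int): DisplayArea? = when {
    facesOnFirstSide == 0 && facesOnSecondSide > 0 -> DisplayArea.SECOND // faces only on that side: self-shot case
    facesOnSecondSide == 0 && facesOnFirstSide > 0 -> DisplayArea.FIRST
    facesOnFirstSide > 0 && facesOnSecondSide > 0 -> when {
        facesOnFirstSide < facesOnSecondSide -> DisplayArea.FIRST       // photographer's side has fewer faces
        facesOnSecondSide < facesOnFirstSide -> DisplayArea.SECOND
        else -> null  // equal non-zero counts are not covered by the stated rule
    }
    else -> null      // no faces detected on either side
}
```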
In step 402, the foldable device performs a specific control operation based on the target display area.
In the disclosed embodiment, after determining the display area that the user desires to use, the foldable device typically performs two control operations, one of which is to light the display area or switch the display area that should be lit, and the other is to adjust the imaging parameters.
For the case of adjusting the imaging parameters, the specific control operations performed based on the target display area include, but are not limited to, the following. When the display area on the side where the number of faces is not zero is determined as the target display area, shooting is performed based on the front shooting parameters: this corresponds to a self-shooting scene, so the current shooting parameters are adjusted to the front shooting parameters. When the display area on the side with the smaller number of faces is determined as the target display area, shooting is performed based on the rear shooting parameters: this corresponds to shooting other people, so the current shooting parameters are adjusted to the rear shooting parameters.
For the case of lighting a display area or switching which display area should be lit, the specific control operations performed based on the target display area include, but are not limited to, the following. One option is to light up the target display area: this corresponds to the whole display screen being off, and after the target display area is determined from the two display areas, it is directly lit.
The other option is to light up the target display area and perform a lighting cancellation operation on the other display area. This corresponds to the case where one display area is currently lit: after automatically determining that the user now desires to use the other display area, the foldable device performs a screen-off operation on the currently lit display area to cancel its lighting, and automatically performs the lighting operation on the other display area that is currently off. A sketch of these control operations follows.
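A Kotlin sketch of step 402, tying the two families of control operations together. ScreenController, ShootingParams, and the callback are hypothetical abstractions introduced for the sketch, not a real device API; DisplayArea is reused from the earlier sketch.

```kotlin
// Step 402 sketch: light the target area, cancel lighting on the other area, and,
// in a shooting scenario, switch between front and rear shooting parameters.
interface ScreenController {
    fun lightUp(area: DisplayArea)
    fun cancelLighting(area: DisplayArea)
}

enum class ShootingParams { FRONT, REAR }

fun performControl(
    target: DisplayArea,
    screen: ScreenController,
    applyShootingParams: ((ShootingParams) -> Unit)? = null,  // null when no camera is involved
    selfieScene: Boolean = false
) {
    val other = if (target == DisplayArea.FIRST) DisplayArea.SECOND else DisplayArea.FIRST
    screen.lightUp(target)        // light the area the user is expected to use
    screen.cancelLighting(other)  // and turn the other one off
    // Front parameters for the self-shooting case, rear parameters when shooting others.
    applyShootingParams?.invoke(if (selfieScene) ShootingParams.FRONT else ShootingParams.REAR)
}
```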
According to the method provided by the embodiments of the disclosure, when the display screen is in the folded state, the display area the user currently intends to use can be automatically determined from the acquired trigger event and the related control operation can be automatically performed based on that determination, which noticeably improves the user experience, enriches the functions of the foldable device, and achieves a better effect.
Fig. 5 is a block diagram illustrating a display region-based operation execution apparatus applied to a foldable device whose display screen includes a first display region and a second display region demarcated by a rotation axis according to an exemplary embodiment. Referring to fig. 5, the apparatus includes an obtaining module 501, a determining module 502, and an executing module 503.
An obtaining module 501 configured to obtain a target trigger event when the display screen is in a folded state; a determining module 502 configured to determine a target display area in the first display area and the second display area according to the target trigger event, where the target display area is a display area that a user desires to use; an execution module 503 configured to execute a specific control operation based on the target display area.
The device provided by the embodiments of the disclosure can, when the display screen is in a folded state, automatically determine from the acquired trigger event which display area the user currently intends to use and automatically perform the related control operation accordingly, which noticeably improves the user experience, enriches the functions of the foldable device, and achieves a better effect.
In a possible implementation manner, the obtaining module 501 is further configured to obtain distance values between the first display area and the user and between the second display area and the user;
the determining module 502 is further configured to determine, according to the obtained distance value, a display area close to the user in the first display area and the second display area, and determine the display area as the target display area.
In a possible implementation manner, the obtaining module 501 is further configured to obtain posture information of the foldable device;
a determining module 502, further configured to determine, according to the posture information, a display area facing the user in the first display area and the second display area, and determine the display area as the target display area.
In a possible implementation manner, the obtaining module 501 is further configured to receive the area selection information input by the user;
the determining module 502 is further configured to determine, in the first display area and the second display area, a display area indicated by the area selection information, and determine the display area as the target display area.
In a possible implementation manner, the obtaining module 501 is further configured to obtain an illuminance value on the side of the first display area and an illuminance value on the side of the second display area;
the determining module 502 is further configured to determine, according to the acquired illuminance values, a display area with the higher illuminance in the first display area and the second display area, and determine the display area as the target display area.
In a possible implementation manner, the obtaining module 501 is further configured to obtain holding state information of the user holding the foldable device;
a determining module 502, further configured to determine the target display area in the first display area and the second display area according to the holding state information.
In a possible implementation, the determining module 502 is further configured to determine a display area pressed by a thumb in the first display area and the second display area, and determine the display area as the target display area; or determine a display area with the larger pressing pressure in the first display area and the second display area, and determine the display area as the target display area; or determine, in the first display area and the second display area, a display area in which a thumb is pressed at a specified position and the pressing pressure is greater than a threshold value, and determine the display area as the target display area; or determine, in the first display area and the second display area, the display area with the thumb pressed on the side close to the user and the remaining fingers pressed on the side far from the user, and determine the display area as the target display area.
In a possible implementation manner, the obtaining module 501 is further configured to obtain the number of faces located on one side of the first display area, and obtain the number of faces located on one side of the second display area;
a determining module 502, further configured to determine, as the target display area, a side display area where the number of faces is not zero when the number of faces on one side is zero and the number of faces on the other side is not zero; or when the number of the human faces on the two sides is not zero and the number of the human faces on one side is smaller than that of the human faces on the other side, determining the display area on one side with the small number of the human faces as the target display area.
In a possible implementation manner, the execution module 503 is further configured to adjust the current shooting parameters to front shooting parameters when the display area on the side where the number of faces is not zero is determined as the target display area; and adjust the current shooting parameters to rear shooting parameters when the display area on the side with the smaller number of faces is determined as the target display area.
In a possible implementation, the execution module 503 is further configured to illuminate the target display area; or, the target display area is lighted, and a lighting cancellation operation is performed on another display area except the target display area.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 6 is a block diagram illustrating a display area-based operation performing apparatus 600 according to an exemplary embodiment. For example, the apparatus 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, apparatus 600 may include one or more of the following components: processing component 602, memory 604, power component 606, multimedia component 608, audio component 610, input/output (I/O) interface 612, sensor component 614, and communication component 616.
The processing component 602 generally controls overall operation of the device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the apparatus 600. Examples of such data include instructions for any application or method operating on device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The Memory 604 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as an SRAM (Static Random Access Memory), an EEPROM (Electrically-Erasable Programmable Read-Only Memory), an EPROM (Erasable Programmable Read-Only Memory), a PROM (Programmable Read-Only Memory), a ROM (Read-Only Memory), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
Power supply component 606 provides power to the various components of device 600. The power components 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 600.
The multimedia component 608 includes a screen that provides an output interface between the apparatus 600 and the user. In some embodiments, the screen may include an LCD (Liquid Crystal Display) and a TP (Touch Panel). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 600 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
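As a concrete point of reference (not taken from the disclosure), distinguishing a front-facing from a rear-facing camera on Android can be done with the camera2 API. The calls below are standard Android APIs; how a particular foldable maps its cameras onto lens facings, and how "front shooting parameters" are then applied, remain device-specific assumptions.

import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import android.hardware.camera2.CameraMetadata

// Returns the id of a front- or rear-facing camera, or null if none is reported.
fun findCameraId(context: Context, wantFront: Boolean): String? {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    val wantedFacing =
        if (wantFront) CameraMetadata.LENS_FACING_FRONT else CameraMetadata.LENS_FACING_BACK
    return manager.cameraIdList.firstOrNull { id ->
        manager.getCameraCharacteristics(id).get(CameraCharacteristics.LENS_FACING) == wantedFacing
    }
}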
The audio component 610 is configured to output and/or input audio signals. For example, audio component 610 includes a MIC (microphone) configured to receive external audio signals when apparatus 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing status assessments of various aspects of the apparatus 600. For example, the sensor component 614 may detect an open/closed state of the apparatus 600 and the relative positioning of components, such as the display and keypad of the apparatus 600; the sensor component 614 may also detect a change in position of the apparatus 600 or of a component of the apparatus 600, the presence or absence of user contact with the apparatus 600, the orientation or acceleration/deceleration of the apparatus 600, and a change in temperature of the apparatus 600. The sensor component 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS (Complementary Metal-Oxide-Semiconductor) or CCD (Charge-Coupled Device) image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
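To make the light-sensor role concrete, the sketch below registers Android's default ambient light sensor and reports readings in lux. These SensorManager calls are standard Android APIs, but a device that compares illuminance on the two display-area sides would need a per-side sensor arrangement, which is a device-specific assumption and not part of this sketch.

import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Registers a listener on the default ambient light sensor and forwards lux readings.
// Returns the listener so the caller can unregister it later, or null if no light sensor exists.
fun watchAmbientLight(context: Context, onLux: (Float) -> Unit): SensorEventListener? {
    val sensorManager = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    val lightSensor = sensorManager.getDefaultSensor(Sensor.TYPE_LIGHT) ?: return null
    val listener = object : SensorEventListener {
        override fun onSensorChanged(event: SensorEvent) = onLux(event.values[0]) // lux
        override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
    }
    sensorManager.registerListener(listener, lightSensor, SensorManager.SENSOR_DELAY_NORMAL)
    return listener
}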
The communication component 616 is configured to facilitate communications between the apparatus 600 and other devices in a wired or wireless manner. The apparatus 600 may access a wireless network based on a communication standard, such as Wi-Fi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the Communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications.
In an exemplary embodiment, the apparatus 600 may be implemented by one or more ASICs (Application-Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), FPGAs (Field-Programmable Gate Arrays), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above-described display area-based operation execution method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 604 including instructions, is also provided; the instructions are executable by the processor 620 of the apparatus 600 to perform the above-described display area-based operation execution method. For example, the non-transitory computer-readable storage medium may be a ROM, a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (22)

1. An operation execution method based on a display area, the method being applied to a foldable device whose display screen includes a first display area and a second display area demarcated by a rotation axis, the method comprising:
when the display screen is in a folded state, acquiring a target trigger event;
determining a target display area in the first display area and the second display area according to the target trigger event, wherein the target display area is a display area expected to be used by a user;
performing a specific control operation based on the target display area.
2. The method of claim 1, wherein obtaining the target trigger event comprises: respectively acquiring distance values between the first display area and the user and between the second display area and the user;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
determining, according to the obtained distance values, the display area closer to the user in the first display area and the second display area, and determining that display area as the target display area.
3. The method of claim 1, wherein obtaining the target trigger event comprises: acquiring attitude information of the foldable equipment;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
determining, according to the posture information, the display area facing the user in the first display area and the second display area, and determining that display area as the target display area.
4. The method of claim 1, wherein obtaining the target trigger event comprises: receiving area selection information input by the user;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
determining, in the first display area and the second display area, the display area indicated by the area selection information, and determining that display area as the target display area.
5. The method of claim 1, wherein obtaining the target trigger event comprises: acquiring an illuminance value on the first display area side and an illuminance value on the second display area side;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
determining, according to the acquired illuminance values, the display area with the higher illuminance in the first display area and the second display area, and determining that display area as the target display area.
6. The method of claim 1, wherein obtaining the target trigger event comprises: acquiring holding state information of the foldable equipment held by the user;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
determining the target display area in the first display area and the second display area according to the holding state information.
7. The method according to claim 6, wherein the determining the target display area in the first display area and the second display area according to the holding state information comprises:
determining, in the first display area and the second display area, the display area pressed by a thumb, and determining that display area as the target display area; or,
determining, in the first display area and the second display area, the display area with the greater pressing pressure, and determining that display area as the target display area; or,
determining, in the first display area and the second display area, the display area in which a thumb presses at a specified position and the pressing pressure is greater than a threshold value, and determining that display area as the target display area; or,
determining, in the first display area and the second display area, the display area in which a thumb presses on the side close to the user and the remaining fingers press on the side away from the user, and determining that display area as the target display area.
8. The method of claim 1, wherein obtaining the target trigger event comprises: acquiring the number of human faces positioned on one side of the first display area and the number of human faces positioned on one side of the second display area;
determining a target display area in the first display area and the second display area according to the target trigger event, including:
when the number of faces on one side is zero and the number of faces on the other side is not zero, determining the display area on the side where the number of faces is not zero as the target display area; or,
when the numbers of faces on both sides are not zero and the number of faces on one side is smaller than that on the other side, determining the display area on the side with fewer faces as the target display area.
9. The method of claim 8, wherein performing a particular control operation based on the target display area comprises:
when the display area on the side where the number of faces is not zero is determined as the target display area, adjusting the current shooting parameters to front shooting parameters;
and when the display area on the side with fewer faces is determined as the target display area, adjusting the current shooting parameters to rear shooting parameters.
10. The method of claim 1, wherein performing a particular control operation based on the target display area comprises:
lighting up the target display area; or,
lighting up the target display area and performing a lighting cancellation operation on the display area other than the target display area.
11. An operation execution apparatus based on a display area, the apparatus being applied to a foldable device whose display screen includes a first display area and a second display area demarcated by a rotation axis, the apparatus comprising:
an acquisition module configured to acquire a target trigger event when the display screen is in a folded state;
a determining module configured to determine a target display area in the first display area and the second display area according to the target trigger event, wherein the target display area is a display area desired to be used by a user;
an execution module configured to execute a specific control operation based on the target display area.
12. The apparatus according to claim 11, wherein the obtaining module is further configured to obtain distance values between the first display area and the user and between the second display area and the user, respectively;
the determining module is further configured to determine, according to the obtained distance values, the display area closer to the user in the first display area and the second display area, and determine that display area as the target display area.
13. The apparatus of claim 11, wherein the obtaining module is further configured to obtain pose information of the foldable device;
the determination module is further configured to determine a display area facing the user in the first display area and the second display area according to the posture information, and determine the display area as the target display area.
14. The apparatus of claim 11, wherein the obtaining module is further configured to receive the area selection information input by the user;
the determination module is further configured to determine, in the first display area and the second display area, a display area indicated by the area selection information, and determine the display area as the target display area.
15. The apparatus according to claim 11, wherein the obtaining module is further configured to obtain an illuminance value on the first display area side and an illuminance value on the second display area side;
the determining module is further configured to determine, according to the acquired illuminance values, the display area with the higher illuminance in the first display area and the second display area, and determine that display area as the target display area.
16. The apparatus according to claim 11, wherein the obtaining module is further configured to obtain holding state information of the user holding the foldable device;
the determining module is further configured to determine the target display area in the first display area and the second display area according to the holding state information.
17. The apparatus of claim 16, wherein the determining module is further configured to determine, in the first display area and the second display area, the display area pressed by a thumb, and determine that display area as the target display area; or to determine, in the first display area and the second display area, the display area with the greater pressing pressure, and determine that display area as the target display area; or to determine, in the first display area and the second display area, the display area in which a thumb presses at a specified position and the pressing pressure is greater than a threshold value, and determine that display area as the target display area; or to determine, in the first display area and the second display area, the display area in which a thumb presses on the side close to the user and the remaining fingers press on the side away from the user, and determine that display area as the target display area.
18. The apparatus according to claim 11, wherein the obtaining module is further configured to obtain the number of faces on one side of the first display area and obtain the number of faces on one side of the second display area;
the determining module is further configured to determine, when the number of faces on one side is zero and the number of faces on the other side is not zero, the display area on the side where the number of faces is not zero as the target display area; or, when the numbers of faces on both sides are not zero and the number of faces on one side is smaller than that on the other side, to determine the display area on the side with fewer faces as the target display area.
19. The apparatus according to claim 18, wherein the execution module is further configured to adjust the current shooting parameters to front shooting parameters when the display area on the side where the number of faces is not zero is determined as the target display area, and to adjust the current shooting parameters to rear shooting parameters when the display area on the side with fewer faces is determined as the target display area.
20. The apparatus of claim 11, wherein the execution module is further configured to light up the target display area; or to light up the target display area and perform a lighting cancellation operation on the display area other than the target display area.
21. An operation execution apparatus based on a display area, the apparatus being applied to a foldable device whose display screen includes a first display area and a second display area demarcated by a rotation axis, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: when the display screen is in a folded state, acquiring a target trigger event; determining a target display area in the first display area and the second display area according to the target trigger event, wherein the target display area is a display area expected to be used by a user; performing a specific control operation based on the target display area.
22. A storage medium having stored thereon computer program instructions, which when executed by a processor, implement the display area based operation execution method of any one of claims 1 to 10.
CN201811456594.1A 2018-11-30 2018-11-30 Operation execution method and device based on display area and storage medium Pending CN111258480A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811456594.1A CN111258480A (en) 2018-11-30 2018-11-30 Operation execution method and device based on display area and storage medium

Publications (1)

Publication Number Publication Date
CN111258480A true CN111258480A (en) 2020-06-09

Family

ID=70955236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811456594.1A Pending CN111258480A (en) 2018-11-30 2018-11-30 Operation execution method and device based on display area and storage medium

Country Status (1)

Country Link
CN (1) CN111258480A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100185981A1 (en) * 2009-01-21 2010-07-22 Nintendo Co., Ltd. Display controlling program and display controlling apparatus
CN103365393A (en) * 2012-03-27 2013-10-23 联想(北京)有限公司 Display method and electronic device
CN105404466A (en) * 2015-12-11 2016-03-16 联想(北京)有限公司 Electronic device
CN106488130A (en) * 2016-11-15 2017-03-08 上海斐讯数据通信技术有限公司 A kind of screening-mode changing method and its switched system
CN107273016A (en) * 2017-05-27 2017-10-20 青岛海信移动通信技术股份有限公司 The screen awakening method and device of double screen terminal
CN107766022A (en) * 2017-10-19 2018-03-06 广东欧珀移动通信有限公司 A kind of picture display process, device and storage medium
CN107770312A (en) * 2017-11-07 2018-03-06 广东欧珀移动通信有限公司 Method for information display, device and terminal
CN107831999A (en) * 2017-11-07 2018-03-23 广东欧珀移动通信有限公司 Screen control method, device and terminal

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220083189A1 (en) * 2019-05-24 2022-03-17 Vivo Mobile Communication Co., Ltd. Display method and terminal device
US11625164B2 (en) * 2019-05-24 2023-04-11 Vivo Mobile Communication Co., Ltd. Display method and terminal device

Similar Documents

Publication Publication Date Title
RU2630189C1 (en) Method of controlling button functions in one-hand operation mode, device and electronic device
US20170031557A1 (en) Method and apparatus for adjusting shooting function
US20170060320A1 (en) Method for controlling a mobile terminal using a side touch panel
EP3249514A1 (en) Method and device for determining operation mode of terminal
EP3299946B1 (en) Method and device for switching environment picture
CN111600998A (en) Display screen control method and device, terminal equipment and storage medium
EP3176686A1 (en) Method and device for operating user interface object
JP2016139947A (en) Portable terminal
US10042328B2 (en) Alarm setting method and apparatus, and storage medium
CN107885418B (en) Terminal, split screen display method and device
US10705729B2 (en) Touch control method and apparatus for function key, and storage medium
CN111752465A (en) Method, device and storage medium for preventing edge false touch
CN104216525A (en) Method and device for mode control of camera application
CN113539192A (en) Ambient light detection method and apparatus, electronic device, and storage medium
CN112905136A (en) Screen projection control method and device and storage medium
US11062640B2 (en) Screen display method and screen display device
CN112230827B (en) Interactive interface switching method and device and electronic equipment
EP3629560A1 (en) Full screen terminal, and operation control method and device based on full screen terminal
CN111258480A (en) Operation execution method and device based on display area and storage medium
EP3731078A1 (en) Method and apparatus for responding to gesture operation and storage medium
CN110381213B (en) Screen display method and device, mobile terminal and storage medium
CN111506207B (en) Method and device for determining lighting area during on-screen fingerprint identification
CN111262989B (en) Notification reminding method and device and storage medium
CN111756985A (en) Image shooting method, device and storage medium
CN112905027A (en) Timing method and device, mobile terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200609