CN107888769B - Scene mode starting method and device and terminal - Google Patents

Info

Publication number: CN107888769B
Authority: CN (China)
Prior art keywords: terminal, preset, posture, body posture, starting
Legal status: Active
Application number: CN201711081550.0A
Other languages: Chinese (zh)
Other versions: CN107888769A (en)
Inventor: 舒茂非 (Shu Maofei)
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711081550.0A
Publication of CN107888769A (application)
Application granted
Publication of CN107888769B (grant)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • H04M 1/02 Constructional features of telephone sets
    • H04M 1/0202 Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M 1/026 Details of the structure or mounting of specific components
    • H04M 1/0266 Details of the structure or mounting of specific components for a display module assembly
    • H04M 1/0268 Details of the structure or mounting of specific components for a display module assembly including a flexible display panel
    • H04M 1/72457 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to geographic location
    • H04M 1/72484 User interfaces specially adapted for cordless or mobile telephones wherein functions are triggered by incoming communication events

Abstract

The application discloses a scene mode starting method, a scene mode starting device and a terminal. The method is applied to a terminal with a foldable display screen and comprises the following steps: acquiring the folding angle of the foldable display screen; when the folding angle belongs to a preset angle range, determining whether the body posture of the terminal conforms to a preset body posture; when the body posture of the terminal conforms to the preset body posture, detecting whether a starting condition of a scene mode is met; and when the starting condition of the scene mode is met, starting the scene mode. Because the method only decides whether a starting condition is met after the folding angle falls within the preset angle range and the body posture conforms to the preset body posture, the folding angle, the body posture and the starting condition are combined to identify the user intention accurately, which improves the accuracy of automatically starting a scene mode.

Description

Scene mode starting method and device and terminal
Technical Field
The embodiment of the application relates to the field of human-computer interaction, in particular to a scene mode starting method, a scene mode starting device and a terminal.
Background
A smart phone is provided with different scene modes, which include at least one of a conference mode, a do-not-disturb mode, and a mute mode.
In the related art, the smart phone automatically starts a preset mode according to time information and/or location information. For example, the do-not-disturb mode is entered between 11:00 p.m. and 6:00 a.m. As another example, the conference mode is automatically entered when the current geographic location is a meeting room.
Since the switching condition only considers time information and/or location information, the existing scene mode starting method is not accurate enough, and a scene mode is often started by mistake when the user does not need it.
Disclosure of Invention
The embodiment of the application provides a scene mode starting method, a scene mode starting device and a terminal, which can solve the problem that the starting accuracy of the existing scene mode starting method is insufficient. The technical scheme is as follows:
according to a first aspect of the present application, there is provided a scene mode starting method applied in a terminal having a foldable display screen, the method including:
acquiring the folding angle of the folding display screen;
when the folding angle belongs to a preset angle range, determining whether the body posture of the terminal accords with a preset body posture;
when the body posture of the terminal accords with the preset body posture, detecting whether a starting condition of a scene mode is met;
and when the starting condition of the scene mode is met, starting the scene mode.
According to a second aspect of the present application, there is provided a scene mode activation device having a foldable display screen, the device comprising:
an angle determination module configured to obtain a folding angle of the folding display screen;
a posture determination module configured to determine whether a body posture of the apparatus conforms to a preset body posture when the folding angle belongs to a preset angle range;
a condition judgment module configured to detect whether a starting condition of a scene mode is satisfied when a body posture of the apparatus conforms to the preset body posture;
a mode starting module configured to start the scene mode when a starting condition of the scene mode is satisfied.
According to a third aspect of the present application, there is provided a terminal comprising a processor and a memory, the terminal having a foldable display screen, the memory having at least one instruction stored therein, the instruction being loaded and executed by the processor to implement the scene mode starting method according to the first aspect.
According to a fourth aspect of the present application, there is provided a computer-readable storage medium having at least one instruction stored therein, the instruction being loaded and executed by a processor to implement the scene mode startup method as described in the first aspect above.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
When a user uses a terminal with a foldable display screen, the user is accustomed to placing the foldable display screen on a plane in a half-open state before entering certain scene modes. Therefore, the terminal judges the folding angle and the body posture, and only decides whether the starting condition of a scene mode is met when the folding angle belongs to the preset angle range and the body posture conforms to the preset body posture, that is, when the foldable display screen is in the half-open state. By combining the folding angle, the body posture and the starting condition, the terminal can accurately identify the user intention, which improves the starting accuracy when a scene mode is started automatically.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a block diagram of a terminal according to an exemplary embodiment of the present application;
fig. 2 is a block diagram of a terminal according to another exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a handset provided in an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a handset provided in accordance with another exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a handset provided in accordance with another exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of a handset provided in accordance with another exemplary embodiment of the present application;
FIG. 7 is a flowchart of a method for initiating a scene mode provided by an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of a terminal in a vertical position according to an exemplary embodiment of the present application;
FIG. 9 is a schematic diagram of a terminal in an A-shaped pose as provided by an exemplary embodiment of the present application;
FIG. 10 is a flowchart of a method for initiating a scene mode provided by another exemplary embodiment of the present application;
FIG. 11 is a flowchart of a method for initiating a scene mode according to another exemplary embodiment of the present application;
FIG. 12 is a schematic illustration of a coordinate system of a fuselage provided by an exemplary embodiment of the present application;
FIG. 13 is a schematic diagram illustrating a scenario mode initiation method according to an exemplary embodiment of the present application;
FIG. 14 is a flowchart illustrating portions of steps of a method for initiating a scene mode according to an exemplary embodiment of the present application;
FIG. 15 is a block diagram of a scene mode activation device provided in an exemplary embodiment of the present application;
fig. 16 is a block diagram of a terminal provided in an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Reference herein to a "module" generally refers to a program or instructions, stored in memory, that can perform certain functions; reference herein to a "unit" generally refers to a logically partitioned functional structure, and the "unit" may be implemented by pure hardware or by a combination of hardware and software.
Reference herein to "a plurality" means two or more. "And/or" describes the association relationship of the associated objects and means that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Referring to fig. 1 and 2, a block diagram of a terminal 100 according to an exemplary embodiment of the present application is shown. The terminal 100 may be a smart phone, a tablet computer, an e-book, and the like. The terminal 100 in the present application may include one or more of the following components: a processor 110, a memory 120, and a touch display screen 130.
Processor 110 may include one or more processing cores. The processor 110 connects various parts within the overall terminal 100 using various interfaces and lines, and performs the various functions of the terminal 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is used for rendering and drawing the content to be displayed by the touch display screen 130; and the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 110, but may instead be implemented by a separate chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described below, and the like; the data storage area may store data (such as audio data or a phonebook) created according to the use of the terminal 100, and the like.
Taking an operating system that is an Android system as an example, programs and data stored in the memory 120 are shown in fig. 1: a Linux kernel layer 220, a system runtime library layer 240, an application framework layer 260, and an application layer 280 are stored in the memory 120. The Linux kernel layer 220 provides underlying drivers for the various hardware of the terminal 100, such as a display driver, an audio driver, a camera driver, a Bluetooth driver, a Wi-Fi driver, power management, and the like. The system runtime library layer 240 provides the main feature support for the Android system through a number of C/C++ libraries. For example, the SQLite library provides database support, the OpenGL/ES library provides 3D drawing support, the Webkit library provides browser kernel support, and the like. The system runtime library layer 240 also provides the Android Runtime, which mainly provides core libraries that allow developers to write Android applications using the Java language. The application framework layer 260 provides various APIs that may be used in building applications, such as activity management, window management, view management, notification management, content providers, package management, session management, resource management, and location management, and developers may build their own applications by using these APIs. At least one application program runs in the application layer 280; the application programs may be programs that come with the operating system, such as a contacts program, a short message program, a clock program, or a camera application, or application programs developed by third-party developers, such as an instant messaging program or a photo beautification program.
Taking an operating system that is an iOS system as an example, programs and data stored in the memory 120 are shown in fig. 2. The iOS system includes: a Core OS Layer 320, a Core Services Layer 340, a Media Layer 360, and a Cocoa Touch Layer 380. The Core OS layer 320 includes the operating system kernel, drivers, and underlying program frameworks, which provide functionality closer to the hardware for use by the program frameworks located in the Core Services layer 340. The Core Services layer 340 provides the system services and/or program frameworks needed by applications, such as a Foundation framework, an account framework, an advertisement framework, a data storage framework, a network connection framework, a geographic location framework, a motion framework, and so forth. The Media layer 360 provides audiovisual interfaces for applications, such as graphics-related interfaces, audio-related interfaces, video-related interfaces, and the AirPlay interface for wireless audio/video transmission. The Cocoa Touch layer 380 provides various common interface-related frameworks for application development and is responsible for user touch interaction operations on the terminal 100, such as a local notification service, a remote push service, an advertising framework, a game tool framework, a messaging User Interface (UI) framework, a UIKit framework, a map framework, and so forth.
In the framework shown in fig. 2, the frameworks related to most applications include, but are not limited to: the Foundation framework in the Core Services layer 340 and the UIKit framework in the Cocoa Touch layer 380. The Foundation framework provides many basic object classes and data types and provides the most basic system services for all applications, independently of the UI. The classes provided by the UIKit framework form a basic UI class library for creating touch-based user interfaces; iOS applications can provide UIs based on the UIKit framework, so it provides the infrastructure for applications to build user interfaces, draw, handle user interaction events, respond to gestures, and the like.
The touch display screen 130 is used for receiving a touch operation of a user on or near the touch display screen using any suitable object such as a finger, a touch pen, or the like, and displaying a user interface of each application program. The touch display 130 is generally provided at a front panel of the terminal 100.
As shown in fig. 3, the terminal 100 includes a first housing 41, a second housing 42, and a connecting assembly 43 connected between the first housing 41 and the second housing 42, and the first housing 41 and the second housing 42 can be folded relative to each other by means of the connecting assembly 43.
The first housing 41 includes a first supporting surface connected to the back surface of the touch display screen and a first back surface opposite to the first supporting surface, and the second housing 42 includes a second supporting surface connected to the back surface of the touch display screen and a second back surface opposite to the second supporting surface. Accordingly, the touch display screen includes a first screen area 131, a second screen area 132 and a third display area 133, wherein the first screen area 131 corresponds to the position of the first housing 41, the second screen area 132 corresponds to the position of the second housing 42, and the third display area 133 corresponds to the position of the connecting assembly 43. In one implementation, the first screen region 131, the second screen region 132 and the third display region 133 are made of flexible materials and have a certain expansion ductility; in another implementation, only the third display region 133 is made of a flexible material, and the first and second screen regions 131 and 132 are made of a non-flexible material.
In an alternative implementation, the connection assembly 43 of the terminal 100 is of a manual construction. When the user manually separates the first casing 41 and the second casing 42, the terminal 100 is changed from the folded state to the unfolded state; when the user manually closes the first housing 41 and the second housing 42, the terminal 100 changes from the unfolded state to the folded state.
In an alternative implementation, the connection assembly 43 of the terminal 100 is of an electric construction, for example, the connection assembly 43 is provided with an electric rotating member such as an electric motor. Under the driving of the electric rotating component, the first housing 41 and the second housing 42 automatically close or separate, so that the terminal 100 has two states of unfolding and folding.
The terminal 100 may be divided into an outer folder terminal and an inner folder terminal according to whether the touch display screen is exposed in a folded state. Wherein:
external folding screen terminal
An outer folding screen terminal is a terminal whose foldable angle is 180° and whose touch display screen is completely exposed in the folded state. As shown in fig. 3, the terminal 100 is an outer folding screen terminal. In the unfolded state, the first supporting surface of the first housing 41 of the terminal 100 is flush with the second supporting surface of the second housing 42 (i.e. the included angle is 180°), and the first screen area 131, the second screen area 132 and the third display area 133 of the touch display screen are located in the same plane. In the process of the terminal 100 changing from the unfolded state to the folded state, as shown in fig. 3, the first back surface of the first housing 41 and the second back surface of the second housing 42 approach each other, and the included angle between the first supporting surface and the second supporting surface changes from 180° to 0°. In the folded state, as shown in fig. 4, the first supporting surface of the first housing 41 of the terminal 100 is parallel to the second supporting surface of the second housing 42 (the included angle between the first housing 41 and the second housing 42 is 0°), so that the touch display screen is in a U-shaped folded state in which the third display area 133 of the touch display screen forms an exposed U-shaped arc surface.
In an alternative implementation, in the folded state, all or part of the display area of the touch display screen is used for displaying the user interface. For example, as shown in fig. 4, in the folded state, only the second screen region 132 is used to display the user interface, or only the third display region 133 is used to display the user interface.
Terminal with internal folding screen
An inner folding screen terminal is a terminal whose foldable angle is 180° and whose touch display screen is entirely or partially hidden in the folded state. As shown in fig. 5, the terminal 100 is an inner folding screen terminal. In the unfolded state, the first supporting surface of the first housing 41 of the terminal 100 is flush with the second supporting surface of the second housing 42 (i.e. the included angle is 180°), so that the touch display screen is in a planar unfolded state (the first screen area 131, the second screen area 132 and the third display area 133 are located in the same plane). In the process of the terminal 100 changing from the unfolded state to the folded state, as shown in fig. 5, the first supporting surface of the first housing 41 and the second supporting surface of the second housing 42 approach each other, that is, the included angle between the first supporting surface and the second supporting surface changes from 180° to 0°. In the folded state, the first supporting surface of the first housing 41 of the terminal 100 is parallel to the second supporting surface of the second housing 42, so that the touch display screen is in a U-shaped folded state in which the third display area 133 of the touch display screen forms a U-shaped arc surface that is folded inward. In an alternative implementation, in the folded state, no user interface is displayed in any display area of the touch display screen.
In addition to providing a touch display screen on the support surface of the housing, a touch display screen may also be provided on the first back surface of the first housing 41 and/or the second back surface of the second housing 42. When the inner folding screen terminal is in a folding state, the touch display screen arranged on the back face of the shell is used for displaying a user interface, and the user interface is the same as or different from the user interface displayed by the touch display screen in an unfolding state.
In other possible implementations, the foldable angle of the terminal 100 may also be 360 ° (both inward and outward), and in the folded state, the touch display screen is exposed or converged on the terminal, which is not limited in this embodiment.
In the terminal 100 shown in fig. 3 to 5, the first housing 41 and the second housing 42 have the same or similar sizes, and this folding manner of the terminal 100 is called symmetrical folding. In other possible implementations, the terminal 100 may also be folded asymmetrically. When asymmetric folding is adopted, the sizes of the first housing 41 and the second housing 42 differ, or differ by more than a threshold (for example 50%, 60% or 70%), and accordingly the area of the first screen region 131 and the area of the second screen region 132 in the touch display screen differ by more than a threshold.
Illustratively, as shown in fig. 6, the terminal 100 is an asymmetrically folded outer folded screen terminal, and the first housing 41 has a size larger than that of the second housing 42. In the folded state, the area of the first screen region 131 is larger than the area of the second screen region 132.
Fig. 3 to 6 only take as an example a terminal 100 that includes two housing parts and one connecting assembly connecting them (i.e. a two-fold structure). In other possible implementations, the terminal 100 may include n housing parts and n-1 connecting assemblies; accordingly, the touch display screen of the terminal 100 includes 2n-1 display areas, and the n-1 display areas corresponding to the connecting assemblies are made of a flexible material, so as to implement a terminal with an n-fold structure, which is not limited in this embodiment.
At least one other component is also provided in the terminal 100, the at least one other component including: cameras, fingerprint sensors, proximity light sensors, distance sensors, etc. In some embodiments, at least one other component is disposed on the front, side, or back of the terminal 100, such as a fingerprint sensor disposed on the back or side of the housing, a camera disposed above the touch display screen 130.
In other embodiments, at least one other component may be integrated within or below the touch screen display 130. In some embodiments, a bone conduction earpiece is provided inside the terminal 100; other components on the front panel of the conventional terminal are integrated in the whole area or a partial area of the touch display screen 130, for example, after a photosensitive element in the camera is split into a plurality of photosensitive pixels, each photosensitive pixel is integrated in a black area in each display pixel in the touch display screen 130, so that the touch display screen 130 has an image capturing function. The terminal 100 has a higher screen occupation ratio due to the integration of at least one other component within or under the touch display screen 130.
In some optional embodiments, an edge touch sensor is disposed on a single side, or two sides (e.g., two left and right sides), or four sides (e.g., four upper, lower, left and right sides) of the middle frame of the terminal 100, and is configured to detect at least one of a touch operation, a click operation, a press operation, a slide operation, and the like of a user on the middle frame. The edge touch sensor may be any one of a touch sensor, a thermal sensor, a pressure sensor, and the like. The user may apply operations on the edge touch sensor to control the application in the terminal 100.
In addition, those skilled in the art will appreciate that the configuration of the terminal 100 illustrated in the above figures does not limit the terminal 100; a terminal may include more or fewer components than those illustrated, or combine some components, or use a different arrangement of components. For example, the terminal 100 further includes components such as a radio frequency circuit, an input unit, a sensor, an audio circuit, a Wireless Fidelity (WiFi) module, a power supply, and a Bluetooth module, which are not described herein again.
Fig. 7 is a flowchart of a scene mode starting method according to an exemplary embodiment of the present application. The scene mode starting method may be applied to any one of the terminals shown in fig. 1 to 6, and includes:
step 701, acquiring a folding angle of a folding display screen;
the folding angle refers to an angle between the first screen region and the second screen region.
Step 702, when the folding angle belongs to a preset angle range, determining whether the body posture of the terminal accords with a preset body posture;
the preset angle range is an angle range indicating that the folding display screen is in a half-open state. That is, the preset angle range is an angle range when the folding display screen is between the U-shaped folded state and the unfolded state. Alternatively, the preset angle range may be 225 degrees to 345 degrees.
When a user uses a terminal with a foldable display screen, the user is accustomed to placing the foldable display screen on a plane in a half-open state before entering certain scene modes. Therefore, when the folding angle belongs to the preset angle range, the terminal further detects whether the body posture conforms to the preset body posture.
Optionally, the preset body posture is a posture in which the foldable display screen of the terminal is placed on the placing plane in a half-open state.
Step 703, when the body posture of the terminal conforms to the preset body posture, detecting whether a starting condition of a scene mode is met;
scene modes include, but are not limited to: a vehicle navigation scene, an intelligent home control scene, an office scene and an entertainment scene. Each scene mode can correspond to respective starting conditions, and the starting conditions corresponding to different scene modes are different.
And step 704, when the starting condition of the scene mode is met, starting the scene mode.
In summary, according to the scene mode starting method provided by this embodiment, the folding angle and the body posture are determined, and when the folding angle belongs to the preset angle range and the body posture meets the preset body posture, whether the starting condition of the scene mode is met is determined, so that the user intention can be accurately identified by integrating the folding angle, the body posture and the starting condition, and further the starting accuracy when the scene mode is automatically started is improved.
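The decision flow of steps 701 to 704 can be illustrated with a short sketch. The sketch below is only an illustration of the described logic: the class and method names (SceneModeLauncher, isStartConditionMet and so on) are hypothetical, and the 225-345 degree range is an assumption taken from the optional angle range given later in this description, not an implementation defined by the application.

    // Illustrative sketch of steps 701-704; all names are hypothetical.
    import java.util.List;

    public class SceneModeLauncher {
        // Example half-open range taken from the optional 225-345 degree range described below.
        private static final float PRESET_MIN_ANGLE = 225f;
        private static final float PRESET_MAX_ANGLE = 345f;

        public interface SceneMode {
            boolean isStartConditionMet(); // e.g. Bluetooth, Wi-Fi, time or location checks
            void start();
        }

        private final List<SceneMode> registeredSceneModes;

        public SceneModeLauncher(List<SceneMode> modes) {
            this.registeredSceneModes = modes;
        }

        /** Steps 701-704: folding angle, then body posture, then starting condition, then start mode. */
        public void onFoldingState(float foldingAngle, boolean postureMatchesPreset) {
            if (foldingAngle < PRESET_MIN_ANGLE || foldingAngle > PRESET_MAX_ANGLE) {
                return; // folding angle outside the preset range: keep the current working state
            }
            if (!postureMatchesPreset) {
                return; // body posture does not conform to the preset body posture
            }
            for (SceneMode mode : registeredSceneModes) {
                if (mode.isStartConditionMet()) { // step 703
                    mode.start();                 // step 704
                    return;
                }
            }
        }
    }

In such a sketch, the terminal would call onFoldingState whenever the angle sensor or the posture detection reports a change.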
When a user uses the external folding screen terminal, the terminal is usually placed in two postures: a vertical placing posture and an A-shaped placing posture.
The vertical placing posture is a posture in which the fold line of the foldable display screen is perpendicular to the placing plane, as shown in fig. 8. In the vertical placing posture, the side edge of the foldable display screen is parallel to the placing plane, and the user can view and operate the screen area directly in front of the user without holding the terminal.
The A-shaped placing posture is a posture in which the fold line of the foldable display screen is parallel to the placing plane and the planes containing the top and bottom edges of the foldable display screen are parallel to the placing plane, as shown in fig. 9. In the A-shaped placing posture, the user can view and operate the screen area facing forward, and can start a camera on (or around) the rear-facing screen area to capture images.
The vertical placing posture can be recognized by an acceleration sensor in the terminal, as described in the embodiment shown in fig. 10 below; the A-shaped placing posture can be recognized by an acceleration sensor and/or a light sensor in the terminal, as described in the embodiment shown in fig. 11 below.
Fig. 10 is a flowchart of a scene mode starting method according to an exemplary embodiment of the present application. The scene mode starting method may be applied to any one of the terminals shown in fig. 1 to 6, and includes:
Step 1001, acquiring a folding angle of the foldable display screen;
Taking the terminal described above as an example, the foldable display screen has a first screen area and a second screen area, and the folding angle is the angle between the first screen area and the second screen area. The foldable display screen can be in a U-shaped folded state or an unfolded state. The unfolded state of the foldable display screen is a state in which the folding angle between the first screen area and the second screen area is 180 degrees; the U-shaped folded state of the foldable display screen is a state in which the first screen area and the second screen area are bent outward until they are parallel to each other with their back surfaces attached to each other.
Alternatively, when the folding display screen is in an intermediate state between the U-shaped folded state and the unfolded state, it is referred to as a half-open state.
Optionally, an angle sensor is arranged on the connecting assembly of the foldable display screen, and the folding angle can be acquired through the angle sensor. The angle sensor reports the folding angle to the processor.
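The description does not bind the angle sensor to a particular software interface. As a hedged illustration only, on recent Android versions (API level 30 and later) a hinge-angle sensor can be read through the standard SensorManager API as sketched below; older or vendor-specific terminals may expose the folding angle differently.

    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class FoldingAngleReader implements SensorEventListener {
        private final SensorManager sensorManager;
        private final Sensor hingeSensor;

        public FoldingAngleReader(Context context) {
            sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
            // TYPE_HINGE_ANGLE reports the hinge angle in degrees on devices that expose it.
            hingeSensor = sensorManager.getDefaultSensor(Sensor.TYPE_HINGE_ANGLE);
        }

        public void start() {
            if (hingeSensor != null) {
                sensorManager.registerListener(this, hingeSensor, SensorManager.SENSOR_DELAY_NORMAL);
            }
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            float foldingAngle = event.values[0]; // e.g. 180 = fully unfolded
            // hand the value to the scene mode logic of step 1002
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
            // not used in this sketch
        }
    }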
Step 1002, when the folding angle belongs to a preset angle range, acquiring a gravity acceleration signal through an acceleration sensor;
the preset angle range is an angle range indicating that the folding display screen is in a half-open state. That is, the preset angle range is an angle range when the folding display screen is between the U-shaped folded state and the unfolded state. Alternatively, the preset angle range may be 225 degrees to 345 degrees. If the folding angle is measured by an angle between two back surfaces of the first screen region and the second screen region, the predetermined angle range may be 45 degrees to 135 degrees.
Optionally, when the folding angle belongs to a preset angle range, the terminal collects a gravity acceleration signal through an acceleration sensor.
When a first acceleration sensor is arranged in a first shell of the terminal, but an acceleration sensor is not arranged in a second shell, the terminal collects a gravity acceleration signal through the first acceleration sensor.
When the first shell of the terminal is not provided with the acceleration sensor, but the second shell is provided with the second acceleration sensor, the terminal collects the gravity acceleration signal through the second acceleration sensor.
When a first acceleration sensor is arranged in a first shell of the terminal and a second acceleration sensor is arranged in a second shell, the terminal collects gravity acceleration signals through at least one of the first acceleration sensor and the second acceleration sensor.
Step 1003, determining whether the body posture of the terminal accords with a preset vertical placing posture according to the direction of the gravity acceleration signal, wherein the vertical placing posture is a posture that the direction of the gravity acceleration signal is parallel to a folding line of the folding display screen.
After the terminal acquires the direction of the gravity acceleration signal, whether the direction of the gravity acceleration signal is parallel to a folding line of the folding display screen is detected. When the direction of the gravity acceleration signal is parallel to the folding line of the folding display screen, determining that the terminal is in a vertical placing posture, and entering step 1004; when the direction of the gravity acceleration signal is not parallel to the folding line of the folding display screen, it is determined that the terminal is not in the vertical placement posture, and the step 1006 is entered, where the terminal maintains the current working state, or the default working mode is entered.
Optionally, the acceleration sensor corresponds to a three-dimensional spatial coordinate system, and the spatial coordinate system has three coordinate axes. There is a coordinate axis parallel to the folding line of the folded display, e.g. the Z-axis in the XYZ-axes is parallel to the folding line of the folded display. The terminal can detect whether the direction of the gravity acceleration signal is coincident with the Z axis or not; when the direction of the gravity acceleration signal is coincident with the Z axis, determining that the direction of the gravity acceleration signal is parallel to a folding line of the folding display screen; and when the direction of the gravity acceleration signal is not coincident with the Z axis, determining that the direction of the gravity acceleration signal is not parallel to the folding line of the folding display screen.
Optionally, "parallel" here need not be absolutely parallel: the terminal may be preset with an error range, and the two are considered parallel as long as the included angle between the direction of the gravitational acceleration signal and the fold line is smaller than the error range.
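A minimal sketch of the check in steps 1002 and 1003, assuming the fold line lies along the Z axis of the body coordinate system (as in the coordinate system of fig. 12) and a hypothetical error range of 10 degrees; the gravity components are the standard values reported by an Android gravity or acceleration sensor.

    public final class UprightPostureDetector {
        private static final double TOLERANCE_DEG = 10.0; // hypothetical error range

        private UprightPostureDetector() {}

        /**
         * Steps 1002-1003: returns true when the gravitational acceleration signal is
         * approximately parallel to the fold line, assumed here to lie along the Z axis.
         */
        public static boolean isUpright(float gx, float gy, float gz) {
            double magnitude = Math.sqrt(gx * gx + gy * gy + gz * gz);
            if (magnitude == 0) {
                return false;
            }
            // Angle between the gravity vector and the Z axis; near 0 (or 180) degrees means parallel.
            double angleToFoldLine = Math.toDegrees(Math.acos(Math.abs(gz) / magnitude));
            return angleToFoldLine < TOLERANCE_DEG;
        }
    }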
Step 1004, when the body posture of the terminal conforms to the vertical placing posture, detecting whether the starting condition of a scene mode is met;
scene modes include, but are not limited to: the system comprises a vehicle navigation scene, an intelligent home control scene, a conference scene and the like. Each scene mode can correspond to respective starting conditions, and the starting conditions corresponding to different scene modes are different. The starting conditions include: at least one of a condition related to a Bluetooth connection, a condition related to WiFi (Wireless Fidelity), a condition related to time, and a condition related to geographical location.
Illustratively, the starting conditions of the vehicle navigation scene include: and establishing Bluetooth connection with the vehicle-mounted central control equipment.
After the user enters the automobile, the terminal can automatically establish a Bluetooth connection with the vehicle-mounted central control device. In the process of establishing the Bluetooth connection, the terminal acquires the device identifier of the peer device and identifies whether it is the device identifier of the vehicle-mounted central control device. When the device identifier is the device identifier of the vehicle-mounted central control device, it is determined that a Bluetooth connection is established with the vehicle-mounted central control device, and step 1005 is entered; when the device identifier is not the device identifier of the vehicle-mounted central control device, it is determined that no Bluetooth connection is established with the vehicle-mounted central control device, and step 1006 is entered, in which the current working state is maintained or a preset working mode is entered.
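On Android, the "Bluetooth connection with the vehicle-mounted central control device" condition could, for example, be observed with the standard ACTION_ACL_CONNECTED broadcast; the whitelist of head-unit addresses below is a hypothetical application-level store, not something specified by this application.

    import android.bluetooth.BluetoothDevice;
    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import java.util.Set;

    public class CarBluetoothReceiver extends BroadcastReceiver {
        // Hypothetical whitelist of known in-vehicle head-unit MAC addresses.
        private final Set<String> carHeadUnitAddresses;

        public CarBluetoothReceiver(Set<String> carHeadUnitAddresses) {
            this.carHeadUnitAddresses = carHeadUnitAddresses;
        }

        @Override
        public void onReceive(Context context, Intent intent) {
            if (BluetoothDevice.ACTION_ACL_CONNECTED.equals(intent.getAction())) {
                BluetoothDevice device = intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
                if (device != null && carHeadUnitAddresses.contains(device.getAddress())) {
                    // The device identifier matches the vehicle-mounted central control device:
                    // the starting condition of the car navigation scene is satisfied (step 1005).
                }
            }
        }
    }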
Illustratively, the starting conditions of the smart home control scene include: the current geographic location is at a preset dwell location and/or is connected to a wireless network of the preset dwell location.
A user can deploy one or more smart home devices at home, for example a robot vacuum cleaner, smart lights, a smart speaker, smart curtains, a smart air conditioner, and the like. These smart home devices are connected to a smart gateway device, and the terminal has the capability of managing the smart home devices after being connected to the WiFi network of the smart gateway device.
The terminal can perform positioning once every preset time interval to acquire its current geographic location. The terminal detects whether the current geographic location is at the preset dwelling location; when the current geographic location is at the preset dwelling location, it is determined that the starting condition of the smart home control scene is met, and step 1005 is entered; when the current geographic location is not at the preset dwelling location, it is determined that the starting condition of the smart home control scene is not met, and step 1006 is entered, in which the current working state is maintained or a preset working mode is entered.
The terminal can also acquire the network identifier of the WiFi network when establishing a WiFi connection, and detect whether the identifier of the WiFi network is that of the wireless network of the preset dwelling location. When the identifier of the WiFi network is that of the wireless network of the preset dwelling location, it is determined that the starting condition of the smart home control scene is met, and step 1005 is entered; otherwise, it is determined that the starting condition of the smart home control scene is not met.
The terminal may also apply both checks together, that is, when the current geographic location is at the preset dwelling location and the terminal is connected to the wireless network of the preset dwelling location, it is determined that the starting condition of the smart home control scene is met, and step 1005 is entered.
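The "connected to the wireless network of the preset dwelling location" half of the condition can be checked on Android roughly as sketched below; the stored home SSID is an assumption, and getConnectionInfo() is deprecated on recent API levels but still illustrates the idea.

    import android.content.Context;
    import android.net.wifi.WifiInfo;
    import android.net.wifi.WifiManager;

    public class HomeNetworkChecker {
        // Hypothetical SSID of the dwelling-location network; getSSID() returns it wrapped in quotes.
        private static final String HOME_SSID = "\"MyHomeWifi\"";

        /** Returns true when the terminal is connected to the Wi-Fi network of the preset dwelling location. */
        public boolean isConnectedToHomeNetwork(Context context) {
            WifiManager wifiManager =
                    (WifiManager) context.getApplicationContext().getSystemService(Context.WIFI_SERVICE);
            if (wifiManager == null) {
                return false;
            }
            WifiInfo info = wifiManager.getConnectionInfo();
            return info != null && HOME_SSID.equals(info.getSSID());
        }
    }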
Illustratively, the starting conditions of the conference scene include: the current geographic location is at a meeting room location and/or a wireless network connected to the meeting room location.
The terminal can perform positioning once every preset time interval to acquire the current geographic position of the terminal. The terminal detects whether the current geographical position is in a meeting room position; when the current geographical position is in the meeting room position, determining that the starting condition of the meeting scene is met, and entering step 1005; when the current geographical location is not in the meeting room location, it is determined that the starting condition of the meeting scene is not met, and step 1006 is entered, and the current working state is maintained or a preset working mode is entered.
The terminal can also acquire the network identifier of the WiFi network when establishing the WiFi connection. The terminal detects whether the identifier of the WiFi network is a wireless network of the meeting room position. When the identifier of the WiFi network is a wireless network of the meeting room location, determining that the starting condition of the meeting scene is satisfied, and entering step 1005; and when the identification of the WiFi network is not the wireless network of the meeting room position, determining that the starting condition of the meeting scene is not met.
The terminal may also execute the above two logics simultaneously, that is, when the current geographic location is at the meeting room location and is connected to the wireless network of the meeting room location, it is determined that the starting condition of the meeting scene is satisfied, and step 1005 is entered.
Step 1005, when the starting condition of the scene mode is satisfied, starting the scene mode;
when the starting condition of the vehicle-mounted navigation scene is met, the terminal can start the navigation application program and display the user interface of the navigation application program as a foreground running user interface. The terminal can also enter a state of monitoring a voice command, and when the voice command is monitored, the terminal executes the operation corresponding to the voice command.
When the terminal is in the A-shaped placing posture, the terminal can also use the camera on (or around) the screen area facing away from the driver as a vehicle event data recorder. For example, a first camera on (or around) the first screen area and a second camera on the second screen area acquire images simultaneously. If the images acquired by the first camera include a face image of the driver and the images acquired by the second camera do not, the terminal uses the first screen area to display the user interface of the navigation application program as the user interface running in the foreground, and uses the second camera corresponding to the second screen area as the vehicle event data recorder.
The embodiment does not limit the specific operation form of the car navigation scene.
When the starting condition of the intelligent home control scene is met, the terminal can start an intelligent home management program; and displaying the working state and/or the control interface of the intelligent household equipment through the intelligent household management program. For example, the terminal automatically opens the intelligent home management program and simultaneously displays the working parameters of the intelligent home equipment; and when an operation instruction of the user is received, further displaying a control interface of the intelligent household equipment.
The specific operation mode of the smart home control scene is not limited in this embodiment.
When the starting condition of the conference scene is met, the terminal can turn off the prompt tone and the vibration prompt.
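Turning off the prompt tone and vibration can be done with the standard Android AudioManager, as in the minimal sketch below; note that on newer Android versions switching to silent mode additionally requires Do Not Disturb access, which this sketch does not handle.

    import android.content.Context;
    import android.media.AudioManager;

    public class ConferenceSceneMode {
        /** Turns off the ringtone and vibration when the conference scene is started. */
        public void start(Context context) {
            AudioManager audioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
            if (audioManager != null) {
                audioManager.setRingerMode(AudioManager.RINGER_MODE_SILENT);
            }
        }
    }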
Step 1006, maintaining the current working state or entering a preset working mode.
In summary, according to the scene mode starting method provided by this embodiment, the folding angle and the body posture are determined, and when the folding angle belongs to the preset angle range and the body posture meets the preset body posture, whether the starting condition of the scene mode is met is determined, so that the user intention can be accurately identified by integrating the folding angle, the body posture and the starting condition, and further the starting accuracy when the scene mode is automatically started is improved.
In the scene mode starting method provided by this embodiment, whether the body posture of the terminal conforms to the vertical placing posture is further determined according to the direction of the gravitational acceleration signal acquired by the acceleration sensor. Because checking whether the direction of the gravitational acceleration signal is parallel to the fold line requires little computation, the terminal can determine that it is in the vertical placing posture with a small amount of computation and at a high speed.
Unlike the previous embodiment, the terminal may also use the acceleration sensor to determine whether the body posture is the A-shaped placing posture, as described in the following embodiment.
Fig. 11 is a flowchart of a scene mode starting method according to an exemplary embodiment of the present application. The scene mode starting method may be applied to any one of the terminals shown in fig. 1 to 6, and includes:
step 1101, acquiring a folding angle of a folding display screen;
Taking the terminal described above as an example, the foldable display screen has a first screen area and a second screen area, and the folding angle is the angle between the first screen area and the second screen area. The foldable display screen can be in a U-shaped folded state or an unfolded state. The unfolded state of the foldable display screen is a state in which the folding angle between the first screen area and the second screen area is 180 degrees; the U-shaped folded state of the foldable display screen is a state in which the first screen area and the second screen area are bent outward until they are parallel to each other with their back surfaces attached to each other.
Alternatively, when the folding display screen is in an intermediate state between the U-shaped folded state and the unfolded state, it is referred to as a half-open state.
Optionally, an angle sensor is arranged on the connecting assembly of the foldable display screen, and the folding angle can be acquired through the angle sensor. The angle sensor reports the folding angle to the processor.
Step 1102, when the folding angle belongs to a preset angle range, acquiring a gravity acceleration signal through an acceleration sensor;
the preset angle range is an angle range indicating that the folding display screen is in a half-open state. That is, the preset angle range is an angle range when the folding display screen is between the U-shaped folded state and the unfolded state. Alternatively, the preset angle range may be 225 degrees to 345 degrees. If the folding angle is measured by an angle between two back surfaces of the first screen region and the second screen region, the predetermined angle range may be 45 degrees to 135 degrees.
Optionally, when the folding angle belongs to a preset angle range, the terminal collects a gravity acceleration signal through an acceleration sensor.
When a first acceleration sensor is arranged in a first shell of the terminal, but an acceleration sensor is not arranged in a second shell, the terminal collects a gravity acceleration signal through the first acceleration sensor.
When the first shell of the terminal is not provided with the acceleration sensor, but the second shell is provided with the second acceleration sensor, the terminal collects the gravity acceleration signal through the second acceleration sensor.
When a first acceleration sensor is arranged in a first shell of the terminal and a second acceleration sensor is arranged in a second shell, the terminal collects gravity acceleration signals through at least one of the first acceleration sensor and the second acceleration sensor.
Step 1103, calculating a screen included angle between the gravitational acceleration signal and the foldable display screen according to the magnitudes of the components of the gravitational acceleration signal on the different coordinate axes of the body coordinate system;
Optionally, the acceleration sensor corresponds to a three-dimensional space coordinate system that has three coordinate axes: an X axis, a Y axis, and a Z axis. Illustratively, when the body of the terminal is in a vertical posture, the position of the acceleration sensor is taken as the origin, the horizontal rightward direction is the positive half-axis of the X axis, the vertical upward direction is the positive half-axis of the Y axis, and the direction perpendicular to the XY plane and facing forward is the positive half-axis of the Z axis, as shown in fig. 12. This three-dimensional space coordinate system is also referred to as the body coordinate system. When the first housing and the second housing are each provided with an acceleration sensor, the terminal may establish two body coordinate systems. The origins of the two body coordinate systems are the positions of the two acceleration sensors respectively, and when the terminal is in the unfolded state, the corresponding coordinate axes of the two body coordinate systems are parallel.
Referring to fig. 13, taking the terminal as an external folding screen terminal, and the first acceleration sensor is disposed on the first housing 132 of the terminal as an example, the first acceleration sensor may acquire a Y component of the gravitational acceleration signal on the negative half axis of the Y axis and a Z component on the negative half axis of the Z axis, and calculate a screen angle α between the gravitational acceleration signal and the first screen area of the folding display screen according to a trigonometric function relationship between the Y component and the Z component.
Or, taking the second acceleration sensor disposed on the second housing 134 of the terminal as an example, the second acceleration sensor may acquire a Y component of the gravitational acceleration signal on the positive half axis of the Y axis and a Z component on the negative half axis of the Z axis, and calculate a screen angle β between the gravitational acceleration signal and the second screen area of the foldable display screen through a trigonometric function relationship between the Y component and the Z component.
Or, taking as an example that a first acceleration sensor is disposed on the first shell 132 of the terminal and a second acceleration sensor is disposed on the second shell 134, the first acceleration sensor may acquire a Y component of the gravitational acceleration signal on the Y-axis negative half-axis and a Z component on the Z-axis negative half-axis, and calculate a screen included angle α between the gravitational acceleration signal and the first screen area of the folding display screen through a trigonometric function relationship between the Y component and the Z component; the second acceleration sensor can acquire a Y component of the gravity acceleration signal on a positive half shaft of a Y axis and a Z component of the gravity acceleration signal on a negative half shaft of a Z axis, and a screen included angle beta between the gravity acceleration signal and a second screen area of the folding display screen is calculated through a trigonometric function relation between the Y component and the Z component.
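A sketch of the trigonometric relationship used in step 1103, assuming the acceleration sensor reports the gravity components along the Y and Z axes of the body coordinate system of the housing that carries it; the class and method names are hypothetical.

    public final class ScreenAngleCalculator {
        private ScreenAngleCalculator() {}

        /**
         * Step 1103: screen included angle (in degrees) between the gravitational
         * acceleration signal and a screen region, computed from the magnitudes of the
         * Y and Z components of the gravity signal in that housing's body coordinate system.
         */
        public static double screenAngleDegrees(float yComponent, float zComponent) {
            // The Y axis lies in the screen plane and the Z axis is perpendicular to it,
            // so atan(|z| / |y|) is the angle between gravity and the screen region.
            return Math.toDegrees(Math.atan2(Math.abs(zComponent), Math.abs(yComponent)));
        }
    }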
Step 1104, determining whether the body posture of the terminal conforms to the preset A-shaped placing posture according to the screen included angle and the folding angle;
Optionally, the A-shaped placing posture is a posture in which the screen included angle is half of the folding angle.
When the screen included angle is half of the folding angle of the folding display screen, it is determined that the body posture of the terminal conforms to the preset A-shaped placing posture, and step 1105 is entered;
when the screen included angle is not half of the folding angle of the folding display screen, it is determined that the body posture of the terminal does not conform to the preset A-shaped placing posture, and step 1107 is entered: the terminal keeps the current working state, or enters a preset working mode.
Optionally, the "screen included angle is half of the folding angle of the folding display screen" is not necessarily absolute half, and the terminal may preset an error range, and as long as the screen included angle is about half of the folding angle of the folding display screen, the body posture of the terminal is considered to conform to the preset a-shaped placing posture.
Step 1105, when the body posture of the terminal accords with the preset body posture, detecting whether the starting condition of the scene mode is satisfied;
Scene modes include, but are not limited to: a vehicle navigation scene, a smart home control scene, a conference scene, and the like. Each scene mode may correspond to its own starting condition, and different scene modes correspond to different starting conditions. The starting conditions include at least one of a condition related to a Bluetooth connection, a condition related to WiFi (Wireless Fidelity), a condition related to time, and a condition related to geographical location.
Illustratively, the starting condition of the vehicle navigation scene includes: a Bluetooth connection has been established with the vehicle-mounted central control device.
After the user enters the automobile, the terminal may automatically establish a Bluetooth connection with the vehicle-mounted central control device. In the process of establishing the Bluetooth connection, the terminal acquires the device identifier of the peer device and identifies whether it is the device identifier of the vehicle-mounted central control device. When the device identifier is the device identifier of the vehicle-mounted central control device, it is determined that the Bluetooth connection is established with the vehicle-mounted central control device, and step 1106 is entered; when the device identifier is not that of the vehicle-mounted central control device, it is determined that no Bluetooth connection is established with the vehicle-mounted central control device, and step 1107 is entered, where the current working state is maintained or a preset working mode is entered.
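A hedged sketch of this check is shown below; the identifier set, its contents, and the function name are hypothetical and serve only to illustrate the condition.

```kotlin
// Hypothetical sketch of the Bluetooth starting condition for the vehicle
// navigation scene: the peer device identifier obtained while establishing the
// connection is compared against preset identifiers of vehicle-mounted central
// control devices. The identifier below is a placeholder.
val vehicleHeadUnitIds = setOf("EXAMPLE-CAR-HEAD-UNIT")

fun vehicleNavigationConditionMet(peerDeviceId: String): Boolean =
    peerDeviceId in vehicleHeadUnitIds   // true -> step 1106, false -> step 1107
```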
Illustratively, the starting condition of the smart home control scene includes: the current geographic location is at a preset residential location and/or the terminal is connected to a wireless network of the preset residential location.
The user may set up one or more smart home devices at home, for example a robot vacuum cleaner, smart lights, a smart speaker, smart curtains, a smart air conditioner, and the like. The smart home devices are connected to a smart gateway device, and the terminal has the capability of managing the smart home devices after connecting to the WiFi network of the smart gateway device.
The terminal may perform positioning once every preset time interval to acquire its current geographic location. The terminal detects whether the current geographic location is at the preset residential location; when the current geographic location is at the preset residential location, it is determined that the starting condition of the smart home control scene is satisfied, and step 1106 is entered; when the current geographic location is not at the preset residential location, it is determined that the starting condition of the smart home control scene is not satisfied, and step 1107 is entered to keep the current working state or enter a preset working mode.
The terminal may also acquire the network identifier of the WiFi network when establishing the WiFi connection. The terminal detects whether the identifier of the WiFi network is that of the wireless network of the preset residential location. When the identifier of the WiFi network is that of the wireless network of the preset residential location, it is determined that the starting condition of the smart home control scene is satisfied, and step 1106 is entered; when the identifier of the WiFi network is not that of the wireless network of the preset residential location, it is determined that the starting condition of the smart home control scene is not satisfied.
The terminal may also apply the above two checks together, that is, when the current geographic location is at the preset residential location and the terminal is connected to the wireless network of the preset residential location, it is determined that the starting condition of the smart home control scene is satisfied, and step 1106 is entered.
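Combining the two checks might look like the sketch below; the 100 m radius, the SSID comparison, and all names are assumptions used only to illustrate the logic, not the patent's implementation.

```kotlin
import kotlin.math.asin
import kotlin.math.cos
import kotlin.math.pow
import kotlin.math.sin
import kotlin.math.sqrt

data class GeoPoint(val latDeg: Double, val lonDeg: Double)

// Haversine distance between two points, in metres.
fun distanceMeters(a: GeoPoint, b: GeoPoint): Double {
    val earthRadius = 6_371_000.0
    val dLat = Math.toRadians(b.latDeg - a.latDeg)
    val dLon = Math.toRadians(b.lonDeg - a.lonDeg)
    val h = sin(dLat / 2).pow(2) +
            cos(Math.toRadians(a.latDeg)) * cos(Math.toRadians(b.latDeg)) * sin(dLon / 2).pow(2)
    return 2 * earthRadius * asin(sqrt(h))
}

// Illustrative combined starting condition for the smart home control scene.
fun smartHomeConditionMet(current: GeoPoint, home: GeoPoint,
                          currentSsid: String?, homeSsid: String): Boolean {
    val atHome = distanceMeters(current, home) < 100.0   // assumed 100 m radius
    val onHomeWifi = currentSsid == homeSsid
    return atHome && onHomeWifi   // the "and/or" variants would use || instead
}
```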
Illustratively, the starting condition of the conference scene includes: the current geographic location is at a meeting room location and/or the terminal is connected to a wireless network of the meeting room location.
The terminal may perform positioning once every preset time interval to acquire its current geographic location. The terminal detects whether the current geographic location is at the meeting room location; when the current geographic location is at the meeting room location, it is determined that the starting condition of the conference scene is satisfied, and step 1106 is entered; when the current geographic location is not at the meeting room location, it is determined that the starting condition of the conference scene is not satisfied, and step 1107 is entered to keep the current working state or enter a preset working mode.
The terminal may also acquire the network identifier of the WiFi network when establishing the WiFi connection. The terminal detects whether the identifier of the WiFi network is that of the wireless network of the meeting room location. When the identifier of the WiFi network is that of the wireless network of the meeting room location, it is determined that the starting condition of the conference scene is satisfied, and step 1106 is entered; when the identifier of the WiFi network is not that of the wireless network of the meeting room location, it is determined that the starting condition of the conference scene is not satisfied.
The terminal may also apply the above two checks together, that is, when the current geographic location is at the meeting room location and the terminal is connected to the wireless network of the meeting room location, it is determined that the starting condition of the conference scene is satisfied, and step 1106 is entered.
Step 1106, when the starting condition of the scene mode is met, starting the scene mode;
When the starting condition of the vehicle navigation scene is satisfied, the terminal may start the navigation application program and display the user interface of the navigation application program in the foreground. The terminal may also enter a state of monitoring voice commands, and when a voice command is detected, the terminal executes the operation corresponding to the voice command.
When the terminal is in the A-shaped placing posture, the terminal may also use the camera on (or around) the screen area facing away from the driver as a vehicle event data recorder. For example, a first camera on (or around) the first screen area and a second camera on the second screen area acquire images simultaneously; if the image acquired by the first camera contains a face image of the driver and the image acquired by the second camera does not, the terminal uses the first screen area to display the user interface of the navigation application program in the foreground, and uses the second camera, corresponding to the second screen area, as the vehicle event data recorder.
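A hedged sketch of this screen and camera selection is shown below; the face-detection results are assumed to be produced elsewhere, and all names are illustrative.

```kotlin
enum class ScreenArea { FIRST, SECOND }

// Hypothetical sketch: the camera whose image does not contain the driver's face
// is used as the vehicle event data recorder, and the navigation UI is shown on
// the screen area facing the driver.
fun chooseDashcamScreenArea(firstImageHasDriverFace: Boolean,
                            secondImageHasDriverFace: Boolean): ScreenArea =
    if (firstImageHasDriverFace && !secondImageHasDriverFace) ScreenArea.SECOND
    else ScreenArea.FIRST
```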
This embodiment does not limit the specific operation form of the vehicle navigation scene.
When the starting condition of the smart home control scene is satisfied, the terminal may start a smart home management program and display the working state and/or the control interface of the smart home devices through the smart home management program. For example, the terminal automatically opens the smart home management program and displays the working parameters of the smart home devices; when an operation instruction of the user is received, the control interface of the smart home devices is further displayed.
The specific operation mode of the smart home control scene is not limited in this embodiment.
When the starting condition of the conference scene is satisfied, the terminal may turn off the ringtone and the vibration alert.
Step 1107, the current working state is maintained or a preset working mode is entered.
In summary, according to the scene mode starting method provided by this embodiment, the folding angle and the body posture are determined first; only when the folding angle belongs to the preset angle range and the body posture conforms to the preset body posture is it further determined whether the starting condition of the scene mode is satisfied. By combining the folding angle, the body posture and the starting condition, the user's intention can be identified accurately, which improves the accuracy of automatically starting the scene mode.
According to the scene mode starting method provided by this embodiment, the screen included angle between the gravity acceleration signal and the folding display screen is first identified from the magnitudes of the components, on different coordinate axes, of the gravity acceleration signal acquired by the acceleration sensor, and it is then determined, according to the screen included angle and the folding angle, whether the body posture of the terminal conforms to the A-shaped placing posture. Whether the body posture conforms to the A-shaped placing posture can thus be identified even if the terminal is provided with only one acceleration sensor, in either the first housing or the second housing, which simplifies the algorithm and reduces the computation load of the terminal.
As another alternative implementation, the front side and the back side of the terminal are provided with a first light sensor and a second light sensor, respectively. The front side of the terminal may be the side provided with the folding display screen, and the back side of the terminal may be the side not provided with the folding display screen. As shown in fig. 14, steps 1102 to 1104 can alternatively be implemented as the following two steps:
Step 1402, obtaining a first light intensity value collected by the first light sensor and a second light intensity value collected by the second light sensor;
The first light sensor located on the front side of the terminal may acquire the first light intensity value, and the second light sensor located on the back side of the terminal may acquire the second light intensity value.
Step 1403, determining that the body posture of the terminal conforms to the preset A-shaped placing posture according to the first light intensity value and the second light intensity value.
In the A-shaped placing posture, the back of the terminal faces the placement plane and the front of the terminal faces outward. The back surface of the first screen area and the back surface of the second screen area of the terminal shield each other from light, so that in most cases the first light intensity value is much larger than the second light intensity value.
The terminal judges whether the first light intensity value is larger than a first threshold and whether the second light intensity value is smaller than a second threshold. When the first light intensity value is larger than the first threshold and the second light intensity value is smaller than the second threshold, it is determined that the body posture of the terminal conforms to the preset A-shaped placing posture, and step 1105 is entered; when the first light intensity value is smaller than the first threshold and/or the second light intensity value is larger than the second threshold, it is determined that the body posture of the terminal does not conform to the preset A-shaped placing posture, and step 1107 is entered.
Optionally, the first threshold is greater than or equal to the second threshold.
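This light-sensor variant can be sketched as below; the threshold values and the function name are placeholders, not values taken from the patent, and they respect the constraint that the first threshold is not smaller than the second.

```kotlin
// Illustrative sketch of the light-sensor check: the front (screen-side) sensor
// should read noticeably brighter than the back sensor, which is shielded
// between the two housings in the A-shaped placing posture.
fun conformsToAShapeByLight(frontLux: Float, backLux: Float,
                            firstThreshold: Float = 50f,
                            secondThreshold: Float = 10f): Boolean {
    return frontLux > firstThreshold && backLux < secondThreshold
}
```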
Fig. 15 shows a block diagram of a scene mode starting apparatus according to an exemplary embodiment of the present application. The scene mode starting apparatus may be implemented as all or a part of a terminal by software, hardware, or a combination of both. The apparatus has a folding display screen and comprises:
an angle determination module 1510 configured to obtain a folding angle of the folded display screen;
a posture determination module 1520 configured to determine whether a body posture of the apparatus conforms to a preset body posture when the folding angle belongs to a preset angle range;
a condition judgment module 1530 configured to detect whether a starting condition of a scene mode is satisfied when the body posture of the apparatus conforms to the preset body posture;
a mode start module 1540 configured to start the scene mode when a start condition of the scene mode is satisfied.
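How the four modules cooperate can be sketched as follows; all interfaces, the 30 to 150 degree range, and the function names are assumptions used only to show the control flow of fig. 15, not the patent's actual API.

```kotlin
// Hypothetical interfaces mirroring the four modules of Fig. 15.
interface AngleDetermination { fun foldingAngleDeg(): Double }
interface PostureDetermination { fun conformsToPresetPosture(foldingAngleDeg: Double): Boolean }
interface ConditionJudgment { fun startConditionMet(): Boolean }
interface ModeStart { fun startSceneMode() }

fun runSceneModeCheck(angle: AngleDetermination,
                      posture: PostureDetermination,
                      condition: ConditionJudgment,
                      mode: ModeStart,
                      presetRange: ClosedFloatingPointRange<Double> = 30.0..150.0) {
    val folding = angle.foldingAngleDeg()
    if (folding in presetRange &&
        posture.conformsToPresetPosture(folding) &&
        condition.startConditionMet()) {
        mode.startSceneMode()          // step 1106
    }
    // otherwise: keep the current working state or enter a preset working mode (step 1107)
}
```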
In an alternative embodiment, an acceleration sensor is provided on the device,
the posture determination module 1520, configured to acquire a gravity acceleration signal through the acceleration sensor, and determine whether the body posture of the apparatus conforms to the preset vertical placing posture according to the direction of the gravity acceleration signal;
and the vertical placing posture is a posture that the direction of the gravity acceleration signal is parallel to a folding line of the folding display screen.
In an alternative embodiment, an acceleration sensor is provided on the device,
the posture determination module 1520, configured to acquire a gravity acceleration signal through the acceleration sensor; calculate the screen included angle between the gravity acceleration signal and the folding display screen according to the magnitudes of the components of the gravity acceleration signal on different coordinate axes of the body coordinate system; and determine whether the body posture of the apparatus conforms to the preset A-shaped placing posture according to the screen included angle and the folding angle; the A-shaped placing posture is a posture in which the screen included angle is half of the folding angle.
In an alternative embodiment, the front and back of the device are provided with a first light sensor and a second light sensor respectively,
the posture determination module 1520, configured to obtain a first light intensity value collected by the first light sensor, and a second light intensity value collected by the second light sensor; determining that the body posture of the device conforms to a preset A-shaped placing posture according to the first light intensity value and the second light intensity value;
wherein the A-shaped pose is a pose in which the first light intensity value is greater than a first threshold and the second light intensity value is less than a second threshold.
In an optional embodiment, the foldable display screen includes a first screen area and a second screen area, and the preset angle range is used for indicating that the first screen area and the second screen area are in an intermediate state between the folded state and the unfolded state.
In an optional embodiment, the condition determining module is configured to detect whether the starting condition of the vehicle navigation scene is satisfied, where the starting condition includes: a Bluetooth connection has been established with the vehicle-mounted central control device.
In an alternative embodiment, the mode initiation module is configured to initiate a navigation application; and/or entering a state of monitoring voice commands.
In an optional embodiment, the condition determining module is configured to detect whether a start condition of a smart home control scenario is satisfied, where the start condition includes: the current geographical location is at a preset residential location and/or is connected to a wireless network of the preset residential location.
In an optional embodiment, the mode starting module is configured to start a smart home management program; and displaying the working state and/or the control interface of the intelligent household equipment through the intelligent household management program.
Fig. 16 is a block diagram of a mobile terminal according to an exemplary embodiment of the present application, and as shown in fig. 16, the mobile terminal includes a processor 1601, a memory 1602 and a foldable display 1603, where the memory 1602 stores at least one instruction, and the instruction is loaded and executed by the processor 1601 to implement the scene mode starting method according to the above embodiments.
The embodiment of the present application further provides a computer-readable storage medium, where at least one instruction is stored, and the at least one instruction is loaded and executed by the processor to implement the scene mode starting method according to the above various embodiments.
The present application further provides a computer program product, where at least one instruction is stored, and the at least one instruction is loaded and executed by the processor to implement the scene mode starting method according to the above embodiments.
It should be noted that: the scene mode starting apparatus provided in the foregoing embodiment is only illustrated by the division of the above functional modules when starting the various scene modes. In practical applications, the functions may be distributed to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the scene mode starting method embodiments and the scene mode starting apparatus embodiments provided above belong to the same concept; their specific implementation processes are described in detail in the method embodiments and are not repeated here.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. A scene mode starting method is applied to a terminal with a folding display screen, and comprises the following steps:
acquiring the folding angle of the folding display screen;
when the folding angle belongs to a preset angle range, determining whether the body posture of the terminal accords with a preset body posture, wherein the preset body posture is a posture that the folding display screen is placed on a placing plane in a semi-open state, and the preset body posture comprises at least one of a vertical placing posture and an A-shaped placing posture;
when the body posture of the terminal accords with the preset body posture, detecting whether starting conditions of scene modes are met, wherein different scene modes correspond to different starting conditions, and the starting conditions comprise at least one of Bluetooth connection conditions, wireless fidelity WiFi connection conditions, time conditions and geographical position conditions;
and when the starting condition of the scene mode is met, starting the scene mode, wherein when the scene mode is a vehicle-mounted navigation scene and the terminal is in the A-shaped placing posture, a user interface of a navigation application program is displayed in a first screen area, and a second camera located on one side of a second screen area is used as a vehicle event data recorder, the image acquired by a first camera located on one side of the first screen area containing a face image of the driver and the image acquired by the second camera not containing the face image of the driver.
2. The method according to claim 1, wherein an acceleration sensor is provided on the terminal, and the determining whether the body posture of the terminal conforms to a preset body posture comprises:
acquiring a gravity acceleration signal through the acceleration sensor;
determining whether the body posture of the terminal conforms to the preset vertical placing posture or not according to the direction of the gravity acceleration signal;
and the vertical placing posture is a posture that the direction of the gravity acceleration signal is parallel to a folding line of the folding display screen.
3. The method according to claim 1, wherein an acceleration sensor is provided on the terminal, and the determining whether the body posture of the terminal conforms to a preset body posture comprises:
acquiring a gravity acceleration signal through the acceleration sensor;
calculating the screen included angle between the gravity acceleration signal and the folding display screen according to the component size of the gravity acceleration signal on different coordinate axes of a machine body coordinate system;
determining whether the body posture of the terminal accords with the preset A-shaped placing posture or not according to the screen included angle and the folding angle;
the A-shaped placing posture is the posture where the included angle of the screen is half of the folding angle.
4. The method according to claim 1, wherein the front side and the back side of the terminal are respectively provided with a first optical sensor and a second optical sensor, and the determining whether the body posture of the terminal conforms to a preset body posture comprises:
acquiring a first light intensity value acquired by the first light sensor and a second light intensity value acquired by the second light sensor;
determining that the body posture of the terminal conforms to the preset A-shaped placing posture according to the first light intensity value and the second light intensity value;
wherein the A-shaped pose is a pose in which the first light intensity value is greater than a first threshold and the second light intensity value is less than a second threshold.
5. The method of any one of claims 1 to 4, wherein the foldable display screen comprises the first screen area and the second screen area, and the preset angle range is used for representing that the first screen area and the second screen area are in an intermediate state between a folded state and an unfolded state.
6. The method according to any one of claims 1 to 4, wherein the detecting whether the starting condition of the scene mode is satisfied comprises:
detecting whether starting conditions of the vehicle-mounted navigation scene are met, wherein the starting conditions comprise: a Bluetooth connection has been established with the vehicle-mounted central control device.
7. The method of claim 6, wherein said initiating the scene mode comprises:
starting the navigation application program;
and/or,
and entering a state of monitoring voice commands.
8. The method according to any one of claims 1 to 4, wherein the detecting whether the starting condition of the scene mode is satisfied comprises:
detecting whether starting conditions of the smart home control scene are met, wherein the starting conditions comprise: the current geographical location is at a preset residential location and/or the terminal is connected to a wireless network of the preset residential location.
9. The method of claim 8, wherein said initiating the scene mode comprises:
starting an intelligent home management program;
and displaying the working state and/or the control interface of the intelligent household equipment through the intelligent household management program.
10. A scene mode activation device, the device having a foldable display screen, the device comprising:
an angle determination module configured to obtain a folding angle of the folding display screen;
a posture determination module configured to determine whether a body posture of the apparatus conforms to a preset body posture when the folding angle belongs to a preset angle range, the preset body posture being a posture in which the folding display screen is placed on a placement plane in a half-open state, the preset body posture including at least one of a vertical placement posture and an a-shaped placement posture;
the device comprises a condition judging module, a judging module and a judging module, wherein the condition judging module is configured to detect whether starting conditions of scene modes are met or not when the body posture of the device accords with the preset body posture, different scene modes correspond to different starting conditions, and the starting conditions comprise at least one of Bluetooth connection conditions, wireless fidelity (WiFi) connection conditions, time conditions and geographical position conditions;
and the mode starting module is configured to start the scene mode when the starting condition of the scene mode is met, wherein when the scene mode is a vehicle-mounted navigation scene and the apparatus is in the A-shaped placing posture, a user interface of a navigation application program is displayed in a first screen area, and a second camera located on one side of a second screen area is used as a vehicle event data recorder, the image acquired by a first camera located on one side of the first screen area containing a face image of the driver and the image acquired by the second camera not containing the face image of the driver.
11. A terminal comprising a processor and a memory, the terminal having a foldable display screen, the memory having stored therein at least one instruction that is loaded and executed by the processor to implement the scene mode starting method according to any one of claims 1 to 9.
12. A computer-readable storage medium having stored thereon at least one instruction which is loaded and executed by a processor to implement the scene mode starting method according to any one of claims 1 to 9.
CN201711081550.0A 2017-11-07 2017-11-07 Scene mode starting method and device and terminal Active CN107888769B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711081550.0A CN107888769B (en) 2017-11-07 2017-11-07 Scene mode starting method and device and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711081550.0A CN107888769B (en) 2017-11-07 2017-11-07 Scene mode starting method and device and terminal

Publications (2)

Publication Number Publication Date
CN107888769A CN107888769A (en) 2018-04-06
CN107888769B true CN107888769B (en) 2020-11-27

Family

ID=61779072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711081550.0A Active CN107888769B (en) 2017-11-07 2017-11-07 Scene mode starting method and device and terminal

Country Status (1)

Country Link
CN (1) CN107888769B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108845773B (en) * 2018-04-28 2022-04-29 中兴通讯股份有限公司 Method and device for determining included angle between screens, storage medium and electronic device
CN108648626B (en) * 2018-05-10 2020-05-26 Oppo广东移动通信有限公司 Display screen control method and device, terminal and readable storage medium
WO2020054949A1 (en) * 2018-09-11 2020-03-19 Samsung Electronics Co., Ltd. Electronic device and method for capturing view
CN109361821A (en) * 2018-11-30 2019-02-19 维沃移动通信有限公司 A kind of method and mobile terminal controlling alarm clock
CN110069146B (en) * 2019-03-06 2022-07-01 维沃移动通信有限公司 Screen space parameter acquisition method and terminal equipment
CN110090442B (en) * 2019-05-16 2023-03-24 网易(杭州)网络有限公司 Game mirror opening method and device, electronic equipment and storage medium
CN110209275B (en) * 2019-05-27 2022-03-11 维沃移动通信有限公司 Terminal control method and terminal
CN110090441B (en) * 2019-05-31 2023-03-10 网易(杭州)网络有限公司 Game operation control method and device, storage medium and electronic equipment
CN110207643B (en) * 2019-05-31 2021-02-19 闻泰通讯股份有限公司 Folding angle detection method and device, terminal and storage medium
CN110278302B (en) * 2019-06-26 2021-05-04 昆山国显光电有限公司 Display terminal and display control method
CN112731228B (en) * 2019-10-29 2023-08-04 北京小米移动软件有限公司 Folding screen device and magnetic field detection method in folding screen device
CN110913069A (en) * 2019-11-14 2020-03-24 维沃移动通信有限公司 Call mode switching method and terminal equipment
CN111766918B (en) * 2020-06-05 2023-03-24 北京小米移动软件有限公司 Folding screen posture detection method and device, electronic equipment and computer readable storage medium
CN112230877A (en) * 2020-10-16 2021-01-15 惠州Tcl移动通信有限公司 Voice operation method and device, storage medium and electronic equipment
CN113571170A (en) * 2021-03-17 2021-10-29 深圳市倍轻松科技股份有限公司 Method and device for intelligently starting massage instrument, electronic equipment and medium


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8130193B2 (en) * 2005-03-31 2012-03-06 Microsoft Corporation System and method for eyes-free interaction with a computing device through environmental awareness
CN103501383A (en) * 2013-09-17 2014-01-08 宇龙计算机通信科技(深圳)有限公司 Switching method of working modes of flexible screen and mobile terminal of flexible screen
CN104683546A (en) * 2013-11-29 2015-06-03 深圳富泰宏精密工业有限公司 Profile switching device and method
KR102119843B1 (en) * 2014-02-10 2020-06-05 삼성전자주식회사 User terminal device and method for displaying thereof
CN105653162B (en) * 2014-11-13 2019-05-17 昆山工研院新型平板显示技术中心有限公司 Flexible display device method of controlling operation thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102907078A (en) * 2010-05-28 2013-01-30 摩托罗拉移动有限责任公司 Smart method and device for adaptive user interface experiences
CN103399643A (en) * 2013-08-23 2013-11-20 深圳市金立通信设备有限公司 Application program starting method of flexible terminal and flexible terminal
CN103780765A (en) * 2014-02-28 2014-05-07 联想(北京)有限公司 Switching method and device as well as electronic device

Also Published As

Publication number Publication date
CN107888769A (en) 2018-04-06

Similar Documents

Publication Publication Date Title
CN107888769B (en) Scene mode starting method and device and terminal
CN107765952B (en) Screen capturing method and device and terminal
CN107765775B (en) Terminal control method, device and storage medium
CN107704177B (en) Interface display method and device and terminal
CN109766053B (en) User interface display method, device, terminal and storage medium
CN107870716B (en) Method and device for calling background application program
CN107831999B (en) Screen control method and device and terminal
CN107844228B (en) Message display method and device and terminal
CN107728901B (en) Interface display method and device and terminal
CN107870724B (en) Display screen unfolding method and device and terminal
CN109993823B (en) Shadow rendering method, device, terminal and storage medium
CN107623793B (en) Image shooting processing method and device
CN109782976B (en) File processing method, device, terminal and storage medium
CN107807773B (en) Application body distinguishing method, device, terminal and storage medium
CN107770477B (en) Video call method, device, terminal and storage medium
CN107765959B (en) Background application program display method and terminal
CN107566579B (en) Shooting method, shooting device, shooting terminal and storage medium
CN107656792B (en) User interface display method and device and terminal
CN107871121B (en) Fingerprint identification method and device
CN107566660B (en) Call noise reduction method and device and terminal
CN107728810B (en) Terminal control method, device, terminal and storage medium
CN107741761B (en) Method, device, terminal and storage medium for measuring angle
EP3687161A1 (en) Image capturing method, device, terminal, and storage medium
CN111435277B (en) Method, device, terminal and storage medium for displaying content
CN111225138A (en) Camera control method and device, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant