CN110536004B - Method for applying multiple sensors to electronic equipment with flexible screen and electronic equipment - Google Patents

Method for applying multiple sensors to electronic equipment with flexible screen and electronic equipment Download PDF

Info

Publication number
CN110536004B
Authority
CN
China
Prior art keywords
screen
application
sensor
mobile phone
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910667547.XA
Other languages
Chinese (zh)
Other versions
CN110536004A (en)
Inventor
韩萍
谢偰伟
周锦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201910667547.XA
Publication of CN110536004A
Application granted
Publication of CN110536004B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/02: Constructional features of telephone sets
    • H04M1/0202: Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026: Details of the structure or mounting of specific components
    • H04M1/0266: Details of the structure or mounting of specific components for a display module assembly
    • H04M1/0268: Details of the structure or mounting of specific components for a display module assembly including a flexible display panel
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72406: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by software upgrading or downloading
    • H04M1/72427: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting games or graphical animations
    • H04M1/72448: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a method for using multiple sensors in an electronic device having a flexible screen, and a corresponding electronic device. The method can provide services to the user according to sensor data in different postures of the device, improving the user's experience with a flexible-screen device. The flexible screen comprises at least a first screen and a second screen, each of which is provided with at least one sensor. The method comprises the following steps: a first application displayed on the first screen receives first data from a sensor provided on the first screen and changes its display content according to the first data; a second application displayed on the second screen receives second data from a sensor provided on the second screen and changes its display content according to the second data; the first application and the second application run in the foreground concurrently.

Description

Method for applying multiple sensors to electronic equipment with flexible screen and electronic equipment
Technical Field
The application relates to the technical field of terminals, in particular to a method for applying multiple sensors to electronic equipment with a flexible screen and the electronic equipment.
Background
Mobile phones offer increasingly rich functions and come in increasingly diverse forms. Some manufacturers have already brought flexible screens to mobile phones. A flexible screen, also called a flexible OLED (organic light-emitting diode) display, is thinner and lighter than a conventional screen, and because it can be bent it is also more durable. As shown in fig. 1, a user may fold a mobile phone with a flexible screen along one or more virtual straight lines, and may fold it at different angles. After folding, a single flexible screen can be used as several screens, and the phone's applications may run on one or more of them. A mobile phone with a flexible screen therefore needs to resolve questions such as whether an application runs on the whole screen or on one or more of the screens, and how applications cooperate when running on multiple screens.
Disclosure of Invention
The application provides a method for using multiple sensors in an electronic device having a flexible screen, and a corresponding electronic device, which can provide services to the user according to the data of the multiple sensors in different postures of the device and improve the user's experience with a flexible-screen device.
To this end, the following technical solutions are adopted:
In a first aspect, the present application provides a method for applying multiple sensors to an electronic device having a flexible screen. The flexible screen includes at least a first screen and a second screen, each provided with at least one sensor. The method may include: a first application displayed on the first screen receives first data from a sensor provided on the first screen and changes its display content according to the first data; a second application displayed on the second screen receives second data from a sensor provided on the second screen and changes its display content according to the second data; the first application and the second application run in the foreground concurrently.
The first application and the second application are displayed simultaneously on the main screen and the secondary screen, respectively, and each receives the data of the sensors provided on its own screen. The application on the main screen can thus derive data such as the posture of the main screen from the main-screen sensor data and change its display content accordingly, while the application on the secondary screen can derive the posture of the secondary screen from the secondary-screen sensor data and change its display content accordingly. In this way, the application on the main screen and the application on the secondary screen can work cooperatively.
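The per-screen sensor routing described above can be sketched as follows. This is a minimal illustration rather than the patent's implementation; all class, method, and application names here are hypothetical.

```python
class App:
    """A foreground application that changes its display content
    according to the sensor data it receives (illustrative)."""
    def __init__(self, name):
        self.name = name
        self.display_state = None

    def on_sensor_data(self, data):
        # Change the display content according to the received data.
        self.display_state = f"{self.name} rendered with {data}"


class ScreenSensorDispatcher:
    """Delivers each screen's sensor readings only to the application
    displayed on that screen, so two concurrently foreground apps can
    react to their own screen's sensors independently."""
    def __init__(self):
        self.apps = {}  # screen id -> foreground app on that screen

    def show(self, screen, app):
        self.apps[screen] = app

    def dispatch(self, screen, data):
        app = self.apps.get(screen)
        if app is not None:
            app.on_sensor_data(data)


# Usage: one game instance shown on each half of the flexible screen.
game = App("racing-game-left")
game2 = App("racing-game-right")
dispatcher = ScreenSensorDispatcher()
dispatcher.show("main", game)
dispatcher.show("secondary", game2)
dispatcher.dispatch("main", {"gyro": (0.1, 0.0, 0.0)})
dispatcher.dispatch("secondary", {"gyro": (0.0, 0.2, 0.0)})
```

A cooperative variant, as in the next implementation manner, would simply register each application for both screens' sensor streams.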
With reference to the first aspect, in a possible implementation manner, the method may further include: the first application receives second data or display content of the second application; the second application receives the first data or the display content of the first application; the first application changes the display content of the first application according to the received second data or the display content of the second application; the second application changes the display content of the second application according to the received first data or the display content of the first application.
In this method, an application displayed on the main screen or on the secondary screen may receive the output data of the sensors provided on both screens. The application displayed on the main screen (or the secondary screen) can therefore obtain data such as the postures of both screens; for example, the first application displayed on the main screen can learn of the user's operation on the second application displayed on the secondary screen, so that the first application and the second application work cooperatively.
With reference to the first aspect, in one possible implementation manner, the first application and the second application are the same application. That is, the first application and the second application are the same application program running on both screens; alternatively, the first application and the second application are the same application program displayed on both screens.
With reference to the first aspect, in one possible implementation manner, the first application and the second application are game-class applications.
With reference to the first aspect, in a possible implementation manner, the first application is a shooting application, and the second application is a video playing application or a picture displaying application.
With reference to the first aspect, in a possible implementation manner, the first application is a payment-type application, and the second application is a picture display page in the first application.
With reference to the first aspect, in a possible implementation manner, the sensor provided on the electronic device includes at least one of a gyroscope sensor, an acceleration sensor, a magnetometer, an ambient light sensor, or a proximity light sensor.
In a second aspect, the present application provides a method for applying multiple sensors to an electronic device having a flexible screen. The electronic device is provided with a plurality of sensors, including a first sensor and a second sensor; the flexible screen includes at least a first screen and a second screen; the first sensor is a magnetometer arranged on the first screen, and a magnetic object is arranged on the second screen. The method may include the following steps: if the magnetic-field change data detected by the first sensor meets a preset condition, the electronic device determines that the first screen and the second screen are unfolded; when the electronic device is in the screen-off state, the first sensor is on and the second sensor is off; in response to the first screen and the second screen being unfolded, the electronic device turns the second sensor on.
In this method, in the screen-off state the magnetometer is kept on while the second sensor is kept off and does not run. The second sensor is activated only when the magnetometer data indicates that the device has been unfolded. This reduces the power consumption of the electronic device in the screen-off state.
With reference to the second aspect, in one possible implementation manner, the preset condition on the magnetic-field change data is that the sum of the absolute values of the magnetic flux on the three axes of the coordinate system is smaller than a first threshold.
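The unfold condition and the power-gating behaviour of the second aspect can be sketched as follows. The threshold value and all function names are assumptions, since the patent defines the condition only abstractly.

```python
def flux_sum(mx, my, mz):
    """Sum of the absolute magnetic flux on the three coordinate axes."""
    return abs(mx) + abs(my) + abs(mz)


def is_unfolded(mx, my, mz, threshold=60.0):
    """Preset condition of the second aspect: the flux sum falls below a
    first threshold once the magnetometer on the first screen moves away
    from the magnetic object on the second screen. 60.0 is a made-up value."""
    return flux_sum(mx, my, mz) < threshold


def on_magnetometer_reading(mx, my, mz, sensors, threshold=60.0):
    """Power-gating sketch: in the screen-off state only the magnetometer
    runs; the second sensor (gyroscope/accelerometer) is switched on once
    unfolding is detected."""
    if is_unfolded(mx, my, mz, threshold):
        sensors["gyro"] = True
        sensors["accel"] = True
    return sensors
```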
With reference to the second aspect, in a possible implementation manner, the method may further include: the electronic device illuminates the first screen and the second screen in response to the first screen and the second screen being expanded.
With reference to the second aspect, in a possible implementation manner, when the included angle between the first screen and the second screen is 0, the projection of the magnetic object onto the first screen partially overlaps the magnetometer. Thus, when the two screens of the electronic device are folded together, the magnetometer is close to the magnetic object and its magnetic flux is large; when the two screens are unfolded, the magnetometer moves away from the magnetic object and its magnetic flux decreases.
With reference to the second aspect, in one possible implementation manner, the distance between the magnetometer and the magnetic object increases as the included angle between the first screen and the second screen increases. That is, as the angle between the first screen and the second screen increases, the distance between the magnetometer and the magnetic object increases, and the magnetic flux of the magnetometer decreases.
With reference to the second aspect, in one possible implementation manner, the second sensor includes at least one of a gyroscope sensor and an acceleration sensor.
In a third aspect, the present application provides a method for applying multiple sensors to an electronic device having a flexible screen. The flexible screen includes at least a main screen and a secondary screen, each provided with at least one sensor. The method may include the following steps: the electronic device receives a first operation of a user; in response to the first operation, the electronic device lights up its flexible screen in a lighting mode corresponding to the posture of the electronic device. The lighting mode comprises lighting the main screen, lighting the secondary screen, or lighting the large screen (the large screen comprising the main screen and the secondary screen). The first operation includes: the user pressing the power key, the user unlocking the screen with a fingerprint, face recognition, or a password, the user double-tapping the screen, an incoming call, an external device being plugged in, or the electronic device being unfolded. The posture of the electronic device comprises at least one of: a resting state with the main screen facing up, a resting state with the secondary screen facing up, a folded handheld state, a stand handheld state, a dual-stand landscape state, a dual-stand portrait state, a single-stand main-screen state, a single-stand secondary-screen state, and an unfolded large-screen state.
In this method, the electronic device can light up the appropriate screen according to its posture. Compared with always lighting a fixed screen, lighting the screen adaptively according to the posture gives the user a better experience.
With reference to the third aspect, in one possible implementation manner, the electronic device lighting up its flexible screen in a lighting mode corresponding to its posture in response to the first operation of the user includes: if the electronic device is in the resting state with the main screen facing up, it lights up the main screen in response to the first operation; if it is in the resting state with the secondary screen facing up, it lights up the secondary screen; if it is in the stand handheld state, the dual-stand landscape state, the dual-stand portrait state, or the single-stand main-screen state, it lights up the main screen; if it is in the single-stand secondary-screen state, it lights up the secondary screen; if it is in the unfolded large-screen state, it lights up the large screen; if it is in the folded handheld state, it lights up the screen facing the user, which may be the main screen or the secondary screen.
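The posture-to-lighting mapping in the implementation manner above amounts to a simple lookup; the posture identifiers below are illustrative names for the states described in the text, not identifiers from the patent.

```python
# Which screen to light for each posture (names are illustrative).
LIGHT_MODE = {
    "resting_main_up":        "main",
    "resting_secondary_up":   "secondary",
    "folded_handheld":        "facing",     # main or secondary, whichever faces the user
    "stand_handheld":         "main",
    "dual_stand_landscape":   "main",
    "dual_stand_portrait":    "main",
    "single_stand_main":      "main",
    "single_stand_secondary": "secondary",
    "unfolded":               "large",      # large screen = main + secondary
}


def screen_to_light(posture, facing_screen="main"):
    """Return the screen to light for a posture; in the folded handheld
    state the caller supplies which screen currently faces the user."""
    mode = LIGHT_MODE[posture]
    return facing_screen if mode == "facing" else mode
```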
With reference to the third aspect, in one possible implementation manner, the plurality of sensors include a gyroscope sensor and an acceleration sensor, and the electronic device may determine its posture by combining the sensor data of the main screen and the secondary screen. For example, the electronic device obtains the gravitational acceleration of the main screen from the rotational angular velocity output by the gyroscope sensor arranged on the main screen and the acceleration output by the acceleration sensor arranged on the main screen; it likewise obtains the gravitational acceleration of the secondary screen from the gyroscope and acceleration sensors arranged on the secondary screen; it obtains the included angle between the main screen and the secondary screen from the two gravitational accelerations; and it determines the posture of the electronic device from the gravitational acceleration of the main screen, the gravitational acceleration of the secondary screen, and the included angle between the two screens. In this way, the electronic device can light up its flexible screen in the lighting mode corresponding to its posture.
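The included angle between the two screens can be estimated from the two gravitational-acceleration vectors, for example with a dot product. The patent does not give the formula; the mapping below (hinge angle = 180 degrees minus the angle between the vectors, so that coplanar screens read 180) is an assumption about how the vectors are expressed in each screen's frame.

```python
import math


def included_angle(g_main, g_secondary):
    """Hinge angle, in degrees, estimated from the gravity vectors measured
    on the main and secondary screens (each a 3-tuple in its screen's frame).

    Assumption: when the device is fully unfolded both screens measure
    gravity in the same direction, so the hinge angle is taken as
    180 degrees minus the angle between the two vectors."""
    dot = sum(a * b for a, b in zip(g_main, g_secondary))
    norm_main = math.sqrt(sum(a * a for a in g_main))
    norm_secondary = math.sqrt(sum(b * b for b in g_secondary))
    # Clamp to [-1, 1] to guard against floating-point drift in acos.
    cos_theta = max(-1.0, min(1.0, dot / (norm_main * norm_secondary)))
    return 180.0 - math.degrees(math.acos(cos_theta))
```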
With reference to the third aspect, in a possible implementation manner, the sensor may include a gyroscope sensor and an acceleration sensor, and the electronic device may determine the posture of the main screen according to sensor data of the main screen and determine the posture of the sub screen according to sensor data of the sub screen, so as to determine the posture of the electronic device. For example, the electronic device obtains the gravitational acceleration of the main screen according to the rotational angular velocity output by the gyro sensor arranged on the main screen and the acceleration output by the acceleration sensor arranged on the main screen; determining the posture of the main screen according to the gravity acceleration of the main screen; the electronic equipment acquires the gravity acceleration of the auxiliary screen according to the rotation angular velocity output by the gyroscope sensor arranged on the auxiliary screen and the acceleration output by the acceleration sensor arranged on the auxiliary screen; determining the posture of the secondary screen according to the gravity acceleration of the secondary screen; the electronic device determines the attitude of the electronic device according to the attitude of the main screen and the attitude of the sub-screen.
With reference to the third aspect, in a possible implementation manner, the main screen and the secondary screen are each provided with a touch device. Using a preset grip algorithm, the electronic device determines the user's current grip posture from the current touch positions reported by the touch devices on the main screen and the secondary screen, and from the grip posture determines which screen of the electronic device faces the user, the facing screen being the main screen or the secondary screen.
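The patent only says a "preset grip algorithm" is used. One plausible heuristic, shown here purely as an illustration and not taken from the patent, is that the screen covered by the larger touched area faces the palm, so the other screen faces the user.

```python
def facing_screen(touch_area_main, touch_area_secondary):
    """Hypothetical grip heuristic: when a folded phone is gripped, the palm
    and fingers cover more of the screen facing the palm, so the screen with
    the smaller touched area is taken to face the user."""
    return "main" if touch_area_main <= touch_area_secondary else "secondary"
```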
In a fourth aspect, the present application provides a method for applying a multi-sensor to an electronic device having a flexible screen. The flexible screen of the electronic equipment at least comprises a main screen and an auxiliary screen, and the main screen and the auxiliary screen are respectively provided with a plurality of sensors. The electronic equipment determines the change of the physical form of the electronic equipment according to the data of the sensors; if the physical form of the electronic equipment is determined to be changed from the first physical form to the second physical form, the electronic equipment displays a first object on the flexible screen; and if the physical form of the electronic equipment is determined to be changed from the second physical form to the first physical form, the electronic equipment displays the second object on the flexible screen. The physical form of the electronic device comprises an unfolding state, a support state and a folding state.
In this method, the electronic device detects from the sensor data that it is being unfolded or folded and can display different objects accordingly; for example, it may display a screen-opening animation when unfolded and a screen-closing animation when it detects that it is being folded.
With reference to the fourth aspect, in one possible implementation manner, changing a physical form of the electronic device from a first physical form to a second physical form includes: the physical form of the electronic equipment is changed from a folded state to a support state, or from the support state to an unfolded state, or from the folded state to the unfolded state; the change of the physical form of the electronic equipment from the second physical form to the first physical form comprises the following steps: the physical form of the electronic device is changed from an unfolded state to a support state, or from the support state to a folded state, or from the unfolded state to the folded state.
With reference to the fourth aspect, in a possible implementation manner, the plurality of sensors include a gyroscope sensor and an acceleration sensor, and the electronic device obtains the gravitational acceleration of the main screen according to a rotational angular velocity output by the gyroscope sensor arranged on the main screen and an acceleration output by the acceleration sensor arranged on the main screen; the electronic equipment acquires the gravity acceleration of the auxiliary screen according to the rotation angular velocity output by the gyroscope sensor arranged on the auxiliary screen and the acceleration output by the acceleration sensor arranged on the auxiliary screen; the electronic equipment acquires an included angle between the main screen and the auxiliary screen according to the gravity acceleration of the main screen and the gravity acceleration of the auxiliary screen; and the electronic equipment determines the physical form of the electronic equipment according to the included angle between the main screen and the auxiliary screen.
In a fifth aspect, the present application provides a method for applying multiple sensors to an electronic device having a flexible screen, where the flexible screen includes at least a main screen and a secondary screen, each provided with at least one sensor, the at least one sensor including a gyroscope sensor. With the electronic device in the folded state, a first screen of the electronic device is lit; the electronic device detects, according to the gyroscope sensor, an operation of the user turning over the mobile phone; in response to this operation, the electronic device turns off the first screen and lights up the second screen. The first screen is the main screen or the secondary screen; when the first screen is the main screen, the second screen is the secondary screen, and when the first screen is the secondary screen, the second screen is the main screen. In this way, the electronic device decides to switch screens according to the gyroscope data, reducing the probability of a mistaken screen switch.
With reference to the fifth aspect, in a possible implementation manner, before turning off the first screen and lighting up the second screen in response to the user turning over the mobile phone, the electronic device detects that the turning of the device has stopped. That is, while the electronic device is still being turned over (possibly several times), no screen switch is performed; this reduces the probability of a mistaken screen switch and improves the user's experience.
With reference to the fifth aspect, in a possible implementation manner, the electronic device determines its rotation angle per unit time from the data of the gyroscope sensor; if the rotation angle in the unit time is zero, the electronic device determines that the turning of the device has stopped.
With reference to the fifth aspect, in a possible implementation manner, after turning off the first screen and lighting up the second screen in response to the user turning over the mobile phone, if the electronic device determines that the currently lit screen is the second screen and determines, from the user's grip posture, that the second screen faces the user's palm, the electronic device turns off the second screen and lights up the first screen. In this method, if the grip posture indicates that the currently lit screen faces the user's palm, the electronic device can correct the mistake by switching screens again so that the screen facing the user is lit.
With reference to the fifth aspect, in one possible implementation manner, the electronic device detects a grip gesture of a user gripping the electronic device; the first screen is determined to face the palm of the user according to the grasping gesture of the user grasping the electronic device.
With reference to the fifth aspect, in a possible implementation manner, if the electronic device determines from the gyroscope sensor data that the angle through which the device has rotated about one axis of the coordinate system is greater than a first angle, it determines that an operation of the user turning over the mobile phone is detected.
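The flip-detection and flip-stop conditions of the fifth aspect can be combined into a small state machine. The axis choice, the 150-degree value of the first angle, and the class interface are all assumptions.

```python
class FlipDetector:
    """Switch screens only after a flip is detected (accumulated rotation
    about one axis exceeds a first angle) and the rotation has stopped
    (zero rotation in a unit interval), as described above."""

    FIRST_ANGLE = 150.0  # degrees; hypothetical value for the first angle

    def __init__(self):
        self.accumulated = 0.0
        self.flip_detected = False

    def feed(self, angular_velocity_dps, dt):
        """Feed one gyroscope sample: rotation rate in degrees/second about
        the chosen axis, over a sampling interval dt in seconds.
        Returns "switch_screen" when the screen switch should be performed."""
        delta = angular_velocity_dps * dt
        self.accumulated += delta
        if abs(self.accumulated) > self.FIRST_ANGLE:
            self.flip_detected = True
        if self.flip_detected and delta == 0.0:
            # Rotation has stopped after a detected flip: switch screens.
            self.flip_detected = False
            self.accumulated = 0.0
            return "switch_screen"
        return None
```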
With reference to the fifth aspect, in a possible implementation manner, the electronic device may further determine that the electronic device is in the folded state according to data of the plurality of sensors. In one possible implementation manner, the plurality of sensors include a gyroscope sensor and an acceleration sensor, and the electronic device acquires the gravitational acceleration of the main screen according to a rotational angular velocity output by the gyroscope sensor arranged on the main screen and an acceleration output by the acceleration sensor arranged on the main screen; the electronic equipment acquires the gravity acceleration of the auxiliary screen according to the rotation angular velocity output by the gyroscope sensor arranged on the auxiliary screen and the acceleration output by the acceleration sensor arranged on the auxiliary screen; the electronic equipment acquires an included angle between the main screen and the auxiliary screen according to the gravity acceleration of the main screen and the gravity acceleration of the auxiliary screen; the electronic equipment determines the physical form of the electronic equipment according to the included angle between the main screen and the auxiliary screen; the physical form of the electronic device includes an unfolded state, a stand state, and a folded state.
With reference to the fifth aspect, in a possible implementation manner, the electronic device determining its physical form according to the included angle between the main screen and the secondary screen includes: if the included angle is greater than a first threshold and smaller than or equal to 180 degrees, the electronic device determines that it is in the unfolded state; if the included angle is greater than or equal to 0 degrees and smaller than or equal to a second threshold, the electronic device determines that it is in the folded state, the second threshold being smaller than the first threshold; and if the included angle is greater than the second threshold and smaller than or equal to the first threshold, the electronic device determines that it is in the stand state.
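The angle-based classification above can be sketched directly; the two threshold values below are placeholders, since the patent leaves them unspecified apart from requiring the second threshold to be smaller than the first.

```python
def physical_form(angle_deg, first_threshold=150.0, second_threshold=30.0):
    """Classify the hinge angle into the three physical forms described
    above. The 150/30 degree thresholds are hypothetical."""
    if not 0.0 <= angle_deg <= 180.0:
        raise ValueError("hinge angle must be between 0 and 180 degrees")
    if angle_deg > first_threshold:
        return "unfolded"
    if angle_deg <= second_threshold:
        return "folded"
    return "stand"
```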
With reference to the fifth aspect, in a possible implementation manner, the determining, by the electronic device according to data of the plurality of sensors, that the electronic device is in the folded state includes: if the sum of the absolute values of the magnetic fluxes of the magnetometers on the three axes of the coordinate system is determined to be greater than or equal to a first threshold, the electronic device determines that the electronic device is in a folded state.
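The magnetometer-based folded-state check in this implementation reduces to a single comparison; a sketch follows (the flux values and threshold used in the test are illustrative, not from the source):

```python
def is_folded(flux_x, flux_y, flux_z, first_threshold):
    """Determine the folded state from magnetometer data: when the two
    screens close together, the total magnetic flux rises, so the sum of
    the absolute fluxes on the three coordinate axes meets or exceeds
    the first threshold."""
    return abs(flux_x) + abs(flux_y) + abs(flux_z) >= first_threshold
```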
In a sixth aspect, the present application provides an electronic device, comprising: a flexible screen, one or more sensors, one or more processors, one or more memories, and one or more computer programs; wherein the processor is coupled to the sensor, the flexible screen and the memory, wherein the one or more computer programs are stored in the memory, and wherein, when the electronic device is running, the processor executes the one or more computer programs stored in the memory to cause the electronic device to perform the method of any of the above aspects and alternative implementations thereof.
In a seventh aspect, the present application provides a computer storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of the above aspects and optional implementations thereof.
In an eighth aspect, the present application provides a computer program product, which, when run on an electronic device, causes the electronic device to perform the method of any of the above aspects and alternative implementations thereof.
It is to be understood that the electronic device according to the sixth aspect, the computer storage medium according to the seventh aspect, and the computer program product according to the eighth aspect are all configured to execute the corresponding methods provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which are not repeated here.
Drawings
FIG. 1 is a schematic diagram of an electronic device with a flexible screen;
fig. 2A is a first schematic structural diagram of an electronic device with a flexible screen according to an embodiment of the present disclosure;
fig. 2B is a second schematic structural diagram of an electronic device with a flexible screen according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram three of an electronic device having a flexible screen according to an embodiment of the present application;
fig. 4 is a schematic software architecture diagram of an electronic device with a flexible screen according to an embodiment of the present application;
fig. 5 is a first schematic diagram illustrating a method for applying a multi-sensor to an electronic device having a flexible screen according to an embodiment of the present disclosure;
fig. 6A is a schematic structural diagram of an electronic device with a flexible screen according to an embodiment of the present disclosure;
fig. 6B is a schematic diagram illustrating a method for applying a multi-sensor to an electronic device having a flexible screen according to an embodiment of the present disclosure;
FIG. 7 is a schematic of a six-axis A + G fusion algorithm;
fig. 8A is a schematic view illustrating a method for applying a multi-sensor to an electronic device having a flexible screen according to an embodiment of the present application;
fig. 8B is a schematic diagram of a method for applying a multi-sensor to an electronic device having a flexible screen according to an embodiment of the present application;
fig. 9 is a schematic diagram of a fifth method for applying a multi-sensor to an electronic device with a flexible screen according to an embodiment of the present application;
fig. 10A is a first schematic view of a scenario illustrating a method for applying a multi-sensor to an electronic device having a flexible screen according to an embodiment of the present application;
fig. 10B is a schematic view of a second scenario of a method for applying a multi-sensor to an electronic device with a flexible screen according to an embodiment of the present application;
fig. 10C is a schematic view of a third scenario of a method for applying a multi-sensor to an electronic device with a flexible screen according to an embodiment of the present application;
fig. 11A is a schematic structural diagram of an electronic device with a flexible screen according to an embodiment of the present application;
fig. 11B is a sixth schematic view of a method for applying a multi-sensor to an electronic device with a flexible screen according to an embodiment of the present application;
fig. 11C is a schematic diagram seven of a method for applying a multi-sensor to an electronic device with a flexible screen according to an embodiment of the present application;
fig. 12 is a sixth schematic structural diagram of an electronic device with a flexible screen according to an embodiment of the present application;
fig. 13 is a fourth schematic view of a scene of a method for applying multiple sensors to an electronic device with a flexible screen according to an embodiment of the present application;
fig. 14 is a scene schematic diagram five of a method for applying multiple sensors to an electronic device with a flexible screen according to an embodiment of the present application;
fig. 15 is a sixth schematic view of a scene of a method for applying multiple sensors to an electronic device with a flexible screen according to an embodiment of the present application;
fig. 16A is a schematic view illustrating a scene seven of a method for applying a multi-sensor to an electronic device with a flexible screen according to an embodiment of the present application;
fig. 16B is a schematic view illustrating a scene eight of a method for applying multiple sensors to an electronic device with a flexible screen according to an embodiment of the present application;
fig. 16C is a diagram illustrating a ninth scenario of a method for applying a multi-sensor to an electronic device with a flexible screen according to an embodiment of the present application;
fig. 16D is a scene schematic diagram ten illustrating a method for applying multiple sensors to an electronic device with a flexible screen according to an embodiment of the present application;
fig. 17 is an eleventh schematic view of a scene of a method for applying a multi-sensor to an electronic device with a flexible screen according to an embodiment of the present application;
fig. 18 is a schematic view eight of a method for applying a multi-sensor to an electronic device with a flexible screen according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of an electronic device with a flexible screen according to an embodiment of the present application;
fig. 20 is a seventh structural schematic diagram of an electronic device with a flexible screen according to an embodiment of the present application.
Detailed Description
The electronic device with the flexible screen can be unfolded for use, and can also be folded along one or more folding lines for use. The position of the folding line can be preset or can be arbitrarily selected by a user.
Referring to fig. 2A (a), the electronic device is in an unfolded state. The user folds the electronic device shown in fig. 2A (a) along a folding line. As shown in (b) of fig. 2A, after the user folds the electronic device along the folding line AB, the flexible screen of the electronic device is divided along the folding line AB into two display areas, i.e., a display area 1 and a display area 2. In the embodiment of the present application, the display area 1 and the display area 2 may serve as two independent display areas after folding. For example, the display area 1 may be referred to as the main screen of the electronic device, and the display area 2 may be referred to as the sub-screen of the electronic device. The display areas of the main screen and the sub-screen may be the same or different in size. An included angle α is formed between the main screen and the sub-screen, as shown in fig. 2A (b), and the electronic device is in a stand state. The user may continue to fold the electronic device along the folding line AB; as shown in fig. 2A (c), the electronic device is then in a folded state.
It is understood that the display area 1 and the display area 2 may have other names, for example, the display area 1 is referred to as an a screen of the electronic device, and the display area 2 is referred to as a B screen of the electronic device; alternatively, the display area 1 is referred to as a sub-screen of the electronic device, and the display area 2 is referred to as a main screen of the electronic device; this is not limited in the examples of the present application. In the embodiment of the present application, a description will be given by taking the display area 1 as a main screen and the display area 2 as a sub screen as an example.
It should be noted that the electronic device may be folded outwards along the folding line, so that the folded screens face outwards, that is, the main screen and the sub-screen face away from each other; or folded inwards along the folding line, so that the folded screens face inwards, that is, the main screen and the sub-screen face each other. In the embodiments of the present application, the electronic device is described as being folded outwards.
According to the included angle between the main screen and the auxiliary screen of the electronic equipment, the physical form of the electronic equipment can be divided into an unfolding state, a support state and a folding state. Illustratively, as shown in fig. 2A (a), when an angle α between the main screen and the sub-screen is greater than a first threshold value (e.g., 165 °), the electronic device is in the unfolded state. As shown in fig. 2A (b), when the angle α between the main screen and the sub screen is less than or equal to a first threshold value and greater than a second threshold value (e.g., 20 °), the electronic apparatus is in the stand state. As shown in (c) of fig. 2A, when the angle α between the main screen and the sub screen is less than or equal to a second threshold (e.g., 20 °), the electronic apparatus is in the folded state. It will be appreciated that the angle α between the primary and secondary screens is within a closed interval of 0 to 180 °.
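The angle-to-form mapping above, with the example thresholds of 165° and 20°, can be expressed as follows (a sketch; the constants are the example values from the text, not fixed by the method):

```python
FIRST_THRESHOLD = 165.0   # degrees; example value from the text
SECOND_THRESHOLD = 20.0   # degrees; example value from the text

def physical_form(alpha):
    """Map the included angle alpha between main screen and sub-screen
    (closed interval [0, 180] degrees) to the device's physical form."""
    if not 0.0 <= alpha <= 180.0:
        raise ValueError("alpha must lie in [0, 180] degrees")
    if alpha > FIRST_THRESHOLD:
        return "unfolded"
    if alpha <= SECOND_THRESHOLD:
        return "folded"
    return "stand"
```

Note that the boundary angles fall into the stand state: 165° is not greater than the first threshold, and 20° is not greater than the second threshold's upper bound for the folded state, matching the closed/open interval choices in the text.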
Correspondingly, as shown in fig. 2B (a), when the included angle α between the main screen and the sub screen is greater than a first threshold (e.g., 165 °), the electronic device is in the unfolded state. As shown in (B) of fig. 2B, when the angle α between the main screen and the sub screen is less than or equal to a first threshold value and greater than a second threshold value (e.g., 20 °), the electronic apparatus is in the stand state. As shown in (c) of fig. 2B, when the angle α between the main screen and the sub screen is less than or equal to a second threshold (e.g., 20 °), the electronic apparatus is in the folded state. Wherein the position of the AB fold line may be referred to as the pivot axis.
The electronic device can take different physical forms to meet different use requirements of the user. For example, when watching a video, a better experience can generally be obtained in the unfolded state, so the user unfolds the electronic device for use. When playing a game, multiple users can play together; for example, the electronic device is folded into the stand state, with one user using the main screen and another user using the sub-screen. When the user needs to carry the electronic device, the folded state is more convenient. Because the electronic device with the flexible screen can take various physical forms, the scenarios in which the user uses the electronic device are richer, and the electronic device can provide more flexible and diverse services for the user.
The embodiment of the application provides a method for applying multiple sensors to electronic equipment with a flexible screen, which can utilize the multiple sensors to acquire parameters such as physical form, posture, motion direction and the like of the electronic equipment, provide corresponding services of a scene when a user uses the electronic equipment, and improve the use experience of the user.
The method for applying multiple sensors to an electronic device with a flexible screen provided in the embodiments of the present application can be applied to electronic devices with a flexible screen, such as a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), a wearable device, and a virtual reality device; the embodiments of the present application do not limit this.
Taking the mobile phone 100 as an example of the above-mentioned electronic device, fig. 3 shows a schematic structural diagram of the mobile phone.
The mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a flexible screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the mobile phone 100. In other embodiments of the present application, the handset 100 may include more or fewer components than shown, or some components may be combined, some components may be separated, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be a neural center and a command center of the cell phone 100, among others. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180L, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180L through an I2C interface, so that the processor 110 and the touch sensor 180L communicate through an I2C bus interface to implement the touch function of the mobile phone 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial and parallel formats. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as flexible screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through a CSI interface to implement the camera function of the handset 100. The processor 110 and the flexible screen 194 communicate through the DSI interface to implement the display function of the mobile phone 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the flexible screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the mobile phone 100, and may also be used to transmit data between the mobile phone 100 and peripheral devices. It can also be used to connect earphones and play audio through the earphones. The interface may also be used to connect other electronic devices, such as AR devices.
It should be understood that the interface connection relationship between the modules illustrated in the embodiment of the present application is only an exemplary illustration, and does not constitute a limitation on the structure of the mobile phone 100. In other embodiments of the present application, the mobile phone 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the cell phone 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the flexible screen 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the mobile phone 100 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the handset 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays images or video through the flexible screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the mobile phone 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the antenna 1 of the handset 100 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160 so that the handset 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The mobile phone 100 implements the display function through the GPU, the flexible screen 194, and the application processor. The GPU is a microprocessor for image processing, connected to the flexible screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information. In the embodiment of the present application, a display and a touch panel (TP) may be included in the flexible screen 194. The display is used to output display content to the user, and the touch panel is used to receive touch events input by the user on the flexible screen 194.
The mobile phone 100 can implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the flexible screen 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the handset 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the mobile phone 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. Handset 100 may support one or more video codecs. Thus, the handset 100 can play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can realize applications such as intelligent recognition of the mobile phone 100, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the mobile phone 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the cellular phone 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The data storage area may store data (e.g., audio data, a phonebook, etc.) created during use of the handset 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The mobile phone 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The cellular phone 100 can listen to music through the speaker 170A or listen to a hands-free call.
The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals. When the mobile phone 100 answers a call or receives voice information, the voice can be heard by holding the receiver 170B close to the ear.
The microphone 170C, also called a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal by speaking with the mouth close to the microphone 170C. The mobile phone 100 may be provided with at least one microphone 170C. In other embodiments, the mobile phone 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the mobile phone 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, and the like.
The headset interface 170D is used to connect a wired headset. The headset interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the flexible screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates made of conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the mobile phone 100 determines the pressure intensity from the change in capacitance. When a touch operation acts on the flexible screen 194, the mobile phone 100 detects the intensity of the touch operation through the pressure sensor 180A, and can also calculate the touch position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but have different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
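The pressure-threshold dispatch described above can be sketched as a small function. This is a minimal illustration only; the threshold value, units, and instruction names are assumptions, not taken from the patent:

```python
# Sketch of the pressure-threshold dispatch for the short message icon.
# The threshold value, units, and instruction names are assumptions.
FIRST_PRESSURE_THRESHOLD = 0.5  # normalized pressure intensity (assumed)

def sms_icon_instruction(pressure):
    """Map a touch on the short message icon to an instruction by intensity."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"       # light press: view short messages
    return "create_new_short_message"     # press at/above threshold: new message
```

The same pattern generalizes to any per-icon mapping from pressure bands to instructions.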
The air pressure sensor 180B is used to measure air pressure. In some embodiments, the mobile phone 100 calculates the altitude from the air pressure value measured by the air pressure sensor 180B, to assist positioning and navigation.
The gyroscope sensor 180C may be used to determine the motion posture of the mobile phone 100. In some embodiments, the angular velocities of the mobile phone 100 about three axes (i.e., the x, y, and z axes) may be determined through the gyroscope sensor 180C. The gyroscope sensor 180C may also be used for image stabilization during photographing. Illustratively, when the shutter is pressed, the gyroscope sensor 180C detects the shake angle of the mobile phone 100, calculates the distance that the lens module needs to compensate according to the shake angle, and allows the lens to counteract the shake of the mobile phone 100 through reverse movement, thereby implementing image stabilization. The gyroscope sensor 180C may also be used in navigation and motion-sensing game scenarios.
The acceleration sensor 180D can detect the magnitude of acceleration of the cellular phone 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the handset 100 is stationary.
The magnetic sensor 180E includes a hall sensor, a magnetometer, and the like. The Hall sensor can detect the direction of the magnetic field; magnetometers are used to measure the magnitude and direction of magnetic fields. The magnetometer can measure the ambient magnetic field strength, for example, the magnetometer can be used to measure the magnetic field strength so as to obtain azimuth information of the carrier of the magnetometer.
The touch device 180F may be used to detect a touch position of the user. In some embodiments, a touch point of the user on the mobile phone 100 may be detected through the touch device 180F, and then a grip gesture of the user currently gripping the mobile phone is determined according to the touch position by using a preset grip algorithm.
The distance sensor 180G is used to measure distance. The mobile phone 100 may measure distance by infrared or laser. In some embodiments, in a photographing scenario, the mobile phone 100 may use the distance sensor 180G to measure distance for fast focusing.
The proximity light sensor 180H may include, for example, a light emitting diode (LED) and a light detector such as a photodiode. The light emitting diode may be an infrared light emitting diode. The mobile phone 100 emits infrared light outward through the light emitting diode, and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the mobile phone 100 can determine that there is an object nearby; when insufficient reflected light is detected, the mobile phone 100 can determine that there is no object nearby. The mobile phone 100 can use the proximity light sensor 180H to detect that the user is holding the mobile phone 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180H can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
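The proximity decision above reduces to a simple threshold test on reflected light. A minimal sketch, assuming an arbitrary threshold and illustrative function names (none of these values appear in the patent):

```python
# Sketch of the proximity decision: enough reflected infrared light means
# an object is near. Threshold and function names are assumptions.
REFLECTION_THRESHOLD = 100  # photodiode reading, arbitrary units (assumed)

def object_nearby(reflected_light):
    """True when enough reflected infrared light is detected."""
    return reflected_light >= REFLECTION_THRESHOLD

def should_turn_off_screen(in_call, reflected_light):
    """Turn the screen off when the phone is held to the ear during a call."""
    return in_call and object_nearby(reflected_light)
```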
The ambient light sensor 180M is used to sense the ambient light level. The handset 100 may adaptively adjust the brightness of the flexible screen 194 based on the perceived ambient light level. The ambient light sensor 180M may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180M may also cooperate with the proximity light sensor 180H to detect whether the mobile phone 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180J is used to collect fingerprints. The mobile phone 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access an application lock, take a photo with a fingerprint, answer an incoming call with a fingerprint, and the like.
The temperature sensor 180K is used to detect temperature. In some embodiments, the mobile phone 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180K. For example, when the temperature reported by the temperature sensor 180K exceeds a threshold, the mobile phone 100 reduces the performance of a processor located near the temperature sensor 180K, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the mobile phone 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is lower than a further threshold, the mobile phone 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
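The three-threshold temperature strategy can be sketched as follows. The concrete threshold values and action names are illustrative assumptions; the patent only describes the strategy qualitatively:

```python
# Illustrative thresholds; the patent gives no concrete values.
HIGH_TEMP_THRESHOLD = 45.0       # deg C, throttle above this (assumed)
LOW_TEMP_THRESHOLD = 0.0         # deg C, heat the battery below this (assumed)
VERY_LOW_TEMP_THRESHOLD = -10.0  # deg C, boost battery voltage below this (assumed)

def thermal_policy(temp_c):
    """Return the list of actions taken at a reported temperature."""
    actions = []
    if temp_c > HIGH_TEMP_THRESHOLD:
        actions.append("reduce_processor_performance")  # thermal protection
    if temp_c < LOW_TEMP_THRESHOLD:
        actions.append("heat_battery")                  # avoid cold shutdown
    if temp_c < VERY_LOW_TEMP_THRESHOLD:
        actions.append("boost_battery_output_voltage")  # avoid cold shutdown
    return actions
```

Note that the two low-temperature actions are independent checks, so at a very low temperature both are applied.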
The touch sensor 180L is also referred to as a "touch panel". The touch sensor 180L may be disposed on the flexible screen 194; together, the touch sensor 180L and the flexible screen 194 form what is commonly called a "touchscreen". The touch sensor 180L is used to detect a touch operation applied on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the flexible screen 194. In other embodiments, the touch sensor 180L may be disposed on the surface of the mobile phone 100 at a position different from that of the flexible screen 194.
The bone conduction sensor 180Q may acquire a vibration signal. In some embodiments, the bone conduction sensor 180Q may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180Q may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180Q may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180Q, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180Q, so as to realize the heart rate detection function.
In the embodiment of the application, multiple sets of sensors can be arranged on the electronic equipment. For example, sensor modules are respectively arranged on a main screen and an auxiliary screen of the electronic device. Illustratively, a main screen of the electronic device is provided with a gyroscope sensor 1 and an acceleration sensor 1, a secondary screen of the electronic device is provided with a gyroscope sensor 2 and an acceleration sensor 2, and by reading output data of the gyroscope sensor 1, the acceleration sensor 1, the gyroscope sensor 2 and the acceleration sensor 2, the electronic device can acquire angular velocity of rotation of the main screen, acceleration of movement of the main screen, angular velocity of rotation of the secondary screen, and acceleration of movement of the secondary screen, and can also acquire gravitational acceleration of the main screen and the secondary screen, and parameters such as an included angle between the main screen and the secondary screen. For example, the main screen and the sub-screen of the electronic device may be respectively provided with multiple sets of touch devices, so that the grip posture of the user when using the electronic device may be determined. The specific arrangement and application method of the sensor in the electronic device are described in detail in the following embodiments.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The mobile phone 100 may receive key input and generate key signal input related to user settings and function control of the mobile phone 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues as well as touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. Touch operations applied to different areas of the flexible screen 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (such as time reminders, receiving information, alarm clocks, games, etc.) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be inserted into or removed from the SIM card interface 195 to make contact with or be separated from the mobile phone 100. The mobile phone 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a standard SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time; the types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards and with external memory cards. The mobile phone 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the mobile phone 100 uses an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the mobile phone 100 and cannot be separated from the mobile phone 100.
The software system of the mobile phone 100 may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes a layered architecture as an example, and exemplifies a software structure of the mobile phone 100. Fig. 4 is a block diagram of a software configuration of the mobile phone 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the handset 100 may include an application layer, an application framework layer, a hardware abstraction layer, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 4, the application layer may be installed with applications such as camera, flip screen, gallery, collaborative mode, video, large/small screen switching, smart screen on/off, and screen on/off animation effects.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 4, the application framework layer may include a window manager, a content provider, a screen switching manager, a resource manager, a notification manager, a view system, a sensor manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The screen switching manager is used for managing screen switching actions, for example, determining whether to perform screen switching processing.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages that automatically disappear after a short stay without user interaction, for example, to notify that a download is complete or to give a message alert. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. It may also, for example, prompt text information in the status bar, play a prompt tone, vibrate the electronic device, or flash the indicator light.
The sensor manager is used to store and process sensor-related data. For example, output data of each sensor, fused data of output data of a plurality of sensors, and the like are provided.
The hardware abstraction layer is an interface layer for the kernel layer and the application framework layer. The method is used for encapsulating the kernel driver, providing an interface upwards and shielding the implementation details of the bottom layer.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following will explain in detail a method for applying the multi-sensor provided by the embodiments of the present application to an electronic device having a flexible screen, by taking a mobile phone as an example of the electronic device, with reference to the accompanying drawings.
Generally, when a user does not use the mobile phone, the mobile phone is in a black screen state (screen-off state), that is, the flexible screen (including the main screen and the auxiliary screen) of the mobile phone is not lighted. The mobile phone can receive a first operation of a user and light a screen of the mobile phone in response to the first operation. The first operation may include pressing a power key by a user, unlocking the screen by using a fingerprint or face recognition or a password, double-clicking the screen by the user, calling, inserting an external device, unfolding the electronic device (the electronic device is unfolded from a folded state to a support state or an unfolded state), and the like.
And the mobile phone can be in different postures when receiving the first operation. For example, referring to fig. 5, the mobile phone can be in a plurality of postures when being in different physical forms. As shown in fig. 5, the postures of the mobile phone in the folded state may include a standing main screen up state, a standing auxiliary screen up state, a folded handheld state, and the like; the postures of the mobile phone in the support state can comprise a support holding state, a double-support transverse screen state, a double-support vertical screen state, a single-support main screen state, a single-support auxiliary screen state and the like; the gesture of the mobile phone in the unfolding state can comprise a large screen unfolding state and the like.
The mobile phone detects the first operation of the user in different gestures, and can light up the screen of the mobile phone in different lighting modes. For example, the lighting manner may include: illuminating the main screen, illuminating the sub-screen, or illuminating the large screen (main screen and sub-screen).
Referring to fig. 5, in some embodiments, when the mobile phone is in the standing main-screen-up state, it detects a first operation of the user and lights up the main screen in response to the first operation; when the mobile phone is in the standing secondary-screen-up state, it detects a first operation of the user and lights up the secondary screen in response; when the mobile phone is in the folded handheld state, it detects a first operation of the user and, in response, lights up the screen (the main screen or the secondary screen) facing the user; when the mobile phone is in the support holding state, the double-support landscape state, the double-support portrait state, or the single-support main screen state, it detects a first operation of the user and lights up the main screen in response; when the mobile phone is in the single-support secondary screen state, it detects a first operation of the user and lights up the secondary screen in response; when the mobile phone is in the large-screen unfolded state, it detects a first operation of the user and lights up the large screen in response.
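The posture-to-lighting mapping described above can be summarized as a lookup table. A minimal sketch; the posture and screen string names are illustrative labels invented here, not identifiers from the patent:

```python
# Posture -> lighting-mode table reconstructed from the behavior described
# above; string names are illustrative, not from the patent.
LIGHT_MODE = {
    "standing_main_up": "main_screen",
    "standing_secondary_up": "secondary_screen",
    "folded_held_main_to_user": "main_screen",        # screen facing the user
    "folded_held_secondary_to_user": "secondary_screen",
    "support_held": "main_screen",
    "double_support_landscape": "main_screen",
    "double_support_portrait": "main_screen",
    "single_support_main": "main_screen",
    "single_support_secondary": "secondary_screen",
    "unfolded_large_screen": "large_screen",          # main + secondary
}

def lighting_mode(posture):
    """Pick which part of the flexible screen to light for a first operation."""
    return LIGHT_MODE[posture]
```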
It should be understood that: references herein to a primary or secondary screen or an entire flexible screen towards a user include: the main screen or the auxiliary screen or the whole flexible screen faces the user at a basically parallel angle with the face of the user, and the main screen or the auxiliary screen or the whole flexible screen faces the user at a certain inclined angle. Further, references herein to a primary or secondary screen or an entire flexible screen facing upwards include: the main screen or the auxiliary screen or the whole flexible screen faces upwards at a horizontal angle, or the main screen or the auxiliary screen or the whole flexible screen faces upwards at a certain angle in the horizontal plane.
When the mobile phone detects the first operation of the user, the gesture of the mobile phone is determined, and the lighting mode is determined according to the gesture of the mobile phone.
In the embodiment of the application, the posture of the mobile phone can be determined using multiple sensors. In some embodiments, multiple gyroscope sensors (GYRO) and multiple acceleration sensors (ACC) may be disposed on the mobile phone. For example, as shown in fig. 6A, the main screen of the mobile phone is provided with a gyroscope sensor GYRO1 and an acceleration sensor ACC1, and the secondary screen is provided with a gyroscope sensor GYRO2 and an acceleration sensor ACC2. Of course, the sensor positions in fig. 6A are only schematic. In practical applications, a sensor disposed on the main screen may be placed on a circuit board inside the mobile phone below the main screen, a sensor disposed on the secondary screen may be placed on a circuit board inside the mobile phone below the secondary screen, or the sensors may be disposed at other positions corresponding to the flexible screen; this is not limited in the embodiments of the present application.
In fig. 6A, the output data of GYRO1 are the components, on three axes (i.e., the x, y, and z axes), of the angular velocity at which the main screen rotates; the output data of ACC1 are the components, on the three axes, of the acceleration at which the main screen moves; the output data of GYRO2 are the components, on the three axes, of the angular velocity at which the secondary screen rotates; and the output data of ACC2 are the components, on the three axes, of the acceleration at which the secondary screen moves.
Referring to fig. 6B, the hardware GYRO1, ACC1, GYRO2, and ACC2 respectively report the measured rotational angular velocity value of the main screen, acceleration value of the main screen, rotational angular velocity value of the secondary screen, and acceleration value of the secondary screen to the sensor driver of the kernel layer. It will be appreciated that the rotational angular velocity values and the acceleration values are vectors.
After the sensor drive acquires the original output data of the sensors on each screen, the gravity acceleration G1 of the main screen can be acquired through the output data of GYRO1 and ACC1, the gravity acceleration G2 of the auxiliary screen can be acquired through the output data of GYRO2 and ACC2, and the included angle alpha between the main screen and the auxiliary screen can be acquired through G1 and G2.
In one implementation, when the mobile phone is stationary in any posture, the measurement value of the acceleration sensor is the projection of the gravitational acceleration on three axes. When the mobile phone moves, the measurement value of the acceleration sensor is not only contributed by gravity, but also comprises linear acceleration brought by the movement; the acceleration of gravity under any attitude and any motion mode can be obtained by adopting an accelerometer and gyroscope (A + G) fusion algorithm.
For example, please refer to fig. 7, which is a schematic diagram of a six-axis A + G fusion algorithm, where a is the raw acceleration signal measured by the acceleration sensor and ω is the rotational angular velocity value measured by the gyroscope sensor. The six-axis A + G fusion algorithm has two core points. First, coordinate-rotation averaging is performed on the raw acceleration signal measured by the acceleration sensor according to rotational invariance (RIOT), yielding a combined acceleration signal in which the linear acceleration is preliminarily suppressed; after RIOT processing, the signal mainly consists of the projections of the gravitational acceleration on the three axes of the carrier coordinate system (b-frame). Second, the RIOT-preprocessed three-axis acceleration signal is taken as the observation and, combined with a time-update equation, Kalman filtering is used to further extract the gravitational acceleration. For a specific implementation of the six-axis A + G fusion algorithm, reference may be made to descriptions of the conventional technology, and details are not repeated here.
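To make the propagate-then-correct structure of such a gravity extractor concrete, the sketch below substitutes a much simpler complementary-filter-style estimator for the patent's RIOT + Kalman stages. It is not the patented algorithm: the function name, sample interval, and blend gain are all assumptions, and the small-angle body-frame rotation stands in for the full time-update equation:

```python
import numpy as np

def gravity_estimate(acc, gyro, g_prev, dt=0.01, k=0.02):
    """One step of a simplified gravity estimator in the body frame.

    acc    -- accelerometer reading (m/s^2, 3-vector)
    gyro   -- gyroscope angular rate (rad/s, 3-vector)
    g_prev -- previous gravity estimate (m/s^2, 3-vector)
    dt, k  -- sample interval and blend gain (assumed values)
    """
    g_prev = np.asarray(g_prev, dtype=float)
    # Time update: a world-fixed vector seen from a rotating body frame
    # evolves as dg/dt = -omega x g (small-angle approximation).
    g_pred = g_prev - np.cross(np.asarray(gyro, dtype=float), g_prev) * dt
    # Measurement update: blend in the accelerometer, which measures
    # gravity exactly when linear acceleration is absent.
    return (1.0 - k) * g_pred + k * np.asarray(acc, dtype=float)
```

Called once per sensor sample for each screen's (GYRO, ACC) pair, this yields running estimates of G1 and G2; the Kalman filter in the actual algorithm additionally weights the blend by the estimated noise of each source.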
The sensor drive can adopt a six-axis A + G fusion algorithm, and the magnitude and the direction of the gravity acceleration G1 are obtained through the measurement values of GYRO1 and ACC 1; the magnitude and direction of the gravitational acceleration G2 are obtained from the measurement values of GYRO2 and ACC 2.
Further, the sensor drive also obtains the included angle alpha between the main screen and the auxiliary screen through the gravity acceleration G1 of the main screen and the gravity acceleration G2 of the auxiliary screen.
Referring to fig. 8A, corresponding coordinate systems may be respectively disposed on the main screen and the sub screen. For example, a coordinate system O1 may be provided in the main screen, with the X-axis of coordinate system O1 being parallel to the shorter side of the main screen, the Y-axis being parallel to the longer side of the main screen (fold line AB), and the Z-axis being directed out of the main screen perpendicular to the plane formed by the X-axis and the Y-axis. Similarly, a coordinate system O2 may be provided in the sub-screen, with the X-axis of coordinate system O2 being parallel to the shorter side of the sub-screen, the Y-axis being parallel to the longer side of the sub-screen (fold line AB), and the Z-axis being directed out of the sub-screen perpendicular to the plane formed by the X-axis and the Y-axis.
Referring to fig. 8B (a), the projections of G1 on the three axes of the O1 coordinate system are (Gx1, Gy1, Gz1), and the projections of G2 on the three axes of the O2 coordinate system are (Gx2, Gy2, Gz2). The main screen and the secondary screen share the same Y axis, so the components of G1 and G2 on the Y axis are the same, which means the magnitudes of their components on the XOZ plane of the respective coordinate systems are also the same; that is, the projection Gxz of G1 on the X1O1Z1 plane is equal in magnitude to the projection G'xz of G2 on the X2O2Z2 plane.
The O1 coordinate system and the O2 coordinate system form an included angle, and the relative rotation between the two coordinate systems causes the same vector to have different components in the two systems. This can equivalently be regarded as the vector rotating in the opposite direction by the corresponding angle in a fixed coordinate system, i.e., from Gxz to G'xz in fig. 8B (b). The angle through which Gxz rotates to G'xz in fig. 8B (b) is the included angle α between the main screen and the secondary screen.
The angle through which Gxz rotates to G'xz can be calculated by the following formula:
α = arccos((Gxz·G'xz)/(|Gxz|·|G'xz|)) = arccos((Gx1·Gx2 + Gz1·Gz2)/(√(Gx1² + Gz1²)·√(Gx2² + Gz2²)))
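As a numerical check of this included-angle formula, the following sketch computes α from the two gravity vectors' X and Z components. The function name is illustrative; the clamp is an added guard against floating-point values just outside [-1, 1]:

```python
import math

def hinge_angle(g1, g2):
    """Included angle (degrees) between the two screens, from the X and Z
    gravity components of each screen, per the arccos formula above.
    g1 and g2 are (Gx, Gy, Gz) tuples."""
    gx1, _, gz1 = g1
    gx2, _, gz2 = g2
    dot = gx1 * gx2 + gz1 * gz2                          # Gxz . G'xz
    norm = math.hypot(gx1, gz1) * math.hypot(gx2, gz2)   # |Gxz| * |G'xz|
    # Clamp to guard against floating-point values just outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```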
After the sensor driver of the kernel layer obtains the included angle α between the main screen and the secondary screen, it uploads α, the gravitational acceleration value G1 of the main screen, and the gravitational acceleration value G2 of the secondary screen to the sensor service process of the hardware abstraction layer. The sensor service process of the hardware abstraction layer may pass α, G1, and G2 to the sensor manager of the application framework layer.
The sensor manager can acquire the posture of the mobile phone according to the included angle alpha between the main screen and the auxiliary screen, the gravity acceleration value G1 of the main screen and the gravity acceleration value G2 of the auxiliary screen.
Illustratively, the components of the main screen gravitational acceleration value G1 on the three axes of the O1 coordinate system are (Gx1, Gy1, Gz1), the components of the secondary screen gravitational acceleration value G2 on the three axes of the O2 coordinate system are (Gx2, Gy2, Gz2), and the included angle between the main screen and the secondary screen is α.
If the mobile phone determines that α is greater than a first threshold (for example, 165°) and less than or equal to 180°, it determines that the mobile phone is in the unfolded state, and the posture of the mobile phone is the large-screen unfolded state.
If the mobile phone determines that α is greater than or equal to 0° and less than or equal to a second threshold (for example, 20°), it may determine that the mobile phone is in the folded state. If 0 < Gz1 <= g and -g <= Gz2 < 0, the posture of the mobile phone is determined to be the standing main-screen-up state; if -g <= Gz1 < 0 and 0 < Gz2 <= g, the posture is determined to be the standing secondary-screen-up state. For example, if Gz1 = g, Gz2 = -g, and Gx1 = Gy1 = Gx2 = Gy2 = 0, the mobile phone is determined to be lying flat in the standing main-screen-up state; if Gz1 = -g, Gz2 = g, and Gx1 = Gy1 = Gx2 = Gy2 = 0, the mobile phone is determined to be lying flat in the standing secondary-screen-up state. Here, g is the gravitational acceleration value of the earth.
If the mobile phone determines that α is greater than the second threshold (for example, 20°) and less than or equal to the first threshold (for example, 165°), it may determine that the mobile phone is in the support state. If 0 < Gz1 <= g, Gz2 = -g, and Gx2 = Gy2 = 0, the posture of the mobile phone is determined to be the single-support main screen state; if 0 < Gz2 <= g, Gz1 = -g, and Gx1 = Gy1 = 0, the posture is determined to be the single-support secondary screen state; if Gy1 = Gy2 = 0, 0 < Gz1 < g, 0 < Gz2 < g, -g <= Gx1 < 0, and 0 < Gx2 < g, the posture is determined to be the double-support landscape state; if Gx1 = Gx2 = Gz1 = Gz2 = 0 and Gy1 = Gy2 = g, the posture is determined to be the double-support portrait state.
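The threshold decisions described in the last three paragraphs can be sketched as one classifier. This is an illustration only: the posture labels are invented names, and the tolerance eps is an added assumption because real sensor values never satisfy exact equalities such as Gz2 = -g:

```python
def classify_posture(alpha, g1, g2, g=9.8, t1=165.0, t2=20.0, eps=0.3):
    """Posture decision sketch. alpha in degrees; g1/g2 are (Gx, Gy, Gz)
    gravity components of the main/secondary screens. t1, t2 per the text;
    eps is an illustrative tolerance replacing exact equality tests."""
    gx1, gy1, gz1 = g1
    gx2, gy2, gz2 = g2
    if t1 < alpha <= 180.0:
        return "unfolded_large_screen"
    if 0.0 <= alpha <= t2:                       # folded state
        if 0 < gz1 <= g and -g <= gz2 < 0:
            return "standing_main_up"
        if -g <= gz1 < 0 and 0 < gz2 <= g:
            return "standing_secondary_up"
        return "folded_other"
    # t2 < alpha <= t1: support state.  |G| = g, so gz ~= -g already
    # implies the other two components of that screen are near zero.
    if 0 < gz1 <= g and abs(gz2 + g) < eps:
        return "single_support_main"
    if 0 < gz2 <= g and abs(gz1 + g) < eps:
        return "single_support_secondary"
    if abs(gy1) < eps and abs(gy2) < eps and 0 < gz1 < g and 0 < gz2 < g:
        return "double_support_landscape"
    if abs(gy1 - g) < eps and abs(gy2 - g) < eps:
        return "double_support_portrait"
    return "support_other"
```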
In some embodiments, the mobile phone can also determine the posture of the mobile phone by combining the included angle α between the main screen and the auxiliary screen, the gravity acceleration G1 of the main screen and the gravity acceleration G2 of the auxiliary screen with the holding posture of the user for holding the mobile phone.
For example, the mobile phone may determine, by using a preset grip algorithm, a grip gesture of the user gripping the mobile phone currently through the current touch position of the user reported by the touch device. Exemplarily, the mobile phone determines the touch areas of the user on the main screen and the auxiliary screen according to the coordinates of the touch points reported by the touch control device; if the touch area of the user on the main screen is larger than that on the auxiliary screen, the auxiliary screen of the mobile phone can be determined to face the user; if the touch area of the user on the main screen is smaller than the touch area on the auxiliary screen, the main screen of the mobile phone can be determined to face the user. The specific gripping algorithm can refer to the description in the conventional technology, and is not described in detail here. Of course, the handset may also incorporate other sensors to identify the particular orientation of the handset. For example, the mobile phone may turn on a camera to detect whether face information is captured while recognizing a user's grip gesture using a grip algorithm. This is not limited in this application.
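The touch-area comparison at the heart of the grip decision above can be sketched in a few lines. The function name and the "unknown" fallback are illustrative additions; a real grip algorithm would also weigh touch-point geometry:

```python
def facing_screen(main_touch_area, secondary_touch_area):
    """Infer which screen faces the user from grip contact areas: the palm
    covers the screen facing away, so the larger touch area marks the
    back-facing screen. Simplified sketch of the comparison in the text."""
    if main_touch_area > secondary_touch_area:
        return "secondary"   # main screen gripped from behind
    if main_touch_area < secondary_touch_area:
        return "main"        # secondary screen gripped from behind
    return "unknown"         # equal areas: fall back to other sensors
```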
For example, if the mobile phone determines that it is in the folded state according to the value of α and determines that the main screen faces the user according to the user's grip gesture, the posture of the mobile phone may be determined to be the folded handheld state with the main screen facing the user. If the mobile phone determines that it is in the folded state according to the value of α and determines that the auxiliary screen faces the user according to the user's grip gesture, the posture may be determined to be the folded handheld state with the auxiliary screen facing the user. Similarly, if the mobile phone determines that it is in the support state according to the value of α and determines that the main screen faces the user according to the user's grip gesture, the posture may be determined to be the support-held state with the main screen facing the user; if it determines the support state and that the auxiliary screen faces the user, the posture may be determined to be the support-held state with the auxiliary screen facing the user.
The sensor manager of the application framework layer determines the posture of the mobile phone and may report it to the applications of the application layer. For example, upon receiving a first operation of the user, the smart screen on/off application may obtain the posture of the mobile phone from the sensor manager of the application framework layer and determine the manner of lighting the screen according to that posture.
It is understood that the above-mentioned software and hardware layers of the mobile phone and the division of each functional module are only exemplary illustrations. In actual implementation, other implementations are also possible. For example, after acquiring data reported by a sensor service of the hardware abstraction layer, a sensor manager of the application framework layer may transmit the acquired data to the gesture recognition module of the application framework layer. And the gesture recognition module determines the gesture of the mobile phone and reports the gesture of the mobile phone to the application program. This is not limited in the examples of the present application.
One such implementation is shown in fig. 9. The sensor devices respectively report their detected data to the sensor driver. The algorithm module of the sensor driver calculates the three-axis components of the fused output data of ACC1 and GYRO1 and the three-axis components of the fused output data of ACC2 and GYRO2, and computes the relative angle and attitude between the primary and secondary screens. The upper-layer application can thus obtain the relative angle and attitude between the main screen and the auxiliary screen.
In some embodiments, the sensor manager may also directly report the acquired data, such as the included angle between the main screen and the auxiliary screen, the gravitational acceleration of the main screen, the gravitational acceleration of the auxiliary screen, and the like, to the application layer. The application program of the application program layer can obtain and directly apply data such as an included angle between the main screen and the auxiliary screen, the gravity acceleration of the main screen, the gravity acceleration of the auxiliary screen and the like.
In some embodiments, the sensor manager may determine the attitude of the main screen according to Gx1, Gy1, and Gz1, and determine the attitude of the auxiliary screen according to Gx2, Gy2, and Gz2; it may then report the attitude of the main screen and the attitude of the auxiliary screen to the application layer. An application of the application layer can obtain and directly use data such as the attitude of the main screen and the attitude of the auxiliary screen.
Illustratively, if the sensor manager determines that Gx1 = Gy1 = 0 and Gz1 = g, it determines that the main screen is in the horizontal, facing-up attitude; if the sensor manager determines that Gx2 = Gy2 = 0 and Gz2 = −g, it determines that the auxiliary screen is in the horizontal, facing-down attitude. Therefore, the mobile phone can be determined to be lying flat with the main screen facing upward.
Illustratively, if the sensor manager determines that Gx1 = Gy1 = 0 and Gz1 = g, it determines that the main screen is in the horizontal, facing-up attitude; if the sensor manager determines that −g ≤ Gx2 < 0, Gy2 = 0, and 0 < Gz2 < g, the attitude of the auxiliary screen may be determined to be tilted upward.
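The per-screen attitude examples can be sketched as a classifier over one screen's gravity components. As before, the tolerance and function name are assumptions for illustration:

```python
G = 9.81   # magnitude of gravitational acceleration g, m/s^2
EPS = 0.5  # tolerance for comparing measured components (assumed)

def screen_attitude(gx, gy, gz):
    """Classify a single screen's attitude from the components of its
    gravity vector, following the examples in the description."""
    if abs(gx) <= EPS and abs(gy) <= EPS and abs(gz - G) <= EPS:
        return "horizontal, facing up"      # gravity entirely along +z
    if abs(gx) <= EPS and abs(gy) <= EPS and abs(gz + G) <= EPS:
        return "horizontal, facing down"    # gravity entirely along -z
    if -G <= gx < 0 and abs(gy) <= EPS and 0 < gz < G:
        return "tilted upward"              # gravity split between -x and +z
    return "other"
```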
According to the method for applying multiple sensors to an electronic device with a flexible screen provided in the embodiments of this application, the electronic device may be provided with a plurality of sensors so as to obtain its posture. An application of the electronic device may then perform a corresponding action according to that posture. For example, the smart screen on/off application may determine the manner of lighting the screen according to the posture of the mobile phone. Compared with always lighting one fixed screen, lighting the screen adaptively according to the posture brings a better experience to the user.
In practical applications, the method for applying multiple sensors to an electronic device with a flexible screen provided in the embodiments of this application can be applied in various applications of a mobile phone, for example: switching between the large screen and the small screen, screen opening/closing animation effects, the collaborative mode, and flipping the phone to switch screens.
In some embodiments, if the mobile phone detects an operation of the user unfolding the flexible screen, a screen-opening animation may be displayed on the flexible screen. If the mobile phone detects an operation of the user folding the flexible screen, a screen-closing animation may be displayed on the flexible screen. For example, when the mobile phone detects the operation of the user unfolding the flexible screen, the flexible screen displays the welcome message "Good morning!"; when the mobile phone detects the operation of the user folding the flexible screen, the flexible screen displays "Goodbye!".
For example, the sensors of the hardware layer report their respective measurement data to the kernel layer. The sensor driver of the kernel layer obtains fused data, such as the included angle between the main screen and the auxiliary screen, the gravitational acceleration of the main screen, and the gravitational acceleration of the auxiliary screen, from the output data of each sensor, and reports the obtained fused data to the application framework layer through the hardware abstraction layer. The sensor manager of the application framework layer determines the physical form and the posture of the mobile phone according to the obtained fused data.
Furthermore, the sensor manager can also determine the change of the physical form of the mobile phone according to the physical form of the mobile phone; the change of the posture of the mobile phone can be determined according to the posture of the mobile phone.
For example, if the sensor manager determines that the mobile phone has changed from the folded state to the support state, from the support state to the unfolded state, or from the folded state to the unfolded state, it determines that an operation of the user unfolding the flexible screen has been received; the sensor manager then triggers the screen opening/closing animation of the application layer to display the screen-opening animation on the flexible screen. Conversely, if the sensor manager determines that the mobile phone has changed from the unfolded state to the support state, from the support state to the folded state, or from the unfolded state to the folded state, it determines that an operation of the user folding the flexible screen has been received; the sensor manager then triggers the screen opening/closing animation of the application layer to display the screen-closing animation on the flexible screen.
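The state-transition rule above reduces to comparing how "open" the new physical form is relative to the previous one. A minimal sketch, with an assumed numeric encoding of the three states:

```python
# Physical forms ordered from fully folded to fully unfolded;
# this encoding is an assumption for illustration.
ORDER = {"folded": 0, "support": 1, "unfolded": 2}

def screen_animation(prev_state, new_state):
    """Map a change of the phone's physical form to the animation that
    the sensor manager triggers on the flexible screen."""
    if ORDER[new_state] > ORDER[prev_state]:
        return "screen-opening animation"   # user unfolded the flexible screen
    if ORDER[new_state] < ORDER[prev_state]:
        return "screen-closing animation"   # user folded the flexible screen
    return None                             # no change of physical form
```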
In some embodiments, an electronic device with a flexible screen may operate in a collaborative mode. In the collaborative mode, the application program displayed on the main screen and the application program displayed on the auxiliary screen are simultaneously operated in the foreground. In the embodiment of the application, an application program displayed on a main screen is called a first application, and an application program displayed on an auxiliary screen is called a second application; it is to be understood that the application program displayed on the main screen may be referred to as a second application, and the application program displayed on the sub screen may be referred to as a first application.
The embodiment of the application provides a method for applying a multi-sensor to electronic equipment with a flexible screen, which is applied to a scene that the electronic equipment works in a collaborative mode.
In some embodiments, the first application displayed on the primary screen and the second application displayed on the secondary screen may simultaneously receive output data of a plurality of sensors disposed on the respective screens. For example, a first application displayed on the main screen may receive data such as an angular velocity, a gravitational acceleration, and an attitude of the main screen; and changes the display contents of the first application according to the data of the sensor on the home screen. Meanwhile, a second application displayed on the auxiliary screen can receive data such as the angular velocity, the gravitational acceleration, the posture and the like of the rotation of the auxiliary screen; and changes the display contents of the second application according to the data of the sensor on the sub-screen. In this way, the first application and the second application implement a cooperative work. Alternatively, the first application displayed on the main screen and the second application displayed on the sub-screen may be the same application program. It will be appreciated that in one implementation, the same application runs on the primary screen and the secondary screen, respectively, with the first application running on the primary screen and the second application running on the secondary screen. In another implementation, an application program is run on the electronic device, and the first application is displayed on the primary screen and the second application is displayed on the secondary screen. Alternatively, the first application and the second application may be different application programs.
In other embodiments, the first application displayed on the primary screen or the second application displayed on the secondary screen may receive output data of a plurality of sensors disposed on the primary screen and the secondary screen. For example, a first application displayed on the main screen (or a second application displayed on the sub-screen) may receive data of angular velocity, gravitational acceleration, attitude, and the like of rotation of the main screen and the sub-screen. Optionally, the first application displayed on the main screen may further receive display content of a second application displayed on the secondary screen, and the second application displayed on the secondary screen may further receive display content of the first application displayed on the main screen. In this way, the first application may change the display content of the first application according to the received data of the sensor on the sub-screen or the display content of the second application; the second application may change the display content of the second application according to the received data of the sensor on the home screen or the display content of the first application; the first application and the second application implement cooperative work. Alternatively, the first application displayed on the main screen and the second application displayed on the sub-screen may be the same application program. And may even be different interfaces of the same application. Alternatively, the first application and the second application may be different application programs.
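The two delivery schemes described above (each application sees only its own screen's sensors, or both applications see both screens) can be sketched as a small dispatcher. The class and parameter names are hypothetical:

```python
class CooperativeRouter:
    """Dispatch sensor samples to the first and second applications in
    collaborative mode. shared=False models the first scheme, where each
    application receives only the sensors of its own screen; shared=True
    models the second, where both applications receive both screens."""

    def __init__(self, shared=False):
        self.shared = shared
        self.delivered = {"first": [], "second": []}

    def on_sample(self, screen, sample):
        # screen: "main" or "auxiliary"; sample: arbitrary sensor payload
        if self.shared:
            self.delivered["first"].append((screen, sample))
            self.delivered["second"].append((screen, sample))
        elif screen == "main":
            self.delivered["first"].append((screen, sample))
        else:
            self.delivered["second"].append((screen, sample))
```

For example, with `shared=False`, a gyroscope sample from the auxiliary screen reaches only the second application, matching the per-screen scheme.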
Illustratively, as shown in fig. 10A, an application displayed on the main screen and an application displayed on the sub-screen are simultaneously foreground-run.
For example, when a plurality of people are in a video conference, one person uses the main screen to access a video conference application program displayed on the main screen; one person uses the auxiliary screen to access the video conference application program displayed on the auxiliary screen.
For another example, when a plurality of people play a game simultaneously, the main screen and the auxiliary screen respectively display the application program of the game, one person uses the main screen to control the game, and the other person uses the auxiliary screen to control the game.
The game application displayed on the main screen can receive first data of the sensor arranged on the main screen and change the display content of the game application displayed on the main screen according to the first data; meanwhile, the game application displayed on the sub-screen may receive second data of the sensor provided on the sub-screen and change display contents of the game application displayed on the sub-screen according to the second data. In some scenes, a gyroscope sensor and an acceleration sensor are respectively arranged on a main screen and an auxiliary screen, a game application displayed on the main screen can receive data of the gyroscope sensor and the acceleration sensor on the main screen, the posture change of the main screen is determined according to the data of the gyroscope sensor and the acceleration sensor on the main screen, and the display content of the game application displayed on the main screen is changed according to the posture change of the main screen; the game application displayed on the secondary screen can receive data of the gyroscope sensor and the acceleration sensor on the secondary screen, determine the posture change of the secondary screen according to the data of the gyroscope sensor and the acceleration sensor on the secondary screen, and change the display content of the game application displayed on the secondary screen according to the posture change of the secondary screen. 
In other scenes, the main screen and the auxiliary screen are respectively provided with a camera, the game application displayed on the main screen can receive data acquired by the camera on the main screen, the control action or control gesture and the like of the game application on the main screen by a user are determined according to the data acquired by the camera on the main screen, and the display content of the game application displayed on the main screen is changed according to the control action or control gesture of the game application on the main screen by the user; the game application displayed on the secondary screen can receive the data collected by the camera on the secondary screen, determine the control action or control gesture and the like of the game application on the secondary screen by the user according to the data collected by the camera on the secondary screen, and change the display content of the game application displayed on the secondary screen according to the control action or control gesture of the game application on the secondary screen by the user.
The game application displayed on the main screen can also receive second data of the sensor arranged on the auxiliary screen, or can also receive display content of the game application displayed on the auxiliary screen; the game application displayed on the main screen changes the display content of the game application displayed on the main screen according to the second data or the display content of the game application displayed on the sub-screen. The game application displayed on the secondary screen can also receive first data of a sensor arranged on the main screen, or can also receive display content of the game application displayed on the main screen; the game application displayed on the sub screen changes the display content of the game application displayed on the sub screen according to the first data or the display content of the game application displayed on the main screen. In some scenes, a gyroscope sensor and an acceleration sensor are respectively arranged on a main screen and an auxiliary screen, a game application displayed on the main screen can receive data of the gyroscope sensor and the acceleration sensor on the auxiliary screen, the posture change of the auxiliary screen is determined according to the data of the gyroscope sensor and the acceleration sensor on the auxiliary screen, and the display content of the game application displayed on the main screen is changed according to the posture change of the auxiliary screen; the game application displayed on the secondary screen can receive data of the gyroscope sensor and the acceleration sensor on the main screen, determine the posture change of the main screen according to the data of the gyroscope sensor and the acceleration sensor on the main screen, and change the display content of the game application displayed on the secondary screen according to the posture change of the main screen. 
In other scenes, the main screen and the auxiliary screen are respectively provided with a camera, the game application displayed on the main screen can receive data collected by the camera on the auxiliary screen, the control action or control gesture and the like of the game application on the auxiliary screen by a user are determined according to the data collected by the camera on the auxiliary screen, and the display content of the game application displayed on the main screen is changed according to the control action or control gesture of the game application on the auxiliary screen by the user; the game application displayed on the secondary screen can receive the data collected by the camera on the main screen, determine the control action or control gesture and the like of the game application on the main screen by the user according to the data collected by the camera on the main screen, and change the display content of the game application displayed on the secondary screen according to the control action or control gesture of the game application on the main screen by the user.
Illustratively, as shown in fig. 10B, a camera is mounted on the secondary screen of the electronic device. The user opens the photographing function to photograph a baby, and the main screen displays the photographing preview interface. To attract the baby's attention, a cartoon turtle is displayed on the auxiliary screen. The first application displayed on the main screen and the second application displayed on the auxiliary screen run in the foreground at the same time, where the first application is a photographing application and the second application is a video playing application or a picture display application. It is to be understood that the second application may also be a photographing-type application; for example, the secondary screen may display a photograph taken by the photographing function.
In one example, an ambient light sensor is disposed on the main screen, and the first application may receive data of the ambient light sensor disposed on the main screen and adjust the white balance of the photo preview interface according to the data of the ambient light sensor.
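The patent does not specify how white balance is derived from the ambient-light data; the toy sketch below only illustrates the general idea of adjusting red/blue channel gains from a measured colour temperature. The 6500 K daylight pivot, the linear scaling, and the gain floor are invented for illustration:

```python
def white_balance_gains(color_temp_k):
    """Rough white-balance sketch: warm ambient light (low colour
    temperature) needs the blue channel boosted, cool light (high
    colour temperature) needs the red channel boosted."""
    d = (color_temp_k - 6500.0) / 6500.0   # signed deviation from daylight
    r_gain = max(0.5, 1.0 + d)             # boost red under cool light
    b_gain = max(0.5, 1.0 - d)             # boost blue under warm light
    return round(r_gain, 3), round(b_gain, 3)
```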
In one example, a gyro sensor and an acceleration sensor are disposed on the sub-screen, and the second application may receive data of the gyro sensor and the acceleration sensor disposed on the sub-screen, determine a change in the posture of the sub-screen according to the data of the gyro sensor and the acceleration sensor on the sub-screen, and change the contents of the picture displayed on the sub-screen according to the change in the posture of the sub-screen.
For example, as shown in fig. 10C, the first application displayed on the home screen is a payment application, and the second application is a picture display page in the first application. For example, the main screen displays a payment interface of the payment application; when the user taps the displayed two-dimensional code picture, the two-dimensional code picture in the payment interface is automatically displayed on the auxiliary screen. Illustratively, when the mobile phone is in the folded state, the main screen displays the payment interface and the auxiliary screen is in the screen-off state (i.e., the second application does not display its content). If the mobile phone detects the second operation of the user, it lights up the auxiliary screen and displays the two-dimensional code picture of the payment interface on the auxiliary screen. The user can also tap "pay", whereupon the two-dimensional code picture is automatically displayed on the auxiliary screen. In this way, the two-dimensional code picture can be conveniently shown on the auxiliary screen without interrupting the user's task on the main screen.
In one example, a gyroscope sensor is arranged on the main screen, and the second application can receive data of the gyroscope sensor arranged on the main screen and obtain the rotation angular speed of the main screen; and if the rotation angular speed of the main screen is determined to be greater than 0, determining that the second operation of the user is received (the second operation of the user is an operation of rotating the main screen).
In one example, a gyroscope sensor is arranged on the secondary screen, and the second application can receive data of the gyroscope sensor arranged on the secondary screen and obtain the rotation angular speed of the secondary screen; if it is determined that the rotational angular velocity of the sub-screen is greater than 0, it is determined that the second operation by the user is received (the second operation by the user is an operation of rotating the sub-screen).
In one example, a touch sensor is disposed on the home screen, the second application may receive an output of the touch sensor disposed on the home screen, and determine that a first gesture (such as a double-click operation) of the user on the home screen is received according to the output of the touch sensor, and then determine that a second operation of the user is received.
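The three example triggers for the second operation (rotation of the main screen, rotation of the auxiliary screen, or a first gesture such as a double tap on the main screen) can be combined into one predicate. The function signature and gesture label are assumptions:

```python
def second_operation(gyro_main=None, gyro_auxiliary=None, touch_gesture=None):
    """Return True if any of the example triggers for the user's second
    operation is observed: a non-zero rotational angular velocity of the
    main screen, a non-zero rotational angular velocity of the auxiliary
    screen, or a first gesture (e.g. a double tap) on the main screen."""
    if gyro_main is not None and gyro_main > 0:
        return True   # user is rotating the main screen
    if gyro_auxiliary is not None and gyro_auxiliary > 0:
        return True   # user is rotating the auxiliary screen
    return touch_gesture == "double_tap"
```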
Each screen of the mobile phone may be provided with a plurality of sensors. For example, at least one gyroscope (GYRO) sensor, at least one accelerometer (ACC), and/or at least one magnetometer (MAG) may be disposed on the mobile phone. The mobile phone may further include at least one of a proximity light sensor, an ambient light sensor, or a touch sensor. For example, as shown in fig. 11A, the main screen of the mobile phone is provided with a gyroscope sensor GYRO1, an acceleration sensor ACC1, and a magnetometer MAG1, and the auxiliary screen is provided with a gyroscope sensor GYRO2, an acceleration sensor ACC2, and a magnetometer MAG2. Of course, the sensor positions in fig. 11A are only schematic. In practical applications, a sensor described as disposed on the main screen may be disposed on a circuit board inside the mobile phone below the main screen; a sensor described as disposed on the auxiliary screen may be disposed on a circuit board inside the mobile phone below the auxiliary screen; alternatively, the sensors may be disposed at other locations corresponding to the flexible screen. This is not limited in the embodiments of this application.
In fig. 11A, the output data of GYRO1 is the components of the angular velocity of the main screen rotation in three axes (i.e., x, y, and z axes); the output data of ACC1 is the components of the acceleration of the main screen in three axes, and the output data of MAG1 is magnetic flux 1; the output data of GYRO2 is the component of the angular velocity of the sub-screen rotation in three axes, the output data of ACC2 is the component of the sub-screen acceleration in three axes, and the output data of MAG2 is the magnetic flux 2.
Referring to fig. 11B, the hardware sensors GYRO1, ACC1, MAG1, GYRO2, ACC2, and MAG2 respectively report the measured main-screen rotational angular velocity value, main-screen acceleration value, magnetic flux 1, auxiliary-screen rotational angular velocity value, auxiliary-screen acceleration value, and magnetic flux 2 to the sensor driver of the kernel layer. It will be appreciated that the rotational angular velocity, acceleration, and magnetic flux values are all vectors, i.e., they have both magnitude and direction.
After the sensor driver obtains the raw output data of the sensors on each screen, it can obtain the gravitational acceleration G1 of the main screen from the output data of GYRO1 and ACC1, and the gravitational acceleration G2 of the auxiliary screen from the output data of GYRO2 and ACC2. For the method by which the mobile phone obtains the gravitational acceleration from the acceleration values and the rotational angular velocity values, reference may be made to the foregoing description, and details are not repeated here.
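The patent defers the acc + gyro fusion method to the foregoing description. One common way to obtain a gravity estimate from those two sensors is a complementary filter, sketched below as an illustration only — the blend factor `alpha`, the time step `dt`, and the small-angle rotation update are assumptions, not the patent's algorithm:

```python
def gravity_filter(samples, dt=0.01, alpha=0.98):
    """Fuse accelerometer and gyroscope output into a gravity estimate
    for one screen. Each sample is ((ax, ay, az), (wx, wy, wz)): the
    gyroscope propagates the previous gravity estimate through the
    measured rotation, and the accelerometer slowly corrects drift."""
    gx, gy, gz = samples[0][0]  # initialise from the first accelerometer sample
    for (ax, ay, az), (wx, wy, wz) in samples[1:]:
        # In the sensor frame, gravity appears to rotate opposite to the
        # sensor's own rotation: g' = g - (w x g) * dt (small-angle update).
        cx = wy * gz - wz * gy
        cy = wz * gx - wx * gz
        cz = wx * gy - wy * gx
        gx, gy, gz = gx - cx * dt, gy - cy * dt, gz - cz * dt
        # Blend in the accelerometer measurement to correct gyro drift.
        gx = alpha * gx + (1 - alpha) * ax
        gy = alpha * gy + (1 - alpha) * ay
        gz = alpha * gz + (1 - alpha) * az
    return gx, gy, gz
```

For a screen lying still and flat, the estimate settles on (0, 0, g), matching the facing-up attitude example.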
The sensor driver reports the raw output data of the sensors on the screens and the fusion data (such as the gravity acceleration) acquired by the sensor driver according to the raw output data of the sensors to a sensor manager of an application framework layer through a sensor service process of a hardware abstraction layer.
An application at the application layer, such as an application operating in a collaborative mode, may obtain output data of sensors and fused data of the sensors from a sensor manager at the application framework layer.
In some embodiments, the applications displayed on both screens may receive the output data and the fused data of the respective on-screen sensors simultaneously and separately. For example, the application displayed on the main screen receives the gravitational acceleration of the main screen, and the application displayed on the auxiliary screen receives the gravitational acceleration of the auxiliary screen, so that the cooperative work of the main screen and the auxiliary screen is realized.
In other embodiments, the application displayed on both screens may receive the output data and the fused data of the sensors on both screens, respectively. For example, the application displayed on the main screen may receive the gravitational acceleration of the main screen and the gravitational acceleration of the auxiliary screen, and the application displayed on the auxiliary screen may receive the gravitational acceleration of the main screen and the gravitational acceleration of the auxiliary screen, thereby implementing cooperative work of the main screen and the auxiliary screen.
In one implementation, the sensor driver maintains a sensor index table, illustratively, as in table 1, including screen identification, sensor type, and sensor type values.
TABLE 1

Screen identification | Sensor type | Sensor type value
A screen | ACC | 1
A screen | GYRO | 2
A screen | MAG | 3
A screen | GRAVITY | 4
B screen | ACC | 11
B screen | GYRO | 12
B screen | MAG | 13
B screen | GRAVITY | 14
C screen | ACC | 21
C screen | GYRO | 22
The sensors on each screen of the handset can be uniformly numbered and represented by sensor type values, and an application may identify a sensor by its sensor type value. Illustratively, the flexible screen of the handset is folded into three screens: a first screen, a second screen, and a third screen. The first screen is provided with four sensors: ACC1, GYRO1, MAG1, and GRAVITY1; the second screen is provided with four sensors: ACC2, GYRO2, MAG2, and GRAVITY2; the third screen is provided with two sensors: ACC3 and GYRO3. The screen identification of the first screen is the A screen, that of the second screen is the B screen, and that of the third screen is the C screen. As in table 1, the ACC sensor on the first screen is ACC1, with a sensor type value of 1; the GYRO sensor on the first screen is GYRO1, with a sensor type value of 2; the MAG sensor on the first screen is MAG1, with a sensor type value of 3; the GRAVITY sensor on the first screen is GRAVITY1, with a sensor type value of 4. The ACC sensor on the second screen is ACC2, with a sensor type value of 11; the GYRO sensor on the second screen is GYRO2, with a sensor type value of 12; the MAG sensor on the second screen is MAG2, with a sensor type value of 13; the GRAVITY sensor on the second screen is GRAVITY2, with a sensor type value of 14. The ACC sensor on the third screen is ACC3, with a sensor type value of 21; the GYRO sensor on the third screen is GYRO3, with a sensor type value of 22.
It can be understood that the above-mentioned sensor index table is described by taking the flexible screen of the mobile phone as an example of being folded into three screens, and in practical applications, the flexible screen of the mobile phone may be folded into fewer or more screens, which is not limited in this embodiment of the application.
It should be noted that a sensor type may correspond to a physical sensor, such as a gyroscope sensor, an acceleration sensor, or a magnetometer; it may also correspond to a virtual sensor. For example, the gravitational-acceleration fused data that the sensor driver calculates from the output data of the gyroscope sensor and the acceleration sensor may be regarded as the output data of a virtual gravity acceleration sensor, whose sensor type is GRAVITY.
The sensor driver may provide a query interface that obtains a sensor type value from a screen identification and a sensor type. For example, the query interface function provided by the sensor driver is Get_sensor_type. To obtain the sensor type value of the ACC on the A screen, the query interface function Get_sensor_type(A, ACC) is called; its return value is 1, so the sensor type value of the ACC on the A screen is 1, and the data obtained from sensor No. 1 is the output data of the ACC on the A screen. To obtain the sensor type value of the GYRO on the B screen, the query interface function Get_sensor_type(B, GYRO) is called; its return value is 12, so the sensor type value of the GYRO on the B screen is 12, and the data obtained from sensor No. 12 is the output data of the GYRO on the B screen. Of course, the sensor index table may also be stored in the application framework layer, with the sensor manager in the application framework layer providing the query interface function. This is not limited in the embodiments of this application.
The application may call the query interface function to obtain the sensor type value of a specified sensor and thus receive the output data of that sensor from the correspondingly numbered sensor. For example, the application may call Get_sensor_type(A, GRAVITY) to obtain the sensor type value, 4, of the virtual gravity acceleration sensor on the main screen; the output data of sensor No. 4 can then be taken as the gravitational acceleration of the main screen.
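The index table and query interface can be sketched in a few lines; the dictionary layout is an illustrative Python counterpart to Table 1 and the Get_sensor_type function, not the driver's actual data structure:

```python
# Sensor index table from Table 1; keys are (screen identification, sensor type).
SENSOR_INDEX = {
    ("A", "ACC"): 1,  ("A", "GYRO"): 2,  ("A", "MAG"): 3,  ("A", "GRAVITY"): 4,
    ("B", "ACC"): 11, ("B", "GYRO"): 12, ("B", "MAG"): 13, ("B", "GRAVITY"): 14,
    ("C", "ACC"): 21, ("C", "GYRO"): 22,
}

def get_sensor_type(screen_id, sensor_type):
    """Counterpart of the Get_sensor_type query interface: return the
    sensor type value for a screen identification and a sensor type."""
    return SENSOR_INDEX[(screen_id, sensor_type)]
```

An application can then subscribe to, e.g., sensor No. `get_sensor_type("A", "GRAVITY")` to receive the main screen's gravitational acceleration.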
Referring to fig. 11C, the sensor devices (including the virtual sensors) report their output data to the cooperative-mode application through data channels. In one implementation, the sensors on the main screen (ACC1, GYRO1, MAG1, GRAVITY1) report data through the Android native path, while the sensors on the auxiliary screen and other screens report data through a newly added channel.
It should be noted that the above types of sensors disposed on the main screen and the sub-screen are only exemplary. In practical applications, the application program may implement cooperative work according to output data of various types of sensors disposed on the main screen or the sub-screen. The type of the sensor is not limited in the embodiments of the present application.
According to the method for applying the multi-sensor to the electronic equipment with the flexible screen, the application program can receive output data of the sensors on the main screen and the auxiliary screen. Thus, the application program displayed on the main screen and the application program displayed on the sub-screen can realize cooperative operation according to the output data of the sensors on the main screen and the sub-screen.
The embodiment of the application also provides a method for applying multiple sensors to an electronic device with a flexible screen, in which some sensors provided on the mobile phone do not work while the mobile phone is in the screen-off state (neither the main screen nor the auxiliary screen is lit). If it is detected in the screen-off state that the main screen and the auxiliary screen are unfolded, these sensors are activated.
For example, a mobile phone is provided with various sensors such as a gyroscope sensor, an acceleration sensor, and a magnetic sensor. Sensors with larger power consumption, such as the gyroscope sensor and the acceleration sensor, are set to the off state by default (that is, they do not operate by default) when the mobile phone is in the screen-off state. The magnetic sensor is set to the on state (operating state) by default.
In some embodiments, the mobile phone is provided with a magnetic sensor that can sense the magnetic field of a magnetic object. For example, the magnetic sensor may be a Hall sensor, a magnetometer, or the like. As shown in fig. 12, a magnetic sensor and a magnetic object are disposed on the two screens of the mobile phone respectively, opposite to each other; neither the magnetic sensor nor the magnetic object is disposed on the rotating shaft. In one implementation, when the included angle between the main screen and the auxiliary screen is 0, the projections of the magnetic sensor and the magnetic object onto the main screen partially overlap. As the included angle between the two screens increases, the distance between the magnetic sensor and the magnetic object increases, and the magnetic field intensity detected by the magnetic sensor decreases. Generally, the larger the overlapping area of the projections of the magnetic sensor and the magnetic object onto the main screen, the larger the magnetic field intensity detected by the magnetic sensor when the main screen and the auxiliary screen overlap. When the two screens of the mobile phone are folded together, the magnetic sensor is close to the magnetic object and can sense its magnetic field; when the two screens are unfolded, the magnetic sensor is far from the magnetic object. Illustratively, the magnetic sensor is a magnetometer and the magnetic object is a speaker having magnetic properties. The magnetometer is disposed on the screen opposite the speaker, and when the included angle between the main screen and the auxiliary screen is 0, the projections of the magnetometer and the speaker onto the main screen partially overlap.
Optionally, when the included angle between the main screen and the auxiliary screen is 0, the projection of the magnetometer on the main screen is completely included in the projection of the speaker on the main screen. When the magnetometer is close to the loudspeaker, the magnetic flux of the magnetometer is increased; the magnetic flux of the magnetometer decreases when the magnetometer is away from the speaker.
The sum of the absolute values of the magnetic fluxes of the magnetometer on the three axes of the coordinate system is fabs(X) + fabs(Y) + fabs(Z), where fabs(X), fabs(Y), and fabs(Z) are the absolute values of the magnetic flux of the magnetometer on the X, Y, and Z axes, respectively.
When the mobile phone is in the folded state and in the screen-off state, if the magnetic field change data detected by the magnetometer meets a preset condition, namely that the sum of the absolute values of the magnetic fluxes on the three axes of the coordinate system is smaller than a first threshold, it is determined that the main screen and the auxiliary screen are unfolded. Illustratively, the first threshold is set to the sum of the absolute values of the magnetic fluxes of the magnetometer on the three axes when the included angle between the main screen and the auxiliary screen is 20°.
Illustratively, when the magnetometer detects that the sum of the absolute values of its magnetic fluxes on the three axes of the coordinate system is smaller than the first threshold, it reports a first value to the application program; if the sum is greater than or equal to the first threshold, it reports a second value. If the application program receives the first value while the mobile phone is in the folded state, it determines that the main screen and the auxiliary screen are unfolded.
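This unfold-detection logic can be sketched as below. The threshold value and the report encoding (1 for the first value, 2 for the second) are illustrative assumptions; only the comparison rule and the folded-state precondition come from the text.

```python
# Assumed flux sum corresponding to a 20-degree included angle.
FIRST_THRESHOLD = 120.0

def flux_sum(x: float, y: float, z: float) -> float:
    """Sum of absolute magnetic fluxes on the three axes."""
    return abs(x) + abs(y) + abs(z)

def magnetometer_report(x: float, y: float, z: float) -> int:
    """Report the first value (1) when the flux sum falls below the
    threshold, otherwise the second value (2)."""
    return 1 if flux_sum(x, y, z) < FIRST_THRESHOLD else 2

def screens_unfolded(folded: bool, report: int) -> bool:
    # The application only treats the first value as "unfolded"
    # while the phone is known to be in the folded state.
    return folded and report == 1
```

When `screens_unfolded` returns True in the screen-off state, the phone would set the default-off sensors (gyroscope, accelerometer) to the on state, as described below.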
In the screen-off state of the mobile phone, if the application program determines that the main screen and the auxiliary screen are unfolded, the sensors that are set to the off state by default, such as the gyroscope sensor and the acceleration sensor, are started, that is, they are set to the on state.
In some embodiments, in the screen-off state of the mobile phone, the mobile phone determines that the main screen and the auxiliary screen are unfolded, and may further illuminate the main screen and the auxiliary screen of the mobile phone.
According to the method for applying the multi-sensor to the electronic equipment with the flexible screen, when the mobile phone is in a folded state and is in a screen-off state, the magnetic sensor works normally, and other sensors do not work. When the magnetic sensor detects that the main screen and the auxiliary screen are unfolded, other sensors are started. Therefore, the power consumption of the mobile phone in the screen-off state can be reduced.
The electronic device with the flexible screen can display through the main screen or the auxiliary screen when in the folded state. If the user wants to switch from the current screen to the other screen for display, the mobile phone can be flipped over. When the mobile phone detects that the user has flipped it, the mobile phone can perform screen-switching processing, that is, turn off the screen currently displaying content, light up the other screen, and display the content on that screen.
In some embodiments, the content displayed on the screen after the mobile phone switches screens is the same as the content displayed before the switch. Illustratively, as shown in fig. 13 (a), in the folded state, the main screen of the mobile phone displays the desktop. In response to the user's operation of flipping the mobile phone, the main screen is turned off and the auxiliary screen is lit and displays the desktop, as shown in fig. 13 (b).
In some embodiments, the content displayed after the switch is different from the content displayed before the switch. Illustratively, a camera is disposed on the auxiliary screen of the mobile phone. As shown in fig. 14 (a), with the mobile phone in the folded state, the user uses the camera to photograph an object (a flower) in front of the mobile phone. The main screen is lit and displays the photographing preview picture, showing the image of the flower; the auxiliary screen is off. The user can then flip the mobile phone and use the camera to take a selfie. As shown in fig. 14 (b), in response to the user's operation of flipping the phone, the main screen is turned off, the auxiliary screen is lit, and the auxiliary screen displays the selfie preview picture showing the selfie image.
The embodiment of the application provides a method for applying multiple sensors to an electronic device with a flexible screen. When the electronic device is in the folded state with one screen (the main screen or the auxiliary screen) lit and the other screen off, the mobile phone can determine from the output data of the sensors that the user's operation of flipping the mobile phone has been received, and perform screen-switching processing.
In some embodiments, the mobile phone determines that the physical form of the mobile phone is the folded state according to the output data of the sensor.
In one implementation, multiple sensors may be used to determine the posture of the mobile phone. For example, a gyroscope sensor GYRO1 and an acceleration sensor ACC1 are disposed on the main screen of the mobile phone, and a gyroscope sensor GYRO2 and an acceleration sensor ACC2 are disposed on the auxiliary screen. The mobile phone can determine its posture from the output data of GYRO1, ACC1, GYRO2, and ACC2; if it is determined that the included angle α between the main screen and the auxiliary screen is greater than or equal to 0° and less than or equal to a second threshold (e.g., 20°), the mobile phone is determined to be in the folded state. For a specific implementation, reference may be made to the description of the foregoing embodiments, which is not repeated here.
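One hedged way to sketch this angle check is below. The patent only states that the posture is derived from the four sensors' output data; the specific computation here, taking the fold angle as the angle between the gravity vectors measured by the two screens' accelerometers while the phone is static, is an assumption for illustration.

```python
import math

SECOND_THRESHOLD_DEG = 20.0  # example second threshold from the text

def included_angle_deg(g_main, g_aux):
    """Angle between the two screens' gravity vectors, in degrees.
    g_main / g_aux are (x, y, z) accelerometer readings taken while
    the phone is static, so each vector points along gravity in its
    own screen's coordinate system."""
    dot = sum(a * b for a, b in zip(g_main, g_aux))
    norm = (math.sqrt(sum(c * c for c in g_main))
            * math.sqrt(sum(c * c for c in g_aux)))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def is_folded(g_main, g_aux):
    alpha = included_angle_deg(g_main, g_aux)
    return 0.0 <= alpha <= SECOND_THRESHOLD_DEG
```

In practice the gyroscopes (GYRO1, GYRO2) would be fused in to keep the angle estimate valid while the phone is moving; the static-accelerometer case shown here is the simplest slice of that computation.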
In another implementation, a magnetic sensor may be used to determine that the mobile phone is in the folded state. For example, a magnetic sensor and a magnetic object are disposed on the two screens of the mobile phone respectively, opposite to each other; neither the magnetic sensor nor the magnetic object is disposed on the rotating shaft. When the two screens of the mobile phone are folded together, the magnetic sensor is close to the magnetic object and can sense its magnetic field; when the two screens are unfolded, the magnetic sensor is far from the magnetic object. Illustratively, the magnetic sensor is a magnetometer and the magnetic object is a speaker having magnetic properties. When the magnetometer is close to the speaker, its magnetic flux increases; when the magnetometer is far from the speaker, its magnetic flux decreases. Illustratively, the positions of the magnetic sensor and the magnetic object are shown in fig. 12.
The sum of the absolute values of the magnetic fluxes of the magnetometer on the three axes of the coordinate system is fabs(X) + fabs(Y) + fabs(Z), where fabs(X), fabs(Y), and fabs(Z) are the absolute values of the magnetic flux of the magnetometer on the X, Y, and Z axes, respectively.
If the sum of the absolute values of the magnetic fluxes of the magnetometer on the three axes of the coordinate system is determined to be greater than or equal to the first threshold, the mobile phone is determined to be in the folded state. Illustratively, the first threshold is set to the sum of the absolute values of the magnetic fluxes of the magnetometer on the three axes when the included angle between the main screen and the auxiliary screen is 20°.
In some embodiments, the mobile phone determines that the user's operation of turning the mobile phone is received according to the output data of the sensor.
For example, as shown in fig. 15, the operation of the user to turn the mobile phone may include one-handed turning, two-handed turning, hand-change turning, and the like.
If the mobile phone detects that it has been rotated about one axis by an angle greater than a first angle (e.g., 120°), it determines that the user's operation of flipping the mobile phone has been received. For example, the phone may be flipped about an axis parallel to its long side (the Y axis), as illustrated in figs. 16A and 16B, or about an axis parallel to its short side (the X axis), as illustrated in figs. 16C and 16D.
In one implementation, the mobile phone can determine the angle through which it has rotated about one axis from the output data of the gyroscope sensor. Take as an example a gyroscope sensor that is mounted on the main screen and shares a coordinate system with that screen. If the component along the X axis or the Y axis of the integral of the gyroscope sensor's output data over a first time length is greater than the first angle, it is determined that the mobile phone has rotated about the X axis or the Y axis by more than the first angle, and that the user's operation of flipping the mobile phone has been received.
Illustratively, the output data of the gyroscope sensor is a rotational angular velocity ω. The integral Ω of ω over a time length s is the angle through which the mobile phone rotates during s. For example, if Ωx, the component of Ω on the X axis, is 0; Ωy, the component of Ω on the Y axis, is greater than 0; and Ωz, the component of Ω on the Z axis, is 0, then it is determined that the mobile phone rotates about the Y axis. If Ωy is greater than the first angle, it is determined that the mobile phone has been flipped about the Y axis.
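The integration step above can be sketched as follows. This is a minimal illustration under assumed conditions: discrete angular-velocity samples in degrees per second at a fixed sampling interval, with the 120° first angle taken from the example in the text.

```python
FIRST_ANGLE_DEG = 120.0  # example first angle from the text

def flip_axis(samples, dt):
    """Integrate gyroscope angular-velocity samples over a window.
    samples: iterable of (wx, wy, wz) in deg/s; dt: sampling interval
    in seconds. Returns the axis ('x' or 'y') whose accumulated
    rotation magnitude exceeds the first angle, or None if no flip
    is detected."""
    omega = [0.0, 0.0, 0.0]  # accumulated rotation Ωx, Ωy, Ωz
    for wx, wy, wz in samples:
        omega[0] += wx * dt
        omega[1] += wy * dt
        omega[2] += wz * dt
    if abs(omega[0]) > FIRST_ANGLE_DEG:
        return "x"
    if abs(omega[1]) > FIRST_ANGLE_DEG:
        return "y"
    return None
```

For instance, two seconds of rotation at 90 deg/s about the Y axis accumulates Ωy = 180°, which exceeds the first angle, so a flip about the Y axis is reported.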
In some embodiments, after determining that the user's operation of flipping the mobile phone has been received, the mobile phone decides whether to perform the screen-switching action according to the gesture with which the user is gripping it. With the first screen lit, after the mobile phone receives the user's flip operation: if the user's grip gesture indicates that the first screen faces the palm, the mobile phone responds to the flip operation by performing the screen-switching action and lighting the second screen; if the grip gesture indicates that the second screen faces the palm, the screen-switching action is not performed and the first screen remains lit.
For example, as shown in fig. 15, with the first screen (main screen) lit, the mobile phone receives the user's operation of flipping it. After the flip, the user's grip gesture has the first screen (main screen) facing the palm, so the mobile phone responds to the flip operation by performing the screen-switching action and lighting the second screen (auxiliary screen).
For example, as shown in fig. 17, a user views a picture on the main screen of the mobile phone, then flips the phone to show the main screen to other people; the mobile phone does not perform the screen-switching action. With the first screen (main screen) lit, the mobile phone receives the user's flip operation; after the flip, the user's grip gesture has the second screen (auxiliary screen) facing the palm, so the mobile phone does not perform the screen-switching action and keeps the first screen (main screen) lit.
In one implementation, the mobile phone may use a preset gripping algorithm to determine, from the current touch positions reported by the touch device, the gesture with which the user is currently gripping the phone. Illustratively, the mobile phone determines the user's touch areas on the main screen and the auxiliary screen from the coordinates of the touch points reported by the touch device: if the touch area on the main screen is larger than that on the auxiliary screen, it can be determined that the main screen faces the palm; if the touch area on the main screen is smaller than that on the auxiliary screen, it can be determined that the auxiliary screen faces the palm. For the specific gripping algorithm, reference may be made to the description in the conventional technology, which is not detailed here. Of course, the mobile phone may also combine other sensors to identify its specific orientation. For example, while recognizing the user's grip gesture with the gripping algorithm, the mobile phone may turn on a camera to detect whether face information is captured. This is not limited in this application.
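The touch-area comparison can be sketched as below. The input format, a list of touch points each tagged with its screen and an approximate contact area, is an assumption; the comparison rule itself follows the text.

```python
# Hedged sketch of the grip-gesture heuristic described above.
def grip_orientation(touch_points):
    """touch_points: iterable of (screen, area) tuples, where screen
    is 'main' or 'aux' and area is the approximate contact area of
    one touch point. Returns which screen faces the palm."""
    main_area = sum(a for s, a in touch_points if s == "main")
    aux_area = sum(a for s, a in touch_points if s == "aux")
    if main_area > aux_area:
        return "main_faces_palm"
    if main_area < aux_area:
        return "aux_faces_palm"
    return "unknown"
```

The intuition is that the palm and wrapped fingers produce a much larger total contact area on the screen they cover than the fingertips do on the screen facing the user, so comparing the two sums distinguishes the orientations.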
In some embodiments, when the mobile phone determines that the user's flip operation has been received, it decides whether to perform the screen-switching action according to its posture. For example, with the mobile phone stationary and the main screen facing up, the user's flip operation is received; if the posture of the mobile phone is then determined to be stationary with the auxiliary screen facing up, the mobile phone responds to the flip operation by performing the screen-switching action and lighting the auxiliary screen. Similarly, with the mobile phone stationary and the auxiliary screen facing up, the flip operation is received; if the posture is then determined to be stationary with the main screen facing up, the mobile phone responds to the flip operation by performing the screen-switching action and lighting the main screen. For the method of determining the posture of the mobile phone from the sensor output data, reference may be made to the foregoing embodiments, which is not repeated here.
In some embodiments, the user flips the mobile phone several times in succession. For example, if it is determined that the mobile phone has rotated 360° about the X axis or the Y axis, the mobile phone has been flipped twice in succession. In one implementation, the mobile phone does not judge whether to perform the screen-switching action while the user is still flipping it; only after the flip operation has stopped does it determine whether to switch screens. For example, the mobile phone may determine its rotational speed from the output data of the gyroscope sensor: if the rotation angle per unit time, Ω/s, is greater than zero, the mobile phone is still being flipped; if Ω/s equals zero, the flipping has stopped.
In some embodiments, an erroneous screen switch may occur during the flip-and-switch process. If the mobile phone determines that the currently lit screen is wrong, it can perform the screen-switching action again; thus, the erroneous switch can be corrected.
In one implementation, the mobile phone records the currently lit screen (main or auxiliary). Illustratively, the currently lit screen is recorded with a first record value: for example, a first record value of 1 indicates that the main screen is lit, and 0 indicates that the auxiliary screen is lit. When the mobile phone first lights a screen in the folded state, the first record value is initialized according to that screen; the first record value is updated every time the mobile phone performs a screen-switching action.
After each screen-switching action, the mobile phone checks for an erroneous switch. For example, after each switch, it determines the user's current grip gesture (main screen facing the palm or auxiliary screen facing the palm) and determines the currently lit screen (main or auxiliary) from the first record value. If the grip gesture indicates that the main screen faces the palm while the first record value indicates that the currently lit screen is the main screen, the mobile phone performs a screen-switching action, turning off the main screen and lighting the auxiliary screen. If the grip gesture indicates that the auxiliary screen faces the palm while the first record value indicates that the currently lit screen is the auxiliary screen, the mobile phone performs a screen-switching action, turning off the auxiliary screen and lighting the main screen.
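This correction check can be sketched as follows. The record encoding (1 for main lit, 0 for auxiliary lit) follows the text; the function name and the palm-facing string values are illustrative assumptions.

```python
# Hedged sketch of the erroneous-switch correction described above.
def corrected_record(record, palm_facing):
    """After a screen switch, compare the lit screen (from the first
    record value: 1 = main lit, 0 = auxiliary lit) with the grip
    gesture (palm_facing: 'main' or 'aux'). If the lit screen is the
    one facing the palm, the switch was wrong: switch again and
    return the updated record value. Otherwise keep the record."""
    lit = "main" if record == 1 else "aux"
    if lit == palm_facing:
        # The lit screen faces the palm, so the user cannot see it;
        # perform another switch by flipping the record value.
        return 1 - record
    return record
```

For example, if the record says the main screen is lit but the grip gesture says the main screen faces the palm, the phone switches again so the auxiliary screen (the one facing the user) ends up lit.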
For example, the flow by which the mobile phone determines whether to perform the screen-switching action is shown in fig. 18. The phone posture recognition module determines the posture of the mobile phone from the output data of the ACC and the GYRO. The grip gesture recognition module determines the gesture with which the user grips the phone from the output data of the TP. The flip recognition module determines from the output data of the GYRO whether the user's flip operation has been received. The data fusion module then decides whether to perform the screen-switching action based on the recognition results of the phone posture recognition module, the grip gesture recognition module, and the flip recognition module.
For example, referring to fig. 19, the sensor service module of the application framework layer of the mobile phone may determine data such as the phone's posture and rotation angle from the sensor output data, and then send the fused data to the intelligent flip detection algorithm module of the hardware abstraction layer (HAL), which determines whether the mobile phone has been flipped. The HAL also contains a false-touch prevention algorithm module, which decides whether to perform the screen-switching action based on the detection result of the intelligent flip detection algorithm module and the grip gesture determined by the TP process. If it decides to switch, it notifies the screen-switching module through the input system, completing one screen-switching operation. Further, after the screen-switching module performs the switch, the state and command module can instruct the hardware layer to adjust the TP characteristic values for reference calibration, improving the accuracy of the TP output data.
According to this method for applying multiple sensors to an electronic device with a flexible screen, when the electronic device is in the folded state with one screen (the main screen or the auxiliary screen) lit and the other off, the mobile phone can determine from the sensor output data that the user's flip operation has been received, and judge whether to perform the screen-switching action according to the user's grip gesture or the phone's posture. It can also check for an erroneous switch in combination with the user's grip gesture and correct it. This improves the accuracy of screen switching when the user flips the mobile phone.
It is understood that the electronic device includes hardware structures and/or software modules for performing the functions in order to realize the functions. Those of skill in the art would appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
An embodiment of the present application further provides an electronic device, which includes a processor, and a memory, an input device, and an output device connected to the processor. The input device and the output device may be integrated into one device, for example, the touch device of the flexible screen may be used as the input device, and the display of the flexible screen may be used as the output device.
At this time, as shown in fig. 20, the electronic device may include: a flexible screen 2001, the flexible screen 2001 comprising a touch device 2006 and a display 2007; one or more processors 2002; one or more memories 2003; one or more sensors 2008; one or more application programs (not shown); and one or more computer programs 2004, which may be connected via one or more communication buses 2005. Wherein the one or more computer programs 2004 are stored in the memory 2003 and configured to be executed by the one or more processors 2002, the one or more computer programs 2004 comprising instructions that may be used to perform the steps of the embodiments described above. All relevant contents of the steps related to the above method embodiment may be referred to the functional description of the corresponding entity device, and are not described herein again.
For example, the processor 2002 may be specifically the processor 110 shown in fig. 3, the memory 2003 may be specifically the internal memory 121 and/or the external memory shown in fig. 3, the flexible screen 2001 may be specifically the flexible screen 194 shown in fig. 3, and the sensor 2008 may be specifically one or more of the gyroscope sensor 180C, the acceleration sensor 180D, and the magnetic sensor 180E in the sensor module 180 shown in fig. 3, and may also be one or more of the touch device 180F, the proximity light sensor 180H, the ambient light sensor 180M, and the touch sensor 180L, which is not limited in this embodiment of the present application.
The embodiment of the present application further provides a computer storage medium, where a computer program code is stored in the computer storage medium, and when the processor executes the computer program code, the electronic device executes the relevant method steps executed by the electronic device in the steps related to the above method embodiments.
The embodiment of the present application further provides a computer program product, which, when running on a computer, causes the computer to execute the relevant method steps executed by the electronic device in the steps related to the above method embodiments.
In addition, the electronic device, the computer storage medium, or the computer program product provided in the embodiments of the present application are all configured to execute the corresponding method provided above, and therefore, the beneficial effects achieved by the electronic device, the computer storage medium, or the computer program product may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processor, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially or partially contributed to by the prior art, or all or part of the technical solutions may be embodied in the form of a software product, where the software product is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, or the like) or a processor (processor) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a magnetic disk, or an optical disk.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. A method for applying multiple sensors to an electronic device with a flexible screen, wherein the flexible screen comprises at least a first screen and a second screen, the first screen and the second screen are each provided with at least one sensor, the sensors comprise a first sensor and a second sensor, the first sensor is a magnetometer, the magnetometer is disposed on the first screen, and a magnetic object is disposed on the second screen, the method comprising:
when a first operation of a user is detected, determining a posture of the electronic device, and determining a lighting mode according to the posture of the electronic device;
after the electronic device is lit up, a first application displayed on the first screen receives first data from a sensor disposed on the first screen;
the first application changes the display content of the first application according to the first data;
a second application displayed on the second screen receives second data from a sensor disposed on the second screen;
the second application changes the display content of the second application according to the second data;
wherein the first application and the second application run in the foreground simultaneously;
when the electronic device is in a screen-off state, if the magnetic-field change data detected by the first sensor is smaller than a first threshold, the electronic device determines that the first screen and the second screen are unfolded; when the electronic device is in the screen-off state, the first sensor is in an on state and the second sensor is in an off state; the magnetic-field change data is the sum of the absolute values of the magnetic flux on the three axes of a coordinate system;
in response to the first screen and the second screen being unfolded, the electronic device sets the second sensor to an on state.
2. The method of claim 1, further comprising:
the first application receives the second data or display content of the second application;
the second application receives the first data or the display content of the first application;
the first application changes the display content of the first application according to the received second data or the display content of the second application;
and the second application changes the display content of the second application according to the received first data or the display content of the first application.
3. The method according to claim 1 or 2,
the first application and the second application are the same application.
4. The method according to claim 1 or 2,
the first application and the second application are game-type applications.
5. The method according to claim 1 or 2,
the first application is a shooting application, and the second application is a video playing application or a picture displaying application.
6. The method according to claim 1 or 2,
the first application is a payment application, and the second application is a picture display page in the first application.
7. The method of claim 1 or 2, wherein the sensor comprises at least one of a gyroscope sensor, an acceleration sensor, a magnetometer, an ambient light sensor, or a proximity light sensor.
8. An electronic device, comprising:
the flexible screen at least comprises a first screen and a second screen, and the first screen and the second screen are respectively provided with at least one sensor;
one or more processors;
one or more memories;
and one or more computer programs, wherein the one or more computer programs are stored in the one or more memories, the one or more computer programs comprising instructions, which when executed by the electronic device, cause the electronic device to perform the method of any of claims 1-7.
9. A computer-readable storage medium having instructions stored therein, which when run on an electronic device, cause the electronic device to perform the method of any of claims 1-7.
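The screen-off unfold detection recited in claim 1 can be sketched as follows. This is a minimal illustration only, not the patented implementation: the `FoldStateDetector` class, method names, and the threshold value are all assumptions introduced for clarity; the only elements taken from the claim are the flux-sum metric, the below-threshold unfold test, and the rule that the second sensor is powered on only after unfolding is detected.

```python
def magnetic_field_magnitude(bx: float, by: float, bz: float) -> float:
    """Sum of the absolute magnetic flux on the three axes (the metric in claim 1)."""
    return abs(bx) + abs(by) + abs(bz)


class FoldStateDetector:
    """Hypothetical helper modeling the sensor states described in claim 1."""

    def __init__(self, first_threshold: float):
        self.first_threshold = first_threshold  # assumed tuning value, not from the patent
        self.second_sensor_on = False  # second sensor is off while the screen is off

    def on_magnetometer_sample(self, bx: float, by: float, bz: float,
                               screen_off: bool = True):
        # Per claim 1, this detection only runs in the screen-off state,
        # where the magnetometer (first sensor) is the only sensor kept on.
        if not screen_off:
            return None
        reading = magnetic_field_magnitude(bx, by, bz)
        if reading < self.first_threshold:
            # The magnet on the second screen has moved away from the
            # magnetometer on the first screen: the screens are unfolded,
            # so the second sensor is switched to the on state.
            self.second_sensor_on = True
            return "unfolded"
        return "folded"
```

A large flux sum (magnet close to the magnetometer, screens folded shut) keeps the second sensor off; a flux sum below the threshold flips it on. Gating the second sensor on the cheap, always-on magnetometer is what lets the device keep most sensors powered down while folded.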
CN201910667547.XA 2019-07-23 2019-07-23 Method for applying multiple sensors to electronic equipment with flexible screen and electronic equipment Active CN110536004B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910667547.XA CN110536004B (en) 2019-07-23 2019-07-23 Method for applying multiple sensors to electronic equipment with flexible screen and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910667547.XA CN110536004B (en) 2019-07-23 2019-07-23 Method for applying multiple sensors to electronic equipment with flexible screen and electronic equipment

Publications (2)

Publication Number Publication Date
CN110536004A CN110536004A (en) 2019-12-03
CN110536004B true CN110536004B (en) 2021-08-31

Family

ID=68660674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910667547.XA Active CN110536004B (en) 2019-07-23 2019-07-23 Method for applying multiple sensors to electronic equipment with flexible screen and electronic equipment

Country Status (1)

Country Link
CN (1) CN110536004B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111262975B (en) * 2020-01-08 2021-06-08 华为技术有限公司 Bright screen control method, electronic device, computer-readable storage medium, and program product
CN113542453A (en) * 2020-03-31 2021-10-22 北京小米移动软件有限公司 Folding screen terminal device, state detection method thereof and electronic device
CN111610821A (en) * 2020-04-01 2020-09-01 联想(北京)有限公司 Electronic equipment
CN111651441B (en) * 2020-05-11 2023-05-09 北京小米移动软件有限公司 Data processing method and device and computer storage medium
CN113703519A (en) * 2020-05-21 2021-11-26 北京小米移动软件有限公司 Method and device for determining posture of folding screen and storage medium
WO2022075490A1 (en) * 2020-10-05 2022-04-14 엘지전자 주식회사 Mobile terminal and control method thereof
CN112596600A (en) * 2020-12-16 2021-04-02 惠州Tcl移动通信有限公司 Screen unlocking method and device, storage medium and mobile terminal
CN114489534B (en) * 2021-07-23 2023-07-14 荣耀终端有限公司 Display switching method of foldable electronic equipment, electronic equipment and medium
CN116781809A (en) * 2021-11-19 2023-09-19 荣耀终端有限公司 Hinge angle detection method and related equipment
CN116399283B (en) * 2021-11-19 2023-10-24 荣耀终端有限公司 Hinge angle detection method and related equipment
CN116366750B (en) * 2021-12-28 2024-04-16 荣耀终端有限公司 Method for determining included angle of folding screen and related equipment thereof
CN116033051B (en) * 2022-08-10 2023-10-20 荣耀终端有限公司 Folding angle detection method and device for folding screen and readable storage medium
CN115113747B (en) * 2022-08-22 2023-04-07 荣耀终端有限公司 Touch pen using method and system and touch pen
CN116048246B (en) * 2022-08-25 2023-11-10 荣耀终端有限公司 Display assembly and control method of display device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107632895A (en) * 2017-08-31 2018-01-26 维沃移动通信有限公司 A kind of information sharing method and mobile terminal
CN108227897A (en) * 2017-11-29 2018-06-29 努比亚技术有限公司 Control method for screen display, flexible screen terminal and computer readable storage medium
CN108509780A (en) * 2018-03-02 2018-09-07 陈参 A kind of good mobile terminal of security performance

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103399684A (en) * 2013-07-03 2013-11-20 惠州Tcl移动通信有限公司 Display screen with size changeable, mobile terminal and realizing method of mobile terminal
WO2017031713A1 (en) * 2015-08-26 2017-03-02 Hewlett-Packard Development Company, L.P. Display unit with a base
CN107656666B (en) * 2016-07-26 2020-09-25 北京小米移动软件有限公司 Mobile terminal and scrolling speed determination method
US11429198B2 (en) * 2016-11-30 2022-08-30 Huawei Technologies Co., Ltd. Terminal device control method and terminal device
CN108196773A (en) * 2017-11-30 2018-06-22 努比亚技术有限公司 Control method, terminal and the computer readable storage medium of flexible screen terminal
CN108228032A (en) * 2018-01-26 2018-06-29 维沃移动通信有限公司 The control method and mobile terminal of a kind of display screen
CN108769382B (en) * 2018-04-28 2021-05-21 努比亚技术有限公司 Message display method, wearable device and computer readable storage medium
CN109491541B (en) * 2018-10-31 2021-09-03 维沃移动通信有限公司 Control method and mobile terminal
CN109947319A (en) * 2019-03-15 2019-06-28 Oppo广东移动通信有限公司 The management method of application program, device and electronic equipment in electronic equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107632895A (en) * 2017-08-31 2018-01-26 维沃移动通信有限公司 A kind of information sharing method and mobile terminal
CN108227897A (en) * 2017-11-29 2018-06-29 努比亚技术有限公司 Control method for screen display, flexible screen terminal and computer readable storage medium
CN108509780A (en) * 2018-03-02 2018-09-07 陈参 A kind of good mobile terminal of security performance

Also Published As

Publication number Publication date
CN110536004A (en) 2019-12-03

Similar Documents

Publication Publication Date Title
CN110536004B (en) Method for applying multiple sensors to electronic equipment with flexible screen and electronic equipment
EP4084450B1 (en) Display method for foldable screen, and related apparatus
CN111949345B (en) Application display method and electronic equipment
CN109981839B9 (en) Display method of electronic equipment with flexible screen and electronic equipment
CN110119295B (en) Display control method and related device
CN112217923B (en) Display method of flexible screen and terminal
CN111669459B (en) Keyboard display method, electronic device and computer readable storage medium
CN110798568B (en) Display control method of electronic equipment with folding screen and electronic equipment
CN112671976B (en) Control method and device of electronic equipment, electronic equipment and storage medium
CN112600961A (en) Volume adjusting method and electronic equipment
CN112860359A (en) Display method and related device of folding screen
CN112506386A (en) Display method of folding screen and electronic equipment
CN112751954B (en) Operation prompting method and electronic equipment
CN110633043A (en) Split screen processing method and terminal equipment
CN114125130B (en) Method for controlling communication service state, terminal device and readable storage medium
CN111602108A (en) Application icon display method and terminal
CN113452945A (en) Method and device for sharing application interface, electronic equipment and readable storage medium
CN112583957A (en) Display method of electronic device, electronic device and computer-readable storage medium
CN113010076A (en) Display element display method and electronic equipment
CN113641271A (en) Application window management method, terminal device and computer readable storage medium
CN114253349A (en) Folding equipment and opening and closing control method thereof
CN110058729B (en) Method and electronic device for adjusting sensitivity of touch detection
CN112449101A (en) Shooting method and electronic equipment
CN112584037B (en) Method for saving image and electronic equipment
CN112241194B (en) Folding screen lighting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant