CN114125148A - Control method of electronic equipment operation mode, electronic equipment and readable storage medium

Info

Publication number: CN114125148A
Application number: CN202210025952.3A
Authority: CN (China)
Prior art keywords: electronic device, electronic equipment, processor, proximity, data
Legal status: Granted; currently active
Other languages: Chinese (zh)
Other versions: CN114125148B (en)
Inventor: 王石磊 (Wang Shilei)
Current Assignee: Honor Device Co Ltd
Original Assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd; priority to CN202210025952.3A; publication of CN114125148A; application granted; publication of CN114125148B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72463User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions to restrict the functionality of the device

Abstract

An embodiment of the present application provides a method for controlling the operation mode of an electronic device, an electronic device, and a computer-readable storage medium. The method comprises: when the electronic device determines that an object detected by the proximity light sensor is within a set range of the electronic device and the electronic device is in a preset state, acquiring data through a front-facing camera; and when the data is determined to include face key information, controlling the electronic device not to enter an anti-false-touch mode. The preset state is either that the electronic device is in a call state and its posture is the use posture, or that the display screen is displaying a lock screen interface. In other words: when the display screen shows the lock screen interface or the device is in a call, if the proximity light sensor determines that an object is approaching but the data acquired through the front-facing camera includes face key information, it can be inferred that the user is actively using the electronic device and that the proximity detection is a false alarm caused by interference; the electronic device is therefore controlled not to enter the anti-false-touch mode, so that normal use is not blocked.

Description

Control method of electronic equipment operation mode, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method for controlling an operation mode of an electronic device, an electronic device, and a computer-readable storage medium.
Background
Currently, electronic devices are provided with a proximity light sensor for detecting whether an object is approaching the device. In a call scenario, or when a lock screen interface is displayed, the electronic device can determine whether to enter the anti-false-touch mode according to the detection result of the proximity light sensor. In the usual case, when the proximity light sensor detects an approaching object, the electronic device enters the anti-false-touch mode, for example by controlling the display screen to turn off, so that the screen is not operated by mistake when the user inadvertently touches it.
However, during actual use, the detection result of the proximity light sensor can be made inaccurate by interference. For example, the electronic device may be placed in a waterproof bag to protect it from water damage. After long use, the surface of the waterproof bag becomes worn and no longer fully transparent, and the proximity light sensor mistakes the bag for an approaching object, producing a detection result that an object is near. In a call scenario, or when the lock screen interface is displayed, the electronic device then wrongly enters the anti-false-touch mode according to this detection result, which affects its normal use.
Disclosure of Invention
The present application provides a method for controlling an operation mode of an electronic device, an electronic device, a computer program product, and a computer-readable storage medium, and aims to solve the problem that interference with the proximity light sensor produces inaccurate detection results, causing the electronic device to enter the anti-false-touch mode and preventing its normal use.
In order to achieve the above object, the present application provides the following technical solutions:
In a first aspect, the present application provides a method for controlling an operation mode of an electronic device, applicable to an electronic device including a proximity light sensor, a front-facing camera, and a display screen. The method comprises: the electronic device detects objects through the proximity light sensor; when it determines that the object detected by the proximity light sensor is within a set range of the electronic device and the electronic device is in a preset state, it acquires data through the front-facing camera; and when it determines that the data includes face key information, it controls the electronic device not to enter the anti-false-touch mode. The preset state is either that the electronic device is in a call state and its posture is the use posture (which can be understood as the posture of a user holding the electronic device close to the ear), or that the display screen of the electronic device is displaying a lock screen interface.
From the above it can be seen that, when the display screen of the electronic device is displaying the lock screen interface or the device is in a call scenario, if the proximity light sensor determines that an object is approaching, the electronic device judges whether the data acquired through the front-facing camera includes face key information. If it does, it can be inferred that a face is in front of the electronic device and the user is using it, so the proximity detection is a false alarm caused by interference; the electronic device is controlled not to enter the anti-false-touch mode, which keeps it usable.
In one possible embodiment, the electronic device is placed inside a waterproof bag, the surface of which is not completely transparent.
In this possible embodiment, the electronic device is placed in a waterproof bag whose surface is not fully transparent. When the lock screen interface is displayed or the device is in a call scenario, the proximity light sensor may mistake the incompletely transparent waterproof bag for an approaching object and produce a detection result that an object is near. If the user is using the electronic device, the data acquired through the front-facing camera will include face key information; in that case the electronic device is controlled not to enter the anti-false-touch mode, which avoids the problem of the waterproof bag interfering with the proximity light sensor, causing the device to wrongly enter the anti-false-touch mode and affecting its use.
In one possible embodiment, acquiring data through the front-facing camera includes: acquiring image data through the front-facing camera.
In one possible embodiment, acquiring data through the front-facing camera includes: acquiring depth data through the front-facing camera.
In this possible embodiment, the front-facing camera acquires depth data without relying on bright ambient light. Therefore, even when the brightness of the environment is low, the electronic device can still accurately identify whether face key information is present using the depth data acquired by the front-facing camera.
In one possible embodiment, before acquiring image data through the front-facing camera, the method further includes: determining that the ambient light brightness of the environment in which the electronic device is located is greater than a threshold.
In this possible embodiment, an image captured by the front-facing camera in a dark environment cannot be used to effectively identify whether it includes face key information, so starting the camera to capture images when the ambient light brightness is not greater than the threshold serves no practical purpose and only increases power consumption. The front-facing camera is therefore started to capture images only when the ambient light brightness exceeds the threshold.
In one possible embodiment, the method further includes: determining that the ambient light brightness is not greater than the threshold, and controlling the electronic device to enter the anti-false-touch mode.
In one possible embodiment, controlling the electronic device not to enter the anti-false-touch mode includes: controlling the processor of the electronic device not to report a proximity event to the upper-layer application, where the proximity event is generated when the proximity light sensor detects that an object is within the set range of the electronic device.
In one possible embodiment, the method further includes: determining that the data does not include face key information, and controlling the electronic device to enter the anti-false-touch mode.
In one possible embodiment, the method further includes: determining that the object detected by the proximity light sensor is within the set range of the electronic device, that the electronic device is in a call state, and that the posture of the electronic device is not the use posture, and controlling the electronic device to enter the anti-false-touch mode.
In one possible embodiment, controlling the electronic device to enter the anti-false-touch mode includes: controlling the processor of the electronic device to report a proximity event to the upper-layer application, where the proximity event is generated when the proximity light sensor detects that an object is within the set range of the electronic device.
In one possible embodiment, the method further includes: determining that the object detected by the proximity light sensor is within the set range of the electronic device and that the electronic device is not in the preset state, and controlling the processor of the electronic device to report a proximity event to the upper-layer application, where the proximity event is generated when the proximity light sensor detects that an object is within the set range of the electronic device.
In one possible embodiment, determining that the data includes face key information includes: invoking a face recognition model to process the data to obtain a processing result, where the processing result indicates that the data includes face key information.
In one possible implementation, whether the electronic device is in a call state is detected by monitoring the originating call flow or the receiving call flow of the call application of the electronic device.
In one possible implementation, whether the posture of the electronic device is the use posture is detected by: calculating the pitch angle and roll angle of the electronic device using the detection values of the acceleration sensor of the electronic device, and determining whether the pitch angle and roll angle remain within an attitude threshold range for a preset duration, where the attitude threshold range includes a pitch angle range corresponding to the use posture and a roll angle range corresponding to the use posture.
In one possible implementation, whether the electronic device is displaying the lock screen interface is detected by monitoring the screen-locking process of the screen-locking application of the electronic device.
In a second aspect, the present application provides an electronic device comprising: one or more processors, a memory, a front-facing camera, a display screen, and a proximity light sensor. The memory, the front-facing camera, the proximity light sensor, and the display screen are coupled to the one or more processors. The memory is configured to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method for controlling an operation mode of an electronic device according to any one of the first aspect.
In a third aspect, the present application provides a computer-readable storage medium for storing a computer program, which when executed is specifically configured to implement the control method for the operation mode of the electronic device according to any one of the first aspect.
In a fourth aspect, the present application provides a computer program product containing instructions. When the computer program product runs on a computer or a processor, the computer or the processor is caused to execute the method for controlling an operation mode of an electronic device according to any one of the first aspect.
Drawings
Fig. 1 is a display diagram of an application scenario of an electronic device provided in the present application;
fig. 2a is a hardware structure diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2b is a software architecture diagram of an electronic device provided in an embodiment of the present application;
fig. 3 is a diagram illustrating a call scenario provided in an embodiment of the present application;
fig. 4 is a timing diagram of a control method for an operation mode of an electronic device according to an embodiment of the present application;
fig. 5 is a timing chart of a control method for an operation mode of an electronic device according to a second embodiment of the present application;
fig. 6 is a timing chart of a control method for an operation mode of an electronic device according to a third embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. The terminology used in the following examples is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, such as "one or more", unless the context clearly indicates otherwise. It should also be understood that in the embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In the embodiments of the present application, "a plurality of" means two or more. It should be noted that, in the description of the embodiments of the present application, the terms "first", "second", and the like are used only to distinguish the descriptions and are not to be construed as indicating or implying relative importance or order.
To describe the technical solutions of the present application more clearly, the related concepts involved are explained below.
1) The screen locking interface, also called the bright-screen locking interface, refers to the lock screen graphical user interface displayed on the display screen of the electronic device. The user can enter the main interface of the electronic device only after performing a specific operation on this interface, which effectively protects the data in the electronic device. The specific operation may be at least one of the following: the user slides on the touch screen and then inputs a password, or performs fingerprint recognition, face recognition, or iris recognition.
The lock screen interface may include a plurality of interface elements, such as lock screen wallpaper, fingerprint unlock identification, a prompt box, an icon for quick launch of the camera, time and date, and the like.
2) The smart sensor hub (sensor hub) provides a solution based on a combination of software and hardware on top of a low power MCU and a lightweight RTOS operating system, whose main function is to connect and process data from various sensor devices.
Currently, an electronic device is provided with a proximity light sensor for detecting whether an object is approaching the electronic device. In one example, when the electronic device is displaying the lock screen interface or is in a call state, it may determine whether to enter the anti-false-touch mode according to the detection result obtained by the proximity light sensor. If the detection result is that an object is near, the electronic device enters the anti-false-touch mode, for example by controlling the display screen to turn off, so that the screen is not operated by mistake when the user inadvertently touches it.
Fig. 1 shows an application scenario of the electronic device. In the scenario shown in fig. 1, the electronic device is placed in a waterproof bag to protect it from water damage. However, after long use the surface of the waterproof bag becomes worn and no longer fully transparent, which affects normal use of the electronic device.
As shown in fig. 1 (a) and fig. 1 (c), the electronic device is displaying the lock screen interface or is in a call state. Because the surface of the waterproof bag is worn and no longer fully transparent, the proximity light sensor of the electronic device may misjudge the waterproof bag as an approaching object, producing a detection result that an object is near, and the electronic device enters the anti-false-touch mode. Fig. 1 (b) shows an example: the electronic device enters the anti-false-touch mode and the display screen is turned off.
Of course, in other application scenarios there may likewise be cases where the proximity light sensor is interfered with and produces an inaccurate detection result. As a consequence, the electronic device may wrongly enter the anti-false-touch mode while displaying the lock screen interface or during a call, affecting normal use.
To address this problem, an embodiment of the present application provides a method for controlling the operation mode of an electronic device. The method can be applied to electronic devices such as mobile phones, tablet computers, desktop computers, laptop computers, notebook computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, personal digital assistants (PDA), wearable electronic devices, and smart watches.
Fig. 2a is a composition example of an electronic device provided in an embodiment of the present application. Taking a mobile phone as an example, as shown in fig. 2a, the electronic device 100 may include a processor 110, an internal memory 120, a camera 130, a TOF camera 140A, a structured light camera 140B, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, a display screen 170, a sensor module 180, and the like. The sensor module 180 may include an acceleration sensor 180A, a gyro sensor 180B, a proximity light sensor 180C, an ambient light sensor 180D, and the like.
It is to be understood that the illustrated structure of the present embodiment does not constitute a specific limitation to the electronic apparatus 100. In other embodiments, electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, a smart sensor hub (sensor hub), and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
Internal memory 120 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 120. The internal memory 120 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 120 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 120 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement a shooting function through the ISP, the camera 130, the video codec, the GPU, the display screen 170, the application processor, and the like.
The ISP is used to process the data fed back by the camera 130. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be located in camera 130.
The camera 130 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include 1 or N cameras 130, N being a positive integer greater than 1.
In some embodiments, the camera 130 may be configured as a front camera of the electronic device for capturing images of a human face located in front of a display screen of the electronic device.
The TOF camera 140A is used to acquire TOF data, which may also be referred to as depth data. In some embodiments, the TOF camera is arranged as a front-facing camera of the electronic device for acquiring TOF data in front of a display screen of the electronic device. For example, TOF data of a human face located in front of a display screen of an electronic device is acquired.
In some embodiments, a TOF camera includes a TOF sensor, a TOF sensor controller, a TOF light source, and a TOF light source controller.
The TOF light source controller is controlled by the TOF sensor controller to control the TOF light source. The TOF light source emits infrared (IR) light under the control of the TOF light source controller. The TOF sensor senses the infrared light reflected off an object, such as a human face, to acquire TOF data.
The structured light camera 140B is used to acquire structured light images, which may also be referred to as depth data. In some embodiments, the structured light camera is configured as a front-facing camera of the electronic device for acquiring depth data in front of a display screen of the electronic device. For example, depth data of a human face located in front of a display screen of an electronic device is acquired.
In some embodiments, the structured light camera comprises: a structured light source for emitting structured light beams outward, and a sensing device for receiving the structured light modulated and reflected by a person or object and generating a structured light image. A structured light beam can be understood as a light beam having certain structural features, such as an infrared laser.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In the sensor module 180, the acceleration sensor 180A may detect the magnitude of acceleration of the electronic device 100 in various directions (generally, three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. In some embodiments, the acceleration sensor 180A may also be used to identify the posture of the electronic device, and be applied to horizontal and vertical screen switching, pedometer, and anti-false touch.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
In some embodiments, the detection data of the acceleration sensor 180A and the gyro sensor 180B may also be used to determine the attitude of the electronic device.
The proximity light sensor 180C may include, for example, a Light Emitting Diode (LED), which may be an infrared light emitting diode, and a light detector. The electronic device 100 emits infrared light to the outside through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there are no objects near the electronic device 100.
In some embodiments, as described above, the electronic device may determine whether to enter the anti-false touch mode according to the detection result obtained by the proximity light sensor 180C.
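For illustration, the near/far decision just described can be observed from the application side on Android through the framework's sensor API. The sketch below is a minimal example under stated assumptions: the patent's own logic runs in the sensor hub and kernel layer, below this API, so this only mirrors the detection principle; comparing against the sensor's maximum range is a common convention for effectively binary proximity sensors.

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Minimal sketch of observing proximity "near/far" decisions. Registering the
// listener is the framework-level analogue of the registration request that
// starts the proximity light sensor in this application's flow.
public class ProximityMonitor implements SensorEventListener {
    private final SensorManager sensorManager;
    private final Sensor proximity;

    public ProximityMonitor(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        proximity = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
    }

    public void start() {
        sensorManager.registerListener(this, proximity, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Many proximity sensors are effectively binary: values[0] is either 0
        // or the maximum range. A value below the maximum range means an
        // object is near -- the "proximity event" in the text above.
        boolean near = event.values[0] < proximity.getMaximumRange();
        // ... hand the near/far decision to the anti-false-touch logic here.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```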
The ambient light sensor 180D is used to sense the ambient light level. The electronic device may adaptively adjust the brightness of the display screen 170 based on the perceived ambient light level. The ambient light sensor 180D may also be used to automatically adjust the white balance when taking a picture.
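Similarly, the ambient-light gate described in the first aspect (start the front camera only when ambient brightness exceeds a threshold) can be sketched against the ambient light sensor. The threshold value and the two callbacks below are illustrative assumptions, not values or interfaces from the patent.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Sketch: gate the image-based face check on ambient brightness. Register this
// listener for Sensor.TYPE_LIGHT in the same way as the proximity listener
// above. LUX_THRESHOLD and both Runnable hooks are assumed for illustration.
public class LightGate implements SensorEventListener {
    private static final float LUX_THRESHOLD = 10f; // assumed value, not from the patent
    private final Runnable startFaceCheck;  // e.g. start the front-facing camera
    private final Runnable enterAntiTouch;  // e.g. enter the anti-false-touch mode

    public LightGate(Runnable startFaceCheck, Runnable enterAntiTouch) {
        this.startFaceCheck = startFaceCheck;
        this.enterAntiTouch = enterAntiTouch;
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_LIGHT) return;
        float lux = event.values[0]; // ambient illuminance in lux
        if (lux > LUX_THRESHOLD) {
            startFaceCheck.run();
        } else {
            enterAntiTouch.run();
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```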
In addition, an operating system runs on the above components. Such as an iOS operating system, an Android operating system, a Windows operating system, etc. A running application may be installed on the operating system.
Fig. 2b is a block diagram of a software structure of the electronic device according to the embodiment of the present application. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages. As shown in FIG. 2b, the application package may include camera, gallery, calendar, phone call, map, navigation, anti-false-touch, lock screen, and always-on display (AOD) applications.
In some embodiments, applications such as the anti-false-touch, call, lock screen, and AOD applications, as well as third-party applications such as WeChat voice, may initiate a registration procedure with a processor of the electronic device to start the proximity light sensor, which then detects whether an object is approaching. An application that initiates such a registration procedure with the processor to start the proximity light sensor is referred to below as an upper-layer application.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions. As shown in FIG. 2b, the application framework layer may include a window manager, a content provider, a phone manager, a resource manager, a notification manager, a view system, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether there is a status bar, lock the screen, take screenshots, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The phone manager is used to provide communication functions of the electronic device. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, flashing an indicator light, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system. In some embodiments of the application, the application cold start may run in the Android runtime, and the Android runtime thus obtains the optimized file state parameter of the application, and then the Android runtime may determine whether the optimized file is outdated due to system upgrade through the optimized file state parameter, and return the determination result to the application management and control module.
The core library comprises two parts: one part contains the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine performs functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), two-dimensional graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library may support a variety of audio and video encoding formats, such as MPEG2, H.262, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The two-dimensional graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, a sensor driver and the like.
In some embodiments, the camera driver may drive the camera 130 to operate, and may also drive the TOF camera 140A and the structured light camera 140B.
In some embodiments, the registration process that an upper-layer application initiates with the processor of the electronic device may pass through the application framework layer down to the kernel layer, where the sensor driver drives the proximity light sensor 180C to operate.
Of course, the sensor driver may also be used to drive the acceleration sensor 180A, the gyro sensor 180B, and the ambient light sensor 180D to operate.
Although the Android system is taken as an example in the embodiment of the present application for description, the basic principle is also applicable to electronic devices based on operating systems such as iOS and Windows.
Example one
Fig. 3 illustrates a call scenario of the electronic device. In the call scenario shown in fig. 3, the electronic device may execute the method for controlling its operation mode, so as to prevent it from wrongly entering the anti-false-touch mode because it is placed in a waterproof bag, which would affect its normal operation.
In this embodiment, the electronic device may be provided with a face recognition model. The face recognition model has a function of predicting whether image data input to the face recognition model contains face key information.
In some embodiments, the face recognition model may employ a Convolutional Neural Network (CNN), a Long-Short Term Memory artificial Neural Network (LSTM), or other basic Network models.
Convolutional neural networks typically include: an input Layer, a convolutional Layer (Convolution Layer), a Pooling Layer (Pooling Layer), a Fully Connected Layer (FC), and an output Layer. In general, the first layer of a convolutional neural network is the input layer and the last layer is the output layer.
A Convolution Layer (Convolution Layer) refers to a neuron Layer for performing Convolution processing on an input signal in a convolutional neural network. In convolutional layers of convolutional neural networks, one neuron may be connected to only a portion of the neighbor neurons. In a convolutional layer, there are usually several characteristic planes, and each characteristic plane may be composed of several neural units arranged in a rectangular shape. The neural units of the same feature plane share weights, where the shared weights are convolution kernels.
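As a reference formula (standard CNN background, not specific to this patent), the output of one neural unit at position $(i,j)$ of a feature plane, with shared convolution kernel $w$ and bias $b$ applied to input $x$, is:

$$y_{i,j} = \sigma\Big(\sum_{m}\sum_{n} w_{m,n}\, x_{i+m,\, j+n} + b\Big)$$

where $\sigma$ is the activation function; sharing $w$ across all positions of the plane is exactly the weight sharing described above.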
A pooling layer (Pooling Layer) usually follows a convolutional layer. The convolutional layer can produce features of very large dimension, so the features are cut into several regions and the maximum or average value of each region is taken, yielding new features of smaller dimension.
The Fully-Connected layer combines all local features into a global feature that is used to calculate the score for each final class.
A long short-term memory (LSTM) network typically includes an input layer, a hidden layer, and an output layer. The input layer is composed of at least one input node. When the LSTM network is unidirectional, the hidden layer includes only a forward hidden layer; when it is bidirectional, the hidden layer includes a forward hidden layer and a backward hidden layer. Each input node is connected to a forward hidden-layer node and a backward hidden-layer node and outputs the input data to both. The hidden nodes in each hidden layer are connected to the output nodes and output their computation results to them, and the output nodes compute the output data from the hidden layers' results.
The face recognition model can be trained in the following way:
and constructing a face recognition original model. The original model of face recognition can select basic network models such as CNN, LSTM and the like.
Obtaining a plurality of training samples, the training samples comprising: the image samples containing the face key information and the image samples not containing the face key information are marked out whether the image samples contain the face key information or not. In some embodiments, the face key information may include at least one of image information indicating contours of eyes, nose, and mouth.
And inputting the training sample into a face recognition original model, and detecting whether the training sample contains face key information by the face recognition original model to obtain a detection result.
And calculating loss values of the detection result and the labeling result of each training sample by using a loss function to obtain the loss value of the model. In some embodiments, the loss value calculation may be performed by using a cross-entropy loss function, a weighted loss function, or the like, or may be performed by using a combination of multiple loss functions.
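For reference, the binary cross-entropy mentioned here takes the usual form, with $y_i \in \{0,1\}$ the label of training sample $i$ (contains face key information or not), $\hat{y}_i$ the model's predicted probability, and $N$ the number of samples:

$$\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\Big[y_i \log \hat{y}_i + (1-y_i)\log\big(1-\hat{y}_i\big)\Big]$$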
Judge whether the loss value of the model meets the model convergence condition.
In some embodiments, the model convergence condition may be that the loss value of the model is less than or equal to a predetermined loss threshold. That is, the loss value of the model may be compared with a loss threshold, and if the loss value of the model is greater than the loss threshold, it may be determined that the loss value of the model does not meet the model convergence condition, whereas if the loss value of the model is less than or equal to the loss threshold, it may be determined that the loss value of the model meets the model convergence condition.
It should be noted that a loss value may also be calculated separately for each of the plurality of training samples. In that case, training is considered converged only when the loss value of every training sample meets the model convergence condition; as long as the loss value of any one training sample does not meet the condition, the subsequent steps are executed.
If the loss value of the model meets the model convergence condition, training is finished. The trained model can then be used in the control method provided in this embodiment to detect whether the image data input to it contains face key information.
If the loss value does not meet the convergence condition, a parameter update is calculated from the loss value and the original face recognition model is updated accordingly. The updated model continues to process the training samples to obtain detection results, and the process repeats until the loss value meets the convergence condition.
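As a concrete stand-in for the inference step ("invoke the face recognition model on camera data"), the sketch below uses Android's legacy android.media.FaceDetector to check a frame for a face. This is only an illustrative approximation: the patent's model is a trained CNN/LSTM that additionally evaluates eye, nose, and mouth contour information.

```java
import android.graphics.Bitmap;
import android.media.FaceDetector;

// Illustrative stand-in for the trained face recognition model's inference
// step. Uses the legacy Android FaceDetector, not the patent's model.
public final class FaceCheck {
    public static boolean containsFace(Bitmap frame) {
        // FaceDetector requires an RGB_565 bitmap; it also assumes the frame
        // width is even, as documented for this API.
        Bitmap rgb565 = frame.copy(Bitmap.Config.RGB_565, false);
        FaceDetector detector =
                new FaceDetector(rgb565.getWidth(), rgb565.getHeight(), /* maxFaces= */ 1);
        FaceDetector.Face[] faces = new FaceDetector.Face[1];
        return detector.findFaces(rgb565, faces) > 0;
    }
}
```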
Referring to fig. 4, the method for controlling the operation mode of the electronic device provided by this embodiment of the application includes the following steps:
s401, the upper layer application sends a registration request to the processor, and the registration request is used for requesting registration of the proximity light sensor.
The call application sends a registration request to the processor. Requesting registration of the proximity light sensor can be understood as: the call application requests the processor to drive the proximity light sensor to operate and to deliver the proximity events reported by the proximity light sensor.
It should be noted that, as described above for the hardware components of the electronic device, the processor may include a plurality of processing units such as a sensor hub and an application processor (AP).
In some embodiments, the upper-layer application sends the registration request to the sensor hub. In other embodiments, the upper-layer application sends the registration request to the AP. Of course, the upper-layer application may also send the registration request to other processing units, such as the CPU.
In the call scenario shown in fig. 3, the upper-layer application is the call application. As shown in fig. 3, after the user dials a number on the dial interface of the electronic device and taps the button that triggers an outgoing call, the call application of the electronic device responds to the user's operation and executes the originating call flow. The call application also sends a registration request to the processor to request registration of the proximity light sensor.
S402, the processor drives the proximity light sensor to operate.
The processor receives the registration request sent by the upper-layer application and drives the proximity light sensor to operate. As embodied in the software framework of the electronic device described above, the processor controls the operation of the proximity light sensor through the sensor driver.
In some embodiments, the sensor hub receives the registration request sent by the upper-layer application and drives the proximity light sensor to operate.
S403, the proximity light sensor detects an approaching object and generates a proximity event.
As described above for the hardware components of the electronic device, the proximity light sensor can detect whether an object is approaching once it is running. It emits infrared light outward and detects the reflected infrared light. When an object comes within the set range of the proximity light sensor, the power of the detected reflected infrared light reaches a certain value, and the sensor determines that an object is within the set range of the electronic device.
In the application scenario shown in fig. 3, because its surface is not fully transparent, the waterproof bag may block the infrared light emitted by the proximity light sensor and reflect it back. The proximity light sensor receives the reflected infrared light, determines that an object is approaching, and generates a proximity event. In some embodiments, the proximity event may be represented by a high or low level; in one example, the proximity light sensor generates a high level when it determines that an object is approaching.
S404, the proximity light sensor reports the proximity event to the processor.
In some embodiments, the sensor hub's primary function is to connect and process data from various sensor devices. Thus, the proximity light sensor may report the proximity event to the sensor hub.
In other embodiments, the proximity light sensor may also report the proximity event to other processing units, such as an AP, a CPU, and the like.
S405, the processor judges whether the electronic equipment is in a call state.
If the processor determines that the electronic device is not in a call state, the process goes to step S406. If the processor determines that the electronic device is in a call state, step S408 is executed.
In the call scenario shown in fig. 3, the call application executes an originating call flow. The processor may monitor a call flow initiated by the telephony application to determine whether the electronic device is in a telephony state. Of course, in a scenario where the electronic device answers the call, the call application executes a call receiving flow, and the processor may monitor the call receiving flow of the call application to determine whether the electronic device is in a call state. Based on this, the processor may determine whether the electronic device is in a call state by monitoring an originating call flow or a receiving call flow of the call application.
In some embodiments, the telephony application is configured with a flag bit that indicates whether the telephony application is to initiate a call flow or receive a call flow. Therefore, the processor can determine whether the electronic device is in a call state by obtaining the value of the flag bit of the call application.
It should be noted that, when the sensor hub receives the proximity event reported by the proximity light sensor, the sensor hub may monitor an originating call flow or a receiving call flow of the call application to determine whether the electronic device is in a call state.
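On stock Android, an application-level approximation of this call-state check is the public telephony call state; the patent instead has the processor monitor the call application's originating/receiving call flow (or its flag bit), so the sketch below is only an assumed analogue for illustration.

```java
import android.content.Context;
import android.telephony.TelephonyManager;

// Approximate "is the device in a call" using the public telephony API.
// getCallState() is deprecated on recent Android versions and may require
// the READ_PHONE_STATE permission there; shown only as an analogue of the
// flag-bit check described above.
public final class CallStateCheck {
    public static boolean isInCall(Context context) {
        TelephonyManager tm =
                (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
        int state = tm.getCallState();
        // OFFHOOK covers an active or dialing call; RINGING an incoming call.
        return state == TelephonyManager.CALL_STATE_OFFHOOK
                || state == TelephonyManager.CALL_STATE_RINGING;
    }
}
```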
S406, the processor judges whether the electronic device is displaying the lock screen interface.
If the processor determines that the electronic device is not displaying the lock screen interface, it executes step S407 and reports the proximity event to the upper-layer application. If the processor determines that the electronic device is displaying the lock screen interface, the process goes to step S410.
When the screen-locking application of the electronic device runs, it executes the screen-locking process, and the processor can monitor this process to determine whether the electronic device is displaying the lock screen interface.
In some embodiments, the screen-locking application is configured with a flag bit indicating whether it has executed the screen-locking process; the processor therefore reads the value of this flag bit to identify whether the screen-locking application has executed the screen-locking process.
It should be noted that the processor in this step may be a sensor hub, and the sensor hub determines whether the electronic device is in the screen locking interface. Specifically, the sensor hub may monitor a screen locking process executed by the screen locking application to determine whether the mobile phone is in the screen locking interface.
When the sensor hub determines that the electronic device is not displaying the lock screen interface, it reports the proximity event to the upper-layer application. In some embodiments, when the proximity event is represented by a high or low level, the sensor hub reports that level to the upper-layer application.
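Likewise, an application-level analogue of the lock-screen check is the keyguard state; the patent reads the screen-locking application's flag bit from the sensor hub instead, so this is an assumed approximation.

```java
import android.app.KeyguardManager;
import android.content.Context;

// Approximate "is the lock screen showing" via the keyguard state, as an
// analogue of monitoring the screen-locking application's flag bit.
public final class LockScreenCheck {
    public static boolean isOnLockScreen(Context context) {
        KeyguardManager km =
                (KeyguardManager) context.getSystemService(Context.KEYGUARD_SERVICE);
        return km.isKeyguardLocked(); // true while the lock screen is displayed
    }
}
```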
It should be noted that the upper layer application that the processor reports the proximity event is proposed to send a registration request to the processor in step S401 to request to register the application of the proximity optical sensor.
On receiving the proximity event, the upper layer application may execute its subsequent flow according to the event. In some embodiments, the upper layer applications are the call application and the screen locking application; on receiving the proximity event, they may send an instruction to a processor (e.g., an AP) of the electronic device, which controls the electronic device to enter the anti-false-touch mode. In other embodiments, the upper layer application is another application, such as a third-party application, which receives the proximity event and executes a subsequent flow configured by itself. A minimal sketch of this register-and-report contract follows.
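All class and method names below are hypothetical stand-ins for the registration of step S401 and the reporting of step S407, not APIs from the patent:

```python
class ProximityReporter:
    """Hypothetical processor-side bookkeeping for proximity registrations."""

    def __init__(self):
        self._registered = []

    def register(self, app):
        """S401: an upper layer application requests registration."""
        self._registered.append(app)

    def report(self, near: bool):
        """S407/S409/S411/S414: report the proximity event upward."""
        for app in self._registered:
            app.on_proximity_event(near)

class CallApp:
    def on_proximity_event(self, near: bool):
        # Subsequent flow configured by the application itself: here, ask the
        # AP to enter the anti-false-touch mode when an object is near.
        if near:
            print("request AP: enter anti-false-touch mode")

reporter = ProximityReporter()
reporter.register(CallApp())
reporter.report(True)  # prints the request
```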
It should be further noted that fig. 4 illustrates an example of the execution sequence of step S405 and step S406, and does not limit the execution sequence of step S405 and step S406. In some embodiments, the processor may execute step S405 and step S406 in parallel, or execute step S406 first and then execute step S405.
When the processor executes step S405 and step S406 in parallel, it reports the proximity event to the upper layer application if it determines in step S405 that the electronic device is not in a call state and in step S406 that the electronic device is not in the screen locking interface.
S408, the processor identifies whether the posture of the electronic equipment is the use posture.
If the processor recognizes that the posture of the electronic device is the use posture, it executes step S409 and reports the proximity event to the upper layer application. The specific process of step S409 is the same as that of step S407 and is not repeated here.
If the processor recognizes that the posture of the electronic device is not the use posture, step S410 is executed.
In this step, the use posture can be understood as the posture of a user holding the electronic device close to the ear, which is close to an upright posture. It includes both the posture of holding the electronic device to the ear with the left hand and the posture of holding it with the right hand.
In some embodiments, the processor may calculate the posture of the electronic device using the acceleration sensor 180A described in the hardware components of the electronic device, to determine whether the posture is the use posture.
The manner in which the processor calculates the posture of the electronic device using the acceleration sensor 180A and identifies whether it is the use posture is as follows:
Test the ranges of the pitch angle (pitch) and roll angle (roll) of the electronic device when it is held to the ear with the left hand to answer a call (the first threshold for short), and the corresponding ranges when it is held to the ear with the right hand (the second threshold for short), and save the tested first threshold and second threshold in the electronic device. Here the pitch angle is the angle of rotation of the electronic device about the X-axis, and the roll angle is the angle of rotation about the Z-axis.
The processor continuously acquires the detection values of the X-axis, the Y-axis, and the Z-axis of the acceleration sensor 180A, and calculates the pitch angle (pitch) and the roll angle (roll) using the detection values of the X-axis, the Y-axis, and the Z-axis of the acceleration sensor 180A acquired each time.
The processor compares the calculated pitch angle and roll angle with the first threshold and the second threshold, respectively. If the pitch angle and roll angle remain within the first threshold for a period of time, for example 3 seconds, or remain within the second threshold for a period of time, for example 3 seconds, the processor determines that the electronic device is in the use posture. A sketch of this check follows.
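The sketch below assumes a conventional accelerometer-to-angle mapping; the axis convention, the threshold ranges, and the 3-second dwell time are illustrative assumptions, since the patent only requires that pitch and roll stay within the pre-tested first or second threshold for a period of time:

```python
import math

# Hypothetical pre-tested threshold ranges (degrees) for left- and right-hand
# near-ear postures; real values would come from the testing described above.
LEFT_THRESHOLD = {"pitch": (-80.0, -40.0), "roll": (20.0, 60.0)}
RIGHT_THRESHOLD = {"pitch": (-80.0, -40.0), "roll": (-60.0, -20.0)}
DWELL_SECONDS = 3.0

def pitch_roll_from_accel(ax, ay, az):
    """Derive pitch and roll (in degrees) from one accelerometer sample."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def _within(threshold, pitch, roll):
    return (threshold["pitch"][0] <= pitch <= threshold["pitch"][1]
            and threshold["roll"][0] <= roll <= threshold["roll"][1])

def is_use_posture(samples):
    """samples: iterable of (timestamp_s, ax, ay, az) in time order.
    True once pitch/roll stay inside either threshold for DWELL_SECONDS."""
    held_since = {"left": None, "right": None}
    for t, ax, ay, az in samples:
        pitch, roll = pitch_roll_from_accel(ax, ay, az)
        for hand, thr in (("left", LEFT_THRESHOLD), ("right", RIGHT_THRESHOLD)):
            if _within(thr, pitch, roll):
                if held_since[hand] is None:
                    held_since[hand] = t          # start of a candidate hold
                elif t - held_since[hand] >= DWELL_SECONDS:
                    return True                   # held long enough: use posture
            else:
                held_since[hand] = None           # left the range: reset timer
    return False
```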
In other embodiments, the processor may also calculate the posture of the electronic device using the acceleration sensor 180A and the gyroscope sensor 180B to determine whether the posture is the use posture.
In this step, the manner in which the processor calculates the posture using the acceleration sensor 180A and the gyroscope sensor 180B and identifies whether it is the use posture is as follows:
As in the above embodiment, the electronic device stores the ranges of the pitch angle and roll angle of the electronic device when it is held to the ear with the left hand to answer a call (the first threshold for short), and the corresponding ranges when it is held to the ear with the right hand (the second threshold for short).
The acceleration sensor 180A and the gyro sensor 180B perform detection to obtain detection data.
The processor reads the detection data of the gyro sensor 180B and the acceleration sensor 180A. The pitch angle (pitch) and the roll angle (roll) are calculated from the detection data of the acceleration sensor 180A, and the pitch angular velocity and the roll angular velocity are calculated from the detection data of the gyro sensor 180B.
A Kalman filtering algorithm is applied to the roll angle and roll angular velocity data, and to the pitch angle and pitch angular velocity data, respectively, so that the detection data of the acceleration sensor 180A and of the gyroscope sensor 180B compensate each other, reducing measurement noise and making the pitch angle and roll angle more accurate.
The processor then compares the filtered pitch angle and roll angle with the first threshold and the second threshold, respectively. If they remain within the first threshold for a period of time, for example 3 seconds, or within the second threshold for a period of time, for example 3 seconds, the processor determines that the electronic device is in the use posture. A sketch of such a filter follows.
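The fusion described above can be sketched as a one-dimensional Kalman filter per angle, in which the gyroscope rate drives the prediction and the accelerometer-derived angle serves as the measurement; the noise parameters below are illustrative assumptions, not values from the patent:

```python
class AngleKalman:
    """1-D Kalman filter fusing a gyro rate with an accel-derived angle."""

    def __init__(self, q=0.01, r=0.5):
        self.angle = 0.0   # filtered angle estimate (degrees)
        self.p = 1.0       # estimate variance
        self.q = q         # process noise (gyro drift), assumed value
        self.r = r         # measurement noise (accel jitter), assumed value

    def update(self, gyro_rate, accel_angle, dt):
        # Predict: integrate the gyroscope angular velocity over dt.
        self.angle += gyro_rate * dt
        self.p += self.q
        # Correct: blend in the accelerometer-derived angle.
        k = self.p / (self.p + self.r)          # Kalman gain
        self.angle += k * (accel_angle - self.angle)
        self.p *= (1.0 - k)
        return self.angle

# Usage: one filter per angle, fed each sensor sample.
pitch_filter, roll_filter = AngleKalman(), AngleKalman()
# pitch = pitch_filter.update(gyro_pitch_rate, accel_pitch, dt)
# roll  = roll_filter.update(gyro_roll_rate, accel_roll, dt)
```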
It should be noted that the processor in this step may be a sensor hub, which receives the detection data of the acceleration sensor 180A and the gyroscope sensor 180B and calculates the posture of the electronic device from it.
When the sensor hub recognizes that the posture of the electronic device is the use posture, it reports the proximity event to the upper layer application.
In the call scenario shown in fig. 3, the processor determines that the electronic device is in a call state but recognizes that its posture is not the use posture.
S410, the processor judges whether the ambient light brightness is larger than a threshold value.
As described in the hardware components of the electronic device, the ambient light sensor 180D is used to sense the brightness of the ambient light. The processor acquires the ambient light brightness detected by the ambient light sensor 180D and determines whether it is greater than a threshold.
The processor judges whether the ambient light brightness is greater than the threshold in order to determine whether the electronic device is in a bright enough environment. The reason is that an image captured by the camera in a dim environment cannot be effectively analyzed for face key information, so starting the camera to capture images in such an environment serves no purpose. Based on this, the camera is started to collect images only when the ambient light brightness reaches a certain threshold.
Because cameras of different models have different parameters, the ambient light brightness required to capture an image showing a face contour also differs. The threshold in this step can therefore be understood as a settable parameter whose setting criterion is: the ambient light is bright enough for the camera to capture an image containing a face contour.
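Reduced to code, the gate of step S410 is a single comparison against this per-camera parameter; the lux value below is a hypothetical placeholder, not a value from the patent:

```python
LUX_THRESHOLD = 50.0  # assumed per-camera-model parameter

def should_start_camera(ambient_lux: float) -> bool:
    """Start the front camera only if the scene is bright enough for the
    captured image to show a recognizable face contour."""
    return ambient_lux > LUX_THRESHOLD
```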
If the processor determines that the ambient light brightness is not greater than the threshold, it executes step S411 and reports the proximity event to the upper layer application. The process of step S411 is the same as that of step S407 and is not repeated here.
If the processor determines that the ambient light brightness is greater than the threshold, step S412 is executed.
In some embodiments, the processor in step S410 is also a sensor hub. The sensor hub acquires the ambient light brightness detected by the ambient light sensor and judges whether it is greater than the threshold; if it determines that the brightness is not greater than the threshold, it reports the proximity event to the upper layer application.
In the call scenario shown in fig. 3, if the sensor hub determines that the ambient light brightness is not greater than the threshold, the sensor hub reports the proximity event to the call application.
It should be noted that step S410 is optional. In some embodiments, the processor does not perform step S410: it executes step S412 directly when it recognizes in step S406 that the electronic device is in the screen locking interface, or when it recognizes in step S408 that the posture of the electronic device is not the use posture.
S412, the processor acquires image data through the camera.
When the processor determines that the posture of the electronic device is not the use posture and the ambient light brightness is greater than the threshold, or determines that the electronic device is in the screen locking interface, it starts the camera to collect images. In this step, the camera refers to the front camera of the electronic device, and the processor acquires the image data collected by it. Moreover, in some embodiments, the processor in this step may also be a sensor hub.
In some embodiments, the front-facing camera may take one or more frames of images.
The number of frames captured by the front camera is also a settable parameter and can be set according to the front camera's shooting capability: in general, the stronger the capability, the fewer frames need to be captured. The basic criterion for setting the frame count is that the captured images show a sufficiently clear contour.
S413, the processor determines whether the image data includes face key information.
The processor calls the face recognition model provided in the foregoing, inputs the image data obtained in step S412 to the face recognition model, and the face recognition model recognizes whether the image data includes face key information.
In some embodiments, the processor in this step may also be a sensor hub. The sensor hub determines whether the image data includes face key information.
When the front camera captures multiple frames of images, the sensor hub uses the face recognition model to identify whether the image data of each frame includes face key information: each frame obtained by the front camera is input into the face recognition model, which determines whether that frame contains face key information.
If the processor determines that the image data does not include face key information, it executes step S414 and reports the proximity event to the upper layer application.
Likewise, when the sensor hub determines that the image data does not include face key information, it reports the proximity event to the upper layer application. In the call scenario shown in fig. 3, if the image data captured by the front camera does not include face key information, the sensor hub reports the proximity event to the call application.
When the front camera captures multiple frames of images, the processor reports the proximity event to the upper layer application only if the face recognition model determines that no frame of image data contains face key information.
It should be further noted that the flow the upper layer application may execute on receiving the proximity event is as described in step S407 and is not repeated here.
If the processor determines that the image data includes face key information, it executes step S415 and does not report the proximity event.
When the processor determines that the image data includes face key information, it can conclude that a face is present in front of the display screen of the electronic device and infer that the user is viewing the screen, so it does not report the proximity event to the upper layer application. The significance is this: when the surface of the waterproof bag is abraded and no longer fully transparent, the proximity light sensor mistakes the bag for an approaching object and reports a proximity event to the processor. When the electronic device is in a call state or in the screen locking interface, the processor can infer from the image data captured by the front camera that the user is viewing the screen and refrain from reporting the proximity event to the upper layer application, preventing the electronic device from entering the anti-false-touch mode in a call scenario or on the screen locking interface merely because the waterproof bag interfered with the proximity light sensor.
In some embodiments, the processor in this step may also be a sensor hub.
It should be further noted that, when the front camera captures multiple frames of images, the processor does not report the proximity event if the face recognition model determines that at least one frame of image data contains face key information.
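Putting steps S405 through S415 together, the decision logic of this embodiment can be sketched as follows. Every helper on the hypothetical ctx object stands in for a mechanism described above (the call-state flag, the posture check, the ambient light gate, front-camera capture, and the face recognition model); the any() check follows the reading that a single face-bearing frame suffices to suppress reporting:

```python
def handle_proximity_event(ctx) -> bool:
    """Return True if the proximity event should be reported to the upper
    layer application (which may then enter the anti-false-touch mode)."""
    if ctx.is_in_call_state():                    # S405
        if ctx.is_use_posture():                  # S408
            return True                           # S409: genuine near-ear call
    elif not ctx.is_on_lock_screen():             # S406
        return True                               # S407: no protected scene
    if not ctx.should_start_camera():             # S410: too dark to verify a face
        return True                               # S411
    frames = ctx.capture_frames()                 # S412: front camera
    if any(ctx.has_face_key_info(f) for f in frames):   # S413
        return False                              # S415: user is viewing the screen
    return True                                   # S414: likely a real proximity
```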
Example two
In the control method of the first embodiment, whether the processor can accurately determine that the image data captured by the camera includes face key information is limited by the brightness of the environment in which the electronic device is located. When that brightness is low, the ambient light brightness detected by the ambient light sensor is not greater than the threshold, and the processor cannot determine whether the image data captured by the camera includes face key information; it therefore reports the proximity event reported by the proximity light sensor to the upper layer application. The consequence is that if the electronic device is placed in a waterproof bag and used for a call in a dark environment, the bag interferes with the proximity light sensor, the sensor detects a proximity event and reports it to the processor, the processor reports it to the call application, and the upper layer application may control the electronic device to enter the anti-false-touch mode according to the proximity event, leaving the electronic device unable to work normally.
Based on the above, the embodiment of the application provides another method for controlling the operation mode of the electronic device. In this method, a TOF camera collects depth data, and collecting depth data does not depend on bright ambient light. Therefore, even when the brightness of the environment in which the electronic device is located is low, the processor can accurately identify whether face key information is included by using the depth data collected by the TOF camera.
The method for controlling the operation mode of the electronic device provided by the embodiment of the application can also be applied to the electronic device provided by the foregoing content. In this embodiment, the electronic device may also be provided with a face recognition model. The face recognition model has a function of predicting whether the depth data input to the face recognition model contains face key information.
Similar to the face recognition model mentioned in the first embodiment, the face recognition model set in the electronic device of the present embodiment may also adopt basic Network models such as a Convolutional Neural Network (CNN), a Long-Short Term Memory artificial Neural Network (LSTM), and the like. The basic structures of the convolutional neural network and the long-short term memory artificial neural network can be as described in the first embodiment, and are not described herein again.
In this embodiment, the face recognition model may be trained in the following manner:
and constructing a face recognition original model. The original model of face recognition can select basic network models such as CNN, LSTM and the like.
Obtaining a plurality of training samples, the training samples comprising: the depth data samples containing the face key information and the depth data samples not containing the face key information are marked out whether the depth data samples contain the face key information or not. And training samples are acquired by the TOF camera. In some embodiments, the face key information may include at least one of depth data indicating contours of eyes, nose, and mouth.
And inputting the training sample into a face recognition original model, and detecting whether the training sample contains face key information by the face recognition original model to obtain a detection result.
And calculating loss values of the detection result and the labeling result of each training sample by using a loss function to obtain the loss value of the model. In some embodiments, the loss value calculation may be performed by using a cross-entropy loss function, a weighted loss function, or the like, or may be performed by using a combination of multiple loss functions.
And judging whether the loss value of the model meets the convergence condition of the model. The model convergence condition may be the same as that of the first embodiment, and is not described herein again.
And if the loss value of the model accords with the convergence condition of the model, the model training is finished. The trained model can be used in the control method for the operation mode of the electronic device provided in this embodiment to detect whether the depth data input to the model contains the face key information.
If the loss value of the model does not accord with the convergence condition of the model, calculating to obtain a parameter updating value of the model according to the loss value of the model, and updating the original face recognition model according to the parameter updating value of the model. And continuously processing the training sample by using the updated model to obtain a detection result, and continuously executing the subsequent process until the loss value of the model meets the convergence condition of the model.
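The loop can be sketched in PyTorch as below; the architecture, optimizer, and convergence threshold are illustrative choices, since the patent only requires a base model such as a CNN or LSTM, a loss function, and iteration until a convergence condition is met:

```python
import torch
import torch.nn as nn

# A small CNN standing in for the "face recognition original model";
# the layer sizes are assumptions for illustration.
model = nn.Sequential(                       # single-channel depth map in, 2 classes out
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 2),
)
loss_fn = nn.CrossEntropyLoss()              # one of the losses named in the text
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
CONVERGENCE_LOSS = 0.05                      # hypothetical convergence condition

def train(loader, max_epochs=100):
    """loader yields (depth_batch, label_batch); label 1 = has face key info."""
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for depth, label in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(depth), label)   # detection vs. labeling result
            loss.backward()                       # compute parameter updates
            optimizer.step()                      # update the model
            epoch_loss += loss.item()
        if epoch_loss / len(loader) < CONVERGENCE_LOSS:
            break                                 # model converged; training done
```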
Fig. 5 illustrates a method for controlling an operation mode of an electronic device according to an embodiment of the present application. As shown in fig. 5, the method for controlling an operation mode of an electronic device according to an embodiment of the present application includes:
S501, the upper layer application sends a registration request to the processor, where the registration request is used to request registration of the proximity light sensor.
S502, the processor drives the proximity light sensor to operate.
S503, the proximity light sensor detects that an object approaches and obtains a proximity event.
S504, the proximity light sensor reports the proximity event to the processor.
It should be noted that the specific implementation of steps S501 to S504 may be the same as that of steps S401 to S404 provided in the first embodiment, and is not repeated here.
S505, the processor judges whether the electronic equipment is in a call state.
If the processor determines that the electronic device is not in a call state, step S506 is executed. If it determines that the electronic device is in a call state, step S508 is executed.
S506, the processor judges whether the electronic equipment is in the screen locking interface.
If the processor determines that the electronic device is not in the screen locking interface, it executes step S507 and reports the proximity event to the upper layer application. If it determines that the electronic device is in the screen locking interface, step S510 is executed.
The specific implementation of step S505 and step S506 may be the same as that of step S405 and step S406 provided in the first embodiment, and is not repeated here.
Fig. 5 shows an example of the execution sequence of step S505 and step S506, and does not limit the execution sequence of step S505 and step S506. In some embodiments, the processor may execute step S505 and step S506 in parallel, or execute step S506 before executing step S505.
S508, the processor identifies whether the posture of the electronic device is the use posture.
In some embodiments, the processor may calculate the posture of the electronic device using the acceleration sensor 180A to determine whether the posture is the use posture.
In other embodiments, the processor may also calculate the posture of the electronic device using the acceleration sensor 180A and the gyroscope sensor 180B to determine whether the posture is the use posture.
The manner in which the processor calculates the posture of the electronic device by using the acceleration sensor 180A or by using the acceleration sensor 180A and the gyroscope sensor 180B can be referred to in step S408 of the first embodiment, and details thereof are not repeated here.
If the processor recognizes that the posture of the electronic device is the use posture, it executes step S509 and reports the proximity event to the upper layer application. If it recognizes that the posture is not the use posture, step S510 is executed.
S510, the processor acquires depth data through the TOF camera.
When the processor determines that the electronic device is in a call state and its posture is not the use posture, or judges that the electronic device is in the screen locking interface, it starts the TOF camera to collect depth data. The TOF camera here generally refers to the front TOF camera of the electronic device. The processor in this step may be a sensor hub.
The specific implementation of the front TOF camera for acquiring the depth data may be as described in the foregoing content of the hardware component of the electronic device, and will not be described herein again.
In some embodiments, the front TOF camera may collect one or more frames of depth data; multiple frames may be acquired in a single acquisition or in several acquisitions.
The number of depth-data frames collected by the front TOF camera is a settable parameter and can be set according to the capability of the front TOF camera: in general, the more capable the front TOF camera, the fewer frames need to be collected.
S511, the processor determines whether the depth data contains face key information.
The processor calls the face recognition model provided in the foregoing, inputs the depth data acquired in step S510 into the face recognition model, and the face recognition model recognizes whether the depth data includes face key information.
In some embodiments, the processor in this step may also be a sensor hub. The sensor hub determines whether the depth data includes face key information.
When the front TOF camera collects multiple frames of depth data, the sensor hub uses the face recognition model to identify whether each frame includes face key information: each frame of depth data collected by the front TOF camera is input into the face recognition model, which determines whether that frame contains face key information.
If the processor determines that the depth data does not include face key information, it executes step S512 and reports the proximity event to the upper layer application.
Likewise, when the sensor hub determines that the depth data does not include face key information, it reports the proximity event to the upper layer application.
If the processor determines that the depth data includes face key information, it executes step S513 and does not report the proximity event.
It should be noted that when the processor determines that the depth data includes face key information, it can conclude that a face is present in front of the display screen of the electronic device and infer that the user is viewing the screen, so it does not report the proximity event to the upper layer application. As before: when the surface of the waterproof bag is abraded and no longer fully transparent, the proximity light sensor mistakes the bag for an approaching object and reports a proximity event to the processor. When the electronic device is in a call state or in the screen locking interface, the processor can infer from the depth data collected by the front TOF camera that the user is viewing the screen and refrain from reporting the proximity event to the upper layer application, preventing the electronic device from entering the anti-false-touch mode in a call scenario or on the screen locking interface merely because the waterproof bag interfered with the proximity light sensor.
Example three
The embodiment of the application provides yet another method for controlling the operation mode of the electronic device. In this method, a structured light camera collects depth data, and collecting depth data with a structured light camera does not depend on bright ambient light. Therefore, even when the brightness of the environment in which the electronic device is located is low, the processor can accurately identify whether face key information is included by using the depth data collected by the structured light camera.
The method for controlling the operation mode of the electronic device provided by the embodiment of the application can also be applied to the electronic device provided by the foregoing content. In this embodiment, the electronic device may also be provided with a face recognition model. The face recognition model has a function of predicting whether the depth data input to the face recognition model contains face key information.
Similar to the face recognition model mentioned in the first embodiment, the face recognition model set in the electronic device of the present embodiment may also adopt basic Network models such as a Convolutional Neural Network (CNN), a Long-Short Term Memory artificial Neural Network (LSTM), and the like. The basic structures of the convolutional neural network and the long-short term memory artificial neural network can be as described in the first embodiment, and are not described herein again.
In this embodiment, the face recognition model may be trained in the following manner:
Construct a face recognition original model. The original model may use a basic network model such as CNN or LSTM.
Obtain a plurality of training samples, including depth data samples that contain face key information and depth data samples that do not; each training sample is labeled with whether it contains face key information. The training samples are collected by a structured light camera. In some embodiments, the face key information may include depth data indicating the contour of at least one of the eyes, nose, and mouth.
Input the training samples into the face recognition original model, which detects whether each training sample contains face key information to obtain a detection result.
Use a loss function to compute the loss between the detection result and the label of each training sample, obtaining the loss value of the model. In some embodiments, the loss may be computed with a cross-entropy loss function, a weighted loss function, or the like, or with a combination of multiple loss functions.
Judge whether the loss value of the model meets the convergence condition. The convergence condition may be the same as in the first embodiment and is not repeated here.
If the loss value of the model meets the convergence condition, training is complete. The trained model can be used in the control method provided in this embodiment to detect whether the depth data input to it contains face key information.
If the loss value of the model does not meet the convergence condition, compute parameter updates from the loss value and update the face recognition model accordingly, then continue processing the training samples with the updated model and repeat until the loss value meets the convergence condition.
Fig. 6 shows a method for controlling an operation mode of an electronic device according to an embodiment of the present application. As shown in fig. 6, the method for controlling an operation mode of an electronic device according to an embodiment of the present application includes:
S601, the upper layer application sends a registration request to the processor, where the registration request is used to request registration of the proximity light sensor.
S602, the processor drives the proximity light sensor to operate.
S603, the proximity light sensor detects that an object approaches and obtains a proximity event.
S604, the proximity light sensor reports the proximity event to the processor.
It should be noted that the specific implementation of steps S601 to S604 may be the same as that of steps S401 to S404 provided in the first embodiment, and is not repeated here.
S605, the processor judges whether the electronic equipment is in a call state.
If the processor determines that the electronic device is not in a call state, step S606 is executed. If it determines that the electronic device is in a call state, step S608 is executed.
S606, the processor judges whether the electronic equipment is in the screen locking interface.
If the processor determines that the electronic device is not in the screen locking interface, it executes step S607 and reports the proximity event to the upper layer application. If it determines that the electronic device is in the screen locking interface, step S610 is executed.
The specific implementation of step S605 and step S606 may be the same as that of step S405 and step S406 provided in the first embodiment, and is not repeated here.
Fig. 6 shows an example of the execution sequence of step S605 and step S606, and does not limit the execution sequence of step S605 and step S606. In some embodiments, the processor may execute step S606 and step S605 in parallel, or execute step S606 before step S605.
S608, the processor identifies whether the posture of the electronic device is the use posture.
In some embodiments, the processor may calculate the posture of the electronic device using the acceleration sensor 180A to determine whether the posture is the use posture.
In other embodiments, the processor may also calculate the posture of the electronic device using the acceleration sensor 180A and the gyroscope sensor 180B to determine whether the posture is the use posture.
The manner in which the processor calculates the posture of the electronic device by using the acceleration sensor 180A or by using the acceleration sensor 180A and the gyroscope sensor 180B can be referred to in step S408 of the first embodiment, and details thereof are not repeated here.
If the processor recognizes that the posture of the electronic device is the use posture, it executes step S609 and reports the proximity event to the upper layer application. If it recognizes that the posture is not the use posture, step S610 is executed.
S610, the processor acquires depth data through the structured light camera.
When the processor determines that the electronic device is in a call state and its posture is not the use posture, or judges that the electronic device is in the screen locking interface, it starts the structured light camera to collect depth data. The structured light camera here generally refers to the front structured light camera of the electronic device. The processor in this step may be a sensor hub.
The specific implementation of the front-mounted structured light camera for acquiring the depth data may be as described in the foregoing content of the hardware component of the electronic device, and will not be described herein again.
In some embodiments, the front structured light camera may collect one or more frames of depth data; multiple frames may be acquired in a single acquisition or in several acquisitions.
The number of depth-data frames collected by the front structured light camera is a settable parameter and can be set according to the capability of the front structured light camera: in general, the more capable the camera, the fewer frames need to be collected.
S611, the processor determines whether the depth data contains the key information of the human face.
The processor calls the face recognition model provided in the foregoing, inputs the depth data acquired in step S610 to the face recognition model, and the face recognition model recognizes whether the depth data includes face key information.
In some embodiments, the processor in this step may also be a sensor hub. The sensor hub determines whether the depth data includes face key information.
When the front structured light camera collects multiple frames of depth data, the sensor hub uses the face recognition model to identify whether each frame includes face key information: each frame of depth data collected by the front structured light camera is input into the face recognition model, which determines whether that frame contains face key information.
If the processor determines that the depth data does not include face key information, it executes step S612 and reports the proximity event to the upper layer application.
Likewise, when the sensor hub determines that the depth data does not include face key information, it reports the proximity event to the upper layer application.
If the processor determines that the depth data includes face key information, it executes step S613 and does not report the proximity event.
It should be noted that when the processor determines that the depth data includes face key information, it can conclude that a face is present in front of the display screen of the electronic device and infer that the user is viewing the screen, so it does not report the proximity event to the upper layer application. As before: when the surface of the waterproof bag is abraded and no longer fully transparent, the proximity light sensor mistakes the bag for an approaching object and reports a proximity event to the processor. When the electronic device is in a call state or in the screen locking interface, the processor can infer from the depth data collected by the front structured light camera that the user is viewing the screen and refrain from reporting the proximity event to the upper layer application, preventing the electronic device from entering the anti-false-touch mode in a call scenario or on the screen locking interface merely because the waterproof bag interfered with the proximity light sensor.
Another embodiment of the present application also provides a computer-readable storage medium having stored therein instructions, which when run on a computer or processor, cause the computer or processor to perform one or more steps of any of the methods described above.
The computer readable storage medium may be a non-transitory computer readable storage medium, for example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Another embodiment of the present application also provides a computer program product containing instructions. The computer program product, when run on a computer or processor, causes the computer or processor to perform one or more steps of any of the methods described above.

Claims (17)

1. A method for controlling an operation mode of an electronic device is applied to the electronic device, the electronic device comprises a proximity optical sensor, a front camera device and a display screen, and the method for controlling the operation mode of the electronic device comprises the following steps:
detecting an object by the proximity light sensor;
determining that the object detected by the proximity light sensor is within a set range from the electronic equipment and the electronic equipment is in a preset state; wherein the preset state comprises: the electronic equipment is in a call state, and the posture of the electronic equipment is a use posture, or a display screen of the electronic equipment presents a screen locking interface;
acquiring data through the front camera device;
and determining that the data comprises face key information, and controlling the electronic equipment not to enter a false touch prevention mode.
2. The method of claim 1, wherein the electronic device is placed within a waterproof bag, wherein a surface of the waterproof bag is not completely transparent.
3. The method according to claim 1 or 2, wherein the acquiring data by the front camera comprises:
and acquiring image data through the front camera device.
4. The method according to claim 1 or 2, wherein the acquiring data by the front camera comprises:
and acquiring depth data through the front camera device.
5. The method of claim 3, wherein prior to acquiring image data by the front-facing camera, further comprising:
determining that the ambient light brightness of the environment where the electronic device is located is greater than a threshold.
6. The method of claim 5, further comprising:
and determining that the ambient light brightness is not greater than a threshold value, and controlling the electronic equipment to enter a false touch prevention mode.
7. The method of claim 1, wherein the controlling the electronic device not to enter a false touch prevention mode comprises:
and controlling the processor of the electronic equipment not to report a proximity event to an upper layer application, wherein the proximity event is generated when an object detected by the proximity light sensor is within a set range from the electronic equipment.
8. The method of claim 1, further comprising:
and determining that the data does not comprise face key information, and controlling the electronic equipment to enter a false touch prevention mode.
9. The method of claim 1, further comprising:
and determining that the distance between the object detected by the proximity light sensor and the electronic equipment is within a set range, the electronic equipment is in a conversation state, the posture of the electronic equipment is not a use posture, and controlling the electronic equipment to enter a false touch prevention mode.
10. The method of claim 6, 8 or 9, wherein the controlling the electronic device to enter a false touch prevention mode comprises:
and controlling a processor of the electronic equipment to report a proximity event to an upper layer application, wherein the proximity event is generated when an object detected by the proximity light sensor is within a set range from the electronic equipment.
11. The method of claim 1, further comprising:
determining that the object detected by the proximity light sensor is within a set range from the electronic device and the electronic device is not in the preset state, and controlling a processor of the electronic device to report a proximity event to an upper layer application, wherein the proximity event is generated when the object detected by the proximity light sensor is within the set range from the electronic device.
12. The method of claim 1, wherein the determining that the data includes face key information comprises:
and calling a face recognition model to process the data to obtain a processing result, wherein the processing result indicates that the data comprises face key information.
13. The method according to claim 1, wherein the detecting whether the electronic device is in a call state comprises:
monitoring an initiating call flow or a receiving call flow of a call application of the electronic equipment to determine whether the electronic equipment is in a call state.
14. The method according to claim 1, wherein the detecting whether the gesture of the electronic device is a use gesture comprises:
calculating a pitch angle and a roll angle of the electronic equipment by using a detection value of an acceleration sensor of the electronic equipment;
determining whether the pitch angle and the roll angle of the electronic device continuously satisfy an attitude threshold range for a preset duration, the attitude threshold including: a pitch angle range corresponding to the use attitude, and a roll angle range corresponding to the use attitude.
15. The method of claim 1, wherein the detecting whether the electronic device is in the screen locking interface comprises:
and monitoring a screen locking process of a screen locking application of the electronic equipment to determine whether the electronic equipment is in a screen locking interface.
16. An electronic device, comprising:
one or more processors, a memory, a front-facing camera, a display screen, and a proximity light sensor;
the memory, the front-facing camera, the proximity light sensor, the display screen coupled to the one or more processors, the memory for storing computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the method of controlling the operational mode of the electronic device of any of claims 1-15.
17. A computer-readable storage medium for storing a computer program, which, when executed, is particularly adapted to implement the method of controlling an operational mode of an electronic device according to any one of claims 1 to 15.
CN202210025952.3A 2022-01-11 2022-01-11 Control method of electronic equipment operation mode, electronic equipment and readable storage medium Active CN114125148B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210025952.3A CN114125148B (en) 2022-01-11 2022-01-11 Control method of electronic equipment operation mode, electronic equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN114125148A true CN114125148A (en) 2022-03-01
CN114125148B CN114125148B (en) 2022-06-24

Family

ID=80363944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210025952.3A Active CN114125148B (en) 2022-01-11 2022-01-11 Control method of electronic equipment operation mode, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114125148B (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3115868A1 (en) * 2010-09-08 2017-01-11 Apple Inc. Camera-based orientation fix from portrait to landscape
CN106201296A (en) * 2015-04-30 2016-12-07 小米科技有限责任公司 Realize the method and device of false-touch prevention
US20170041455A1 (en) * 2015-08-06 2017-02-09 Lg Electronics Inc. Mobile terminal and method for controlling the same
CN106303021A (en) * 2016-08-12 2017-01-04 广东欧珀移动通信有限公司 Screen work condition control method and device
CN109557999A (en) * 2017-09-25 2019-04-02 北京小米移动软件有限公司 Bright screen control method, device and storage medium
CN108418953A (en) * 2018-02-05 2018-08-17 广东欧珀移动通信有限公司 The screen control method and device of terminal, readable storage medium storing program for executing, terminal
CN108919952A (en) * 2018-06-28 2018-11-30 郑州云海信息技术有限公司 A kind of control method, device, equipment and the storage medium of intelligent terminal screen
CN109195213A (en) * 2018-11-26 2019-01-11 努比亚技术有限公司 Mobile terminal screen control method, mobile terminal and computer readable storage medium
CN109784028A (en) * 2018-12-29 2019-05-21 江苏云天励飞技术有限公司 Face unlocking method and relevant apparatus

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZOLA: "How to Turn Off the Anti-False-Touch Mode on the Huawei nova4", Smart News Homepage *
Yang Shunyuan: "A Brief Analysis of Pattern Recognition Principles in Mobile Phones", Telecom World *
Zhao Yanqiu: "Looking at Future Phone Robots from the Honor Magic", IT Manager World *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116708661A (en) * 2022-10-26 2023-09-05 荣耀终端有限公司 Audio call processing method and related electronic equipment
CN116708661B (en) * 2022-10-26 2024-05-03 荣耀终端有限公司 Audio call processing method, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114125148B (en) 2022-06-24

Similar Documents

Publication Publication Date Title
CN113596242B (en) Sensor adjustment method and device, electronic equipment and storage medium
CN115866121A (en) Application interface interaction method, electronic device and computer-readable storage medium
CN110059686B (en) Character recognition method, device, equipment and readable storage medium
CN112527094A (en) Human body posture detection method and electronic equipment
WO2024016564A1 (en) Two-dimensional code recognition method, electronic device, and storage medium
CN111542802A (en) Method for shielding touch event and electronic equipment
CN114365482A (en) Large aperture blurring method based on Dual Camera + TOF
US20230351570A1 (en) Image processing method and apparatus
CN114070928B (en) Method for preventing false touch and electronic equipment
CN114510174A (en) Interface display method and electronic equipment
CN114125148B (en) Control method of electronic equipment operation mode, electronic equipment and readable storage medium
CN113723397B (en) Screen capturing method and electronic equipment
CN111249728B (en) Image processing method, device and storage medium
CN114205512A (en) Shooting method and device
CN115032640B (en) Gesture recognition method and terminal equipment
CN114283195B (en) Method for generating dynamic image, electronic device and readable storage medium
CN115437601A (en) Image sorting method, electronic device, program product, and medium
CN115150542B (en) Video anti-shake method and related equipment
CN113168257A (en) Method for locking touch operation and electronic equipment
CN110087002B (en) Shooting method and terminal equipment
CN115967851A (en) Quick photographing method, electronic device and computer readable storage medium
CN114245011B (en) Image processing method, user interface and electronic equipment
CN115827207B (en) Application program switching method and electronic equipment
CN116522400B (en) Image processing method and terminal equipment
CN116723382B (en) Shooting method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant