CN117615440B - Mode switching method and related device - Google Patents


Info

Publication number
CN117615440B
CN117615440B (application number CN202410096056.5A)
Authority
CN
China
Prior art keywords
mode
electronic device
image
ambient illuminance
images
Prior art date
Legal status
Active
Application number
CN202410096056.5A
Other languages
Chinese (zh)
Other versions
CN117615440A (en)
Inventor
文琢
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202410096056.5A priority Critical patent/CN117615440B/en
Publication of CN117615440A publication Critical patent/CN117615440A/en
Application granted granted Critical
Publication of CN117615440B publication Critical patent/CN117615440B/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 - Proximity, similarity or dissimilarity measures
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M1/00 - Substation equipment, e.g. for use by subscribers
    • H04M1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 - User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to context-related or environment-related conditions
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W52/00 - Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W52/02 - Power saving arrangements
    • H04W52/0209 - Power saving arrangements in terminal devices
    • H04W52/0251 - Power saving arrangements in terminal devices using monitoring of local events, e.g. events related to user activity
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 - Reducing energy consumption in communication networks
    • Y02D30/70 - Reducing energy consumption in communication networks in wireless communication networks


Abstract

The present application provides a mode switching method and a related device, relates to the field of terminal technologies, and helps save power consumption of an electronic device. The method includes the following steps: under a first ambient illuminance, the electronic device collects at least two frames of first images in a first mode; when the difference between the at least two frames of first images is greater than a first threshold, the electronic device switches from the first mode to a second mode, where the first mode is used to detect differences between images and the second mode is used to identify objects in images; the electronic device collects a second image in the second mode; when the electronic device completes recognition of the object in the second image, it switches from the second mode back to the first mode; under a second ambient illuminance, the electronic device collects at least two frames of third images in the first mode; and when the difference between the at least two frames of third images is greater than a second threshold, the electronic device switches from the first mode to the second mode, where the second ambient illuminance is different from the first ambient illuminance, and the second threshold is different from the first threshold.

Description

Mode switching method and related device
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a mode switching method and a related device.
Background
With the development of technology, electronic devices offer more and more functions. In some implementations, an electronic device may collect images of its surroundings through an Always On (AO) camera in order to support functions such as waking and unlocking the device.
However, at present, electronic devices are sometimes frequently woken up by mistake, which causes high power consumption, and are sometimes difficult to wake normally, resulting in poor user experience.
Disclosure of Invention
The embodiment of the application provides a mode switching method and a related device, which help avoid the electronic device being frequently woken by mistake in high-brightness scenes and being difficult to wake in low-brightness scenes, save power consumption of the electronic device, and improve user experience.
In a first aspect, an embodiment of the present application provides a mode switching method, which is applied to an electronic device, where the method includes: under a first ambient illuminance, the electronic device acquires at least two frames of first images in a first mode; the electronic equipment is switched from the first mode to a second mode under the condition that the difference of the at least two frames of first images is larger than a first threshold value, wherein the first mode is used for detecting the difference between the images, and the second mode is used for identifying objects in the images; the electronic device acquires a second image in the second mode; the electronic equipment is switched from the second mode to the first mode under the condition that the electronic equipment completes the identification of the object in the second image; under a second ambient illuminance, the electronic device acquires at least two frames of third images in the first mode; and when the difference between the at least two frames of third images is larger than a second threshold, the electronic equipment is switched from the first mode to the second mode, the second ambient illuminance is different from the first ambient illuminance, and the second threshold is different from the first threshold.
In one possible implementation, the first mode may be a motion detection (MD) mode of an image sensor in the electronic device. In the MD mode, under the first ambient illuminance, the image sensor may continuously collect at least two frames of first images and perform difference detection on the collected history images; when the difference between the at least two frames of first images is greater than the first threshold, the image sensor switches from the MD mode to a video graphics array (VGA) mode. It will be appreciated that the MD mode differs from the VGA mode in that, in the MD mode, the image sensor captures images but does not output them, and the captured images undergo difference detection inside the image sensor; whereas in the VGA mode, the image sensor captures a second image and outputs it to the processor, which identifies the object in the captured second image. The second mode may be understood as a mode in which the image sensor acquires a second image and outputs it to the processor, and the processor identifies the object in the second image. When the processor completes recognition of the object in the second image, the image sensor returns from the VGA mode to the MD mode.
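The MD/VGA switching described above can be sketched as a small state machine. This is an illustrative Python model only, not code from the patent: the class and method names, and the use of a mean absolute gray-value difference as the difference metric, are assumptions.

```python
from enum import Enum

class SensorMode(Enum):
    MD = "motion_detection"        # low power: frames stay inside the sensor
    VGA = "video_graphics_array"   # frames are output to the processor

class ImageSensor:
    """Toy model of the MD-to-VGA switching described in the text."""

    def __init__(self, diff_threshold):
        self.mode = SensorMode.MD
        self.diff_threshold = diff_threshold

    def frame_difference(self, frame_a, frame_b):
        # Mean absolute gray-value difference between two flat frames.
        return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

    def on_frames(self, frame_a, frame_b):
        # In MD mode, switch to VGA when the difference exceeds the threshold.
        if self.mode is SensorMode.MD:
            if self.frame_difference(frame_a, frame_b) > self.diff_threshold:
                self.mode = SensorMode.VGA
        return self.mode

    def on_recognition_done(self):
        # After the processor finishes object recognition, return to MD mode.
        self.mode = SensorMode.MD
```

For example, a static scene keeps the sensor in MD mode, a large frame-to-frame change triggers VGA mode, and completing recognition returns it to MD mode.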
In one possible implementation, when difference detection is performed in the MD mode, the difference between at least two frames of images may be characterized by the gray values of the images. However, the gray values of the same object acquired under different ambient illuminances may not be on the same order of magnitude, which makes it difficult to use the same difference threshold for image difference detection under different ambient illuminances.
In the embodiment of the application, the electronic device can switch from the first mode to the second mode based on different difference thresholds under different ambient illuminances. Since the differences between images detected by the electronic device in the first mode under different ambient illuminances may not be on the same order of magnitude, associating different ambient illuminances with different difference thresholds makes the difference threshold under any ambient illuminance more reasonable. Taking wake-up of the electronic device through the first mode and the second mode in a screen-off scene as an example, the difference thresholds in high-brightness and low-brightness scenes can each be matched to the ambient illuminance, thereby avoiding the situations in which the electronic device is frequently woken by mistake in a high-brightness scene or is difficult to wake in a low-brightness scene, and improving the user experience.
In certain implementations of the first aspect in combination with the first aspect, before the electronic device captures at least two frames of the first image, the method further includes: and determining the first threshold corresponding to the first ambient illuminance based on a first mapping table, wherein the first mapping table is used for indicating the mapping relation between the ambient illuminance and the difference threshold.
In the embodiment of the application, the first threshold is determined based on the first ambient illuminance and the mapping relationship between ambient illuminance and difference threshold indicated in the first mapping table. On one hand, determining the difference threshold from the first mapping table is faster than computing a difference threshold in real time; on the other hand, compared with setting a single fixed difference threshold regardless of ambient illuminance, determining the difference threshold based on the current ambient illuminance makes the threshold setting more reasonable and helps maintain the performance of the electronic device.
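A table lookup of this kind can be sketched as follows. The interval boundaries and threshold values are illustrative assumptions, not values from the patent; the patent only specifies that the table maps ambient illuminance to a difference threshold.

```python
# Hypothetical first mapping table: each ambient-illuminance interval
# (in lux, [low, high)) maps to one difference threshold.
FIRST_MAPPING_TABLE = [
    ((0, 50), 5),        # dim scene: small gray values, small threshold
    ((50, 500), 15),
    ((500, 10000), 40),  # bright scene: large gray values, large threshold
]

def lookup_threshold(ambient_illuminance, table=FIRST_MAPPING_TABLE):
    """Return the difference threshold for the interval containing the given
    ambient illuminance; fall back to the last entry above all intervals."""
    for (low, high), threshold in table:
        if low <= ambient_illuminance < high:
            return threshold
    return table[-1][1]
```

Because the table is fixed ahead of time, the lookup is a constant-time scan over a handful of intervals, which matches the point above that it is faster than deriving a threshold in real time.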
In certain implementations of the first aspect in combination with the first aspect, before the electronic device captures at least two frames of the third image, the method further includes: and under the condition that the second ambient illuminance exceeds a first range, determining a second threshold value corresponding to the second ambient illuminance based on a first mapping table, wherein the first ambient illuminance belongs to the first range.
In one possible implementation, one ambient illuminance interval in the first mapping table corresponds to one difference threshold. The first range may refer to an ambient illuminance interval to which the first ambient illuminance belongs in the first mapping table.
In the embodiment of the application, when the second ambient illuminance is still within the first range, the influence of the span between the second ambient illuminance and the first ambient illuminance on the difference threshold can be considered to be within the error tolerance of the electronic device's functions, and the first threshold need not be updated; that is, the electronic device still uses the first threshold as the difference threshold under the second ambient illuminance, so the electronic device does not need to update the difference threshold frequently, which helps maintain its stability. When the second ambient illuminance exceeds the first range, the influence of the span between the second ambient illuminance and the first ambient illuminance on the difference threshold can be considered to exceed the error tolerance of the electronic device; since the second threshold differs from the first threshold and matches the second ambient illuminance, the electronic device performs image difference detection more accurately based on the second threshold under the second ambient illuminance, which improves the performance of the electronic device.
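The update rule described above, keep the current threshold while the new illuminance stays inside the current interval and otherwise look up a new one, can be sketched as follows. The function name and the tuple-based table format are illustrative assumptions.

```python
def maybe_update_threshold(current_range, current_threshold,
                           new_illuminance, table):
    """Return (range, threshold) after observing a new ambient illuminance.
    If the new illuminance is still inside the current interval, keep the
    current threshold (within error tolerance, no update); otherwise find
    the interval and threshold matching the new illuminance in the table."""
    low, high = current_range
    if low <= new_illuminance < high:
        return current_range, current_threshold
    for (lo, hi), threshold in table:
        if lo <= new_illuminance < hi:
            return (lo, hi), threshold
    return current_range, current_threshold  # no matching interval: keep as-is
```

This keeps threshold updates infrequent (only on interval crossings), which is the stability benefit noted above.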
In certain implementations of the first aspect in combination with the first aspect, the method further includes: under the condition that the first trigger frequency is not equal to the preset trigger frequency, determining a third threshold value corresponding to the first trigger frequency based on a second mapping table, updating the first threshold value into the third threshold value, wherein the second mapping table is used for indicating the mapping relation between the trigger frequency and the difference threshold value; the first trigger frequency is positively correlated with a first number of times the electronic device is switched from the second mode to the first mode under the first ambient illuminance, and/or is positively correlated with a first duration of time the electronic device is in the first mode under the first ambient illuminance.
In the embodiment of the application, when the first trigger frequency differs from the preset trigger frequency, the difference threshold is updated based on the second mapping table by comparing the two, thereby further adjusting the difference threshold under the first ambient illuminance. This makes the difference threshold of the electronic device more reasonable, saves power consumption of the electronic device, improves the efficiency of functions such as gesture recognition and face recognition, and improves the user experience.
In a possible implementation manner, the third threshold is greater than the first threshold when the first trigger frequency is greater than a preset trigger frequency; and under the condition that the first trigger frequency is smaller than a preset trigger frequency, the third threshold value is smaller than the first threshold value.
It should be understood that the preset trigger frequency may reflect a reasonable value of the trigger frequency within a preset duration. It may be determined by a server based on reported information such as the duration for which the image sensor stays in the MD mode and the number of times the image sensor returns to the MD mode after entering the VGA mode from the MD mode; the server may be a service platform of the device manufacturer, which is not specifically limited in this application.
In the embodiment of the application, the comparison of the first trigger frequency and the preset trigger frequency can determine whether the current difference threshold needs to be adjusted higher or lower, thereby being beneficial to quickly determining the adjustment strategy of the difference threshold and improving the operation efficiency of the electronic equipment.
In one possible implementation manner, the first trigger frequency is used to reflect how frequently the electronic device switches from the second mode to the first mode, and the relationship between the first trigger frequency and the first number of times and the first duration satisfies: the first trigger frequency is a product of the first duration and a first number of times.
In the embodiment of the application, the first duration and the first number of times are easy to count during operation of the electronic device, and calculating the first trigger frequency as the product of the first duration and the first number of times reduces the difficulty of the threshold adjustment process and improves the operating efficiency of the electronic device.
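The trigger-frequency computation and the second-table adjustment can be sketched together. Per the text above, the first trigger frequency is taken as the product of the first duration and the first number of times; the band boundaries and threshold values in the table are illustrative assumptions, not values from the patent.

```python
def first_trigger_frequency(first_duration_s, first_count):
    # Per the patent's definition: the product of the time spent in the
    # first mode and the number of VGA-to-MD returns under the first
    # ambient illuminance.
    return first_duration_s * first_count

def adjust_threshold(first_threshold, trigger_freq, preset_freq, second_table):
    """If the observed trigger frequency deviates from the preset one,
    replace the first threshold with the third threshold found in the
    second mapping table (bands of trigger frequency -> threshold)."""
    if trigger_freq == preset_freq:
        return first_threshold  # no adjustment needed
    for (low, high), threshold in second_table:
        if low <= trigger_freq < high:
            return threshold
    return first_threshold
```

Consistent with the possible implementation above, a band covering frequencies above the preset value would hold a threshold larger than the first threshold, and a band below it a smaller one.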
In certain implementations of the first aspect with reference to the first aspect, the electronic device includes an image sensor and a processor, and the electronic device acquires at least two frames of first images in a first mode, including: the image sensor acquires the at least two frames of first images in a first mode; the electronic device switching from the first mode to a second mode if the difference between the at least two frames of first images is greater than a first threshold, comprising: the image sensor switches from the first mode to a second mode when detecting that the difference between the at least two frames of first images is greater than a first threshold; the electronic device acquiring a second image in the second mode, comprising: the image sensor acquires the second image in the second mode and transmits the second image to the processor; the electronic device switching from the second mode to the first mode when the electronic device completes the recognition of the object in the second image, including: the electronic device switches from the second mode to the first mode if the processor completes recognition of the object in the second image.
In certain implementations of the first aspect in combination with the first aspect, the electronic device further includes an ambient light sensor, the method further including: the ambient light sensor collects ambient light information and determines the first ambient illuminance; the image sensor obtains the first ambient illuminance from the ambient light sensor.
In the embodiment of the application, the image sensor can acquire the first ambient illuminance from the ambient light sensor, which is beneficial to saving the computing power of the image sensor.
In certain implementations of the first aspect in combination with the first aspect, the method further includes: the image sensor acquires a fourth image and acquires a gray value of the fourth image; the image sensor determines the first ambient illuminance in a third mapping table based on the gray value of the fourth image, the third mapping table being used to indicate a mapping relationship between gray value and ambient illuminance.
It should be understood that the gray value may represent the brightness level. The third mapping table may be determined through multiple tests based on the correspondence between gray values and ambient illuminance, and may be stored in the image sensor, in other storage space in the electronic device, or in a cloud server, which is not limited in the present application.
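Estimating illuminance from the gray value of a captured frame can be sketched as follows. The gray-value bands and lux values are illustrative assumptions; in practice, as noted above, such a table would be calibrated through multiple tests for a given sensor.

```python
# Hypothetical third mapping table: mean gray value of a frame
# (8-bit, [low, high)) mapped to an estimated ambient illuminance in lux.
THIRD_MAPPING_TABLE = [
    ((0, 40), 10),
    ((40, 120), 150),
    ((120, 256), 1200),
]

def estimate_illuminance(frame, table=THIRD_MAPPING_TABLE):
    """Estimate ambient illuminance from a flat list of gray values,
    using the band containing the frame's mean gray value."""
    mean_gray = sum(frame) / len(frame)
    for (low, high), lux in table:
        if low <= mean_gray < high:
            return lux
    return table[-1][1]
```

This is the self-contained path: the sensor estimates illuminance from its own frames, with no interaction with the ambient light sensor.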
In the embodiment of the application, the image sensor can determine the first ambient illuminance without interaction with other modules, which is beneficial to saving the communication power consumption among all components in the electronic equipment.
In a second aspect, an embodiment of the present application provides a mode switching device, where the mode switching device may be an electronic device, or may be a chip or a chip system in the electronic device. When the mode switching means is an electronic device, the processing unit may be a processor. The mode switching device may further include a storage unit, which may be a memory. The storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit, so as to enable the electronic device to implement a mode switching method described in the first aspect or any one of possible implementation manners of the first aspect. When the mode switching means is a chip or a system of chips within an electronic device, the processing unit may be a processor. The processing unit executes instructions stored by the storage unit to cause the electronic device to implement a mode switching method as described in the first aspect or any one of the possible implementations of the first aspect. The memory unit may be a memory unit (e.g., a register, a cache, etc.) within the chip, or a memory unit (e.g., a read-only memory, a random access memory, etc.) within the electronic device that is external to the chip.
In a third aspect, embodiments of the present application provide an electronic device comprising one or more processors and memory for storing code instructions, the one or more processors for executing the code instructions to perform the method described in the first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored therein a computer program or instructions which, when run on a computer, cause the computer to perform the method described in the first aspect or any one of the possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when run on a computer, causes the computer to perform the method described in the first aspect or any one of the possible implementations of the first aspect.
In a sixth aspect, the present application provides a chip or chip system comprising at least one processor and a communication interface, the communication interface and the at least one processor being interconnected by wires, the at least one processor being adapted to execute a computer program or instructions to perform the method described in the first aspect or any one of the possible implementations of the first aspect. The communication interface in the chip can be an input/output interface, a pin, a circuit or the like.
In one possible implementation, the chip or chip system described above further includes at least one memory, where the at least one memory stores instructions. The memory may be a storage unit within the chip, such as a register or a cache, or may be a storage unit outside the chip (e.g., a read-only memory, a random access memory, etc.).
It should be understood that, the second aspect to the sixth aspect of the present application correspond to the technical solutions of the first aspect of the present application, and the advantages obtained by each aspect and the corresponding possible embodiments are similar, and are not repeated.
Drawings
Fig. 1 is a schematic flow chart of an electronic device in a motion detection mode according to an embodiment of the present application;
Fig. 2 is a schematic view of a scenario provided in an embodiment of the present application;
Fig. 3 is a schematic view of another scenario provided in an embodiment of the present application;
Fig. 4 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
Fig. 5 is a schematic software architecture diagram of an electronic device according to an embodiment of the present application;
Fig. 6 is a schematic flow chart of a mode switching method according to an embodiment of the present application;
Fig. 7 is a schematic diagram of an experimental scenario provided in an embodiment of the present application;
Fig. 8 is a schematic diagram of another experimental scenario provided in an embodiment of the present application;
Fig. 9 is a schematic flow chart of another mode switching method according to an embodiment of the present application;
Fig. 10 is a schematic flow chart of yet another mode switching method provided by an embodiment of the present application;
Fig. 11 is a schematic diagram of a chip structure according to an embodiment of the present application.
Detailed Description
In order to facilitate the clear description of the technical solutions of the embodiments of the present application, the following simply describes some terms and techniques involved in the embodiments of the present application:
1. Always On (AO) technology: a sensing technology in which electronic devices such as mobile devices and IoT devices recognize changes in the surrounding environment and its content, and realize autonomous user-interface functions based on the acquired information.
2. AO camera: a camera supporting the AO function, which may also be referred to as an image sensor supporting the AO function, an image sensor, or the like; the name of the device having the AO function is not specifically limited in this application. The AO camera has low power consumption and can remain in a normally-on state. For example, when the electronic device is in a sleep state, an ordinary camera without the AO function does not work, but the AO camera can still capture images and perform simple processing on them. Therefore, the electronic device can realize services such as air gesture control, off-screen face detection, and one-touch sweeping based on the AO camera.
3. Other terms
In embodiments of the present application, the words "first," "second," and the like are used to distinguish between identical or similar items that have substantially the same function and effect. For example, the first chip and the second chip are merely used to distinguish different chips, without limiting their order. It will be appreciated by those skilled in the art that the words "first," "second," and the like do not limit quantity or execution order, and the items they modify are not necessarily different.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of" the following items means any combination of these items, including any combination of single or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or plural.
4. Electronic equipment
The electronic device of the embodiment of the application may include a handheld device, a vehicle-mounted device, and the like that can be woken by means of gesture recognition, face recognition, or the like. For example, some electronic devices are: a mobile phone, a tablet computer, a palmtop computer, a notebook computer, a mobile internet device (MID), a wearable device, a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device with a wireless communication function, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a 5G network, or a terminal device in a future evolved public land mobile network (PLMN), and the like, which is not limited in this application.
By way of example and not limitation, in embodiments of the application, the electronic device may also be a wearable device. A wearable device, also called a wearable smart device, is a general term for devices developed by applying wearable technology to the intelligent design of everyday wear, such as glasses, gloves, watches, clothes, and shoes. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. A wearable device is not merely a hardware device; it can also realize powerful functions through software support, data interaction, and cloud interaction. In a broad sense, wearable smart devices include full-featured, large-size devices that can realize complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to be used together with other devices such as smartphones, for example, various smart bracelets and smart jewelry for vital-sign monitoring.
In addition, in the embodiments of the application, the electronic device may also be a terminal device in an internet of things (IoT) system. IoT is an important component of future information technology development; its main technical characteristic is connecting things to a network through communication technology, thereby realizing an intelligent network of human-machine interconnection and interconnection of things.
The electronic device in the embodiments of the application may also be referred to as: a terminal device, user equipment (UE), a mobile station (MS), a mobile terminal (MT), an access terminal, a subscriber unit, a subscriber station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, or the like.
In embodiments of the present application, the electronic device or each network device includes a hardware layer, an operating system layer running on top of the hardware layer, and an application layer running on top of the operating system layer. The hardware layer includes hardware such as a central processing unit (CPU), a memory management unit (MMU), and memory (also referred to as main memory). The operating system may be any one or more computer operating systems that implement business processing through processes, such as a Linux, Unix, Android, iOS, or Windows operating system. The application layer includes applications such as a browser, an address book, word processing software, and instant messaging software.
As one possible scenario, when the electronic device is in the off-screen state, an image sensor (camera sensor) having an always-on (AO) function operates in a low-power-consumption mode, which may be, for example, a motion detection (MD) mode. Fig. 1 exemplarily shows the operation of the camera sensor in the MD mode. As shown in fig. 1, in the MD mode, the camera sensor may continuously collect n frames of images and compare the gray-value differences of at least two of the collected history frames (for example, image frame 1 and image frame 2 shown in fig. 1) to determine whether there is motion in the field of view. The determination rule may be, for example: when the gray-value difference between image frames is detected to be greater than a preset difference threshold, motion is considered to exist in the field of view and 1 is output as a trigger signal; when the difference is detected to be less than the threshold, no motion is considered to exist and 0 is output. When the gray-value difference between image frames is detected to be greater than the difference threshold, the camera sensor outputs the trigger signal and then enters a video graphics array (VGA) mode, in which the image sensor can collect and output images to a system on chip (SOC) to implement functions such as face recognition and gesture recognition.
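The MD-mode decision described above can be sketched as follows. This is a minimal illustration of the comparison rule, not the sensor's actual hardware implementation; the function name and the use of mean gray values over flattened frames are assumptions.

```python
# Assumed sketch of the MD-mode rule: compare the mean gray values of two
# low-resolution frames against a difference threshold and output
# 1 (trigger) or 0 (no motion detected).
def md_trigger(frame1, frame2, diff_threshold):
    mean1 = sum(frame1) / len(frame1)
    mean2 = sum(frame2) / len(frame2)
    return 1 if abs(mean1 - mean2) > diff_threshold else 0
```

A frame pair whose mean gray values differ by more than the threshold produces the trigger signal 1; otherwise the sensor stays in the MD mode and outputs 0.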
In one possible implementation, in the MD mode the camera sensor continuously captures image frames and processes them inside the sensor without outputting them. When it detects that the gray-value difference between image frames is greater than the difference threshold, the camera sensor outputs a trigger signal to the SOC; after receiving information from the SOC, for example information setting the resolution of the image to be output, the camera sensor outputs a higher-resolution image corresponding to that information to the SOC.
Optionally, the resolution of the image frames acquired by the camera sensor in the MD mode may be small, for example 16×12, so as to save power consumption of the electronic device.
Optionally, the camera sensor may compare the currently acquired image frame with the most recent preceding image frame, or with any one or more preceding image frames.
In one possible implementation, the camera sensor may compare the gray values of corresponding pixels in the currently acquired image frame with those of a previously acquired image frame. It should be appreciated that ambient brightness, which can be characterized by ambient illuminance, has a significant impact on the gray values of the images captured by the camera sensor. For example, for the same motion change in the same scene, when the ambient illuminance is 1 lux, the average gray value of two image frames obtained in a preset period may change from 10 to 8: a difference of 2 and a change of 20%. In a scene with an ambient illuminance of 100 lux, however, the average gray value of two image frames obtained in the preset period may change from 200 to 180: a difference of 20 and a change of 10%. It can be seen that, under different ambient illuminances, the same motion change in the same scene produces clearly different absolute gray-value changes in the frames acquired by the camera sensor. In some implementations, however, the difference threshold used for the comparison is a fixed value, which may not be compatible with scenes of different ambient illuminance. In the above example, if the difference threshold is 10, the difference of the average gray values of two image frames obtained in the preset period can hardly reach it when the ambient illuminance is 1 lux, but reaches it very easily when the ambient illuminance is 100 lux.
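The failure mode of a fixed threshold can be checked directly with the numbers from the example above; the helper name is an illustrative assumption.

```python
# Worked check of the numbers in the text: the same motion changes the
# average gray value from 10 to 8 at 1 lux, and from 200 to 180 at
# 100 lux. With a fixed difference threshold of 10, only the bright
# scene triggers; the low-light motion is missed.
FIXED_THRESHOLD = 10

def exceeds(prev_mean, curr_mean, threshold=FIXED_THRESHOLD):
    return abs(prev_mean - curr_mean) > threshold

low_light_trigger = exceeds(10, 8)      # difference 2: missed at 1 lux
bright_trigger = exceeds(200, 180)      # difference 20: fires at 100 lux
```

With the single fixed value 10, the genuine motion at 1 lux never reaches the threshold, while minor changes at 100 lux exceed it easily.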
Taking waking the electronic device in an off-screen scenario as an example, if the camera sensor has only a fixed difference threshold for comparing image frames in the MD mode, the electronic device may be difficult to wake even if the user performs multiple actions in the low-brightness scene shown in fig. 2, or may be woken by mistake although the user performs no action in the high-brightness scene shown in fig. 3, seriously affecting the user experience.
In view of this, the present application provides a mode switching method and a related apparatus that determine a difference threshold matched with the real-time ambient illuminance, so that the threshold for triggering the output signal of the image sensor can be adjusted as the ambient illuminance changes. This helps reduce false wake-ups of the electronic device in high-brightness scenes and save power consumption, and also helps reduce the difficulty of waking the electronic device in low-brightness scenes, improving the performance of the electronic device and the user experience.
In order to facilitate understanding of the embodiments of the present application, the following describes a hardware structure of an electronic device provided in the embodiments of the present application.
Fig. 4 shows a schematic structural diagram of an electronic device 400.
The electronic device 400 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an ear-piece interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface (not shown in fig. 4), etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 400. In other embodiments of the application, electronic device 400 may include more or fewer components than shown, or may combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
In the embodiment of the present application, the processor 110 may include a system on chip (SOC), which may be used to process images output by the camera sensor, for example to identify actions in the images, so as to implement services such as waking up the electronic device, off-screen gesture unlocking, air gesture control, off-screen face detection, and flip-to-scan.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. In some embodiments, electronic device 400 may include 1 or N cameras 193, N being a positive integer greater than 1.
In the embodiment of the application, the camera 193 may include a camera sensor with an AO function, which can run with low power consumption when the electronic device is in a dormant state, continuously capture images, and compare the differences between the captured images. When the difference is greater than the threshold, it can output a trigger signal and switch to the VGA mode to output images to the processor, so that the electronic device can implement services such as off-screen wake-up, off-screen gesture unlocking, air gesture control, off-screen face detection, and flip-to-scan.
The ambient light sensor 180L is used to sense ambient light level. In an embodiment of the present application, an ambient light sensor may be used to obtain ambient light information, determine ambient illuminance, and transmit the obtained ambient illuminance to the camera 193.
The software system of the electronic device 400 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of the electronic device 400 is illustrated.
Fig. 5 is a software architecture block diagram of an electronic device 400 according to an embodiment of the application.
The layered architecture divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided, from top to bottom, into four layers: an application layer (applications), an application framework layer (application framework), a hardware abstraction layer (HAL), and a kernel layer (kernel), which may also be called a driver layer. The electronic device further comprises a hardware layer.
The application layer may include a series of application packages. As shown in fig. 5, the application package may include applications such as cameras, system interfaces, and the like.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for the application of the application layer. The application framework layer includes a number of predefined functions. As shown in fig. 5, the application framework layer may include a camera access interface or the like. In some implementations, when a user clicks a camera application control through a display interface of an electronic device, a camera application of an application program layer may issue events such as startup, photographing, etc. to a hardware abstraction layer through a camera access interface of an application program framework layer.
The hardware abstraction layer may include a camera hardware abstraction and an algorithm library, wherein the algorithm library may include a gesture recognition algorithm module, a face recognition algorithm module, and the like. The camera hardware abstraction can provide a unified interface for inquiring hardware equipment for an upper-layer camera application, or can also provide data storage service for the upper-layer application; the gesture recognition algorithm module can perform gesture recognition on the image from the image sensor and output a gesture recognition result; the face recognition algorithm module can perform face recognition on the image from the image sensor and output a face recognition result.
Optionally, in the embodiment of the present application, other algorithm modules may be further included in the algorithm library and/or the hardware abstraction layer, and the gesture recognition algorithm module and the face recognition algorithm module may be the same module or different modules, which is not specifically limited in the present application.
The kernel layer is a layer between hardware and software. The kernel layer may include one or more of the following: an image sensor driver, an image processor driver, a display driver, and the like. The image sensor driver is used to drive the image sensor of the camera to collect images, the image processor driver is used to drive the image processor to process images, and the display driver is used to drive the display screen to display an application interface, a system interface, or the like.
The hardware layer may include a sensor, for example, an image sensor (also understood to be a camera), an ambient light sensor, a display screen, an image processor (graphics processing unit, GPU), and other hardware.
It should be understood that in some embodiments, layers that implement the same function may be referred to by other names, or layers that implement the functions of multiple layers may be taken as one layer, or layers that implement the functions of multiple layers may be divided into multiple layers. The embodiments of the present application are not limited in this regard.
The mode switching method according to the embodiment of the present application is described in detail below with reference to the modules in the software architecture shown in fig. 5 from the perspective of interaction between the modules.
Fig. 6 is a schematic flow chart illustrating a mode switching method 600 provided by an embodiment of the present application. The method 600 may be performed by an electronic device whose hardware architecture may be as shown in fig. 4 and whose software architecture may be as shown in fig. 5, but the application is not limited in this regard.
The method 600 includes the steps of:
After the electronic device is started, the image sensor is in an always-on working state, and works in the MD mode when the user has not started the camera application.
S601, the image sensor acquires first ambient illuminance.
In the embodiment of the present application, the manner in which the image sensor obtains the first ambient illuminance may include the following two manners:
Mode one: acquired from the ambient light sensor. Specifically, the ambient light sensor acquires ambient light sensing (ALS) information, which may be characterized by ambient illuminance, and the image sensor acquires the first ambient illuminance from the ambient light sensor.
Mode two: the image sensor acquires at least one frame of image, obtains the gray value of the image, and can look up, in a third mapping table, the first ambient illuminance corresponding to that gray value. It should be understood that the gray value can represent the brightness level; the third mapping table may be determined, through multiple tests, from the correspondence between gray value and ambient illuminance, and may be stored in the image sensor, other storage space in the electronic device, or a cloud server, which the application does not limit.
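Mode two can be sketched as a table lookup. The table entries below are illustrative placeholders, since the text states the real third mapping table is calibrated through multiple tests; each entry maps a lower bound on the mean gray value to an estimated ambient illuminance in lux.

```python
# Hypothetical "third mapping table": (gray-value lower bound, lux).
# The entries are invented for illustration; a real table would come
# from experimental calibration.
THIRD_MAP = [(0, 1), (32, 10), (96, 100), (192, 1000)]

def illuminance_from_gray(mean_gray):
    lux = THIRD_MAP[0][1]
    for lower_bound, value in THIRD_MAP:
        if mean_gray >= lower_bound:
            lux = value
    return lux
```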
S602, the image sensor determines a first threshold corresponding to the first ambient illuminance based on the first mapping table.
It should be understood that the first threshold is a threshold for detecting an image difference, which is determined based on the first ambient illuminance, and the first mapping table is used to represent a mapping relationship between the ambient illuminance and the difference threshold. Illustratively, the first mapping table may be obtained by the following experiment:
The first step: set the experimental conditions. The experimental conditions of the embodiment of the application may be, for example: the distance between the target object and the image sensor is a first distance, the target object is in a preset environment, and the frame rate of the image sensor in the MD mode is a preset frame rate. Alternatively, the target object may be a moving person, the first distance may be 80 cm, the preset environment may be an environment with a white curtain as background, and the preset frame rate may be 10 fps, but the present application is not specifically limited thereto.
The second step: under the set experimental conditions and a set ambient illuminance, the target object performs an arbitrary action for detection by the image sensor. Illustratively, the ambient illuminance may be set in a stepwise manner, for example 1 lux, 2 lux, 3 lux, and so on. Fig. 7 and fig. 8 each illustrate exemplary actions of the target object. As shown in fig. 7, at a set ambient illuminance, the target object 702 may switch from the arm-lowered state shown in a in fig. 7 to the arm-raised state shown in b in fig. 7; 701 in fig. 7 may be understood as the field of view of the image sensor, and in the experiment 701 may be a white curtain. Fig. 8 shows the target object 702 stepping into the field of view 701 of the image sensor from outside it. It should be understood that the actions shown in fig. 7 are directed to detecting arm or hand actions and the actions shown in fig. 8 to detecting body actions; the experimental actions in the embodiments of the application may also be any other actions of the target object, which the application does not specifically limit.
In one possible implementation, in the foregoing experiment, the difference threshold that enables the image sensor to send the trigger signal at a given ambient illuminance may be: the highest of the difference thresholds at which the image sensor sends trigger signals the preset number of times when the target object performs the action the preset number of times, so as to suppress false triggering caused by noise or other interference to the greatest extent. For example, at an ambient illuminance of 1 lux, the target object performs the action shown in fig. 7 ten times; the image sensor may detect ten image differences and send out ten trigger signals "1". If the highest of the difference thresholds corresponding to these ten triggers is 1, then in the first mapping table the difference threshold corresponding to an ambient illuminance of 1 lux may be 1. Alternatively, the difference threshold may be obtained from the difference in gray values of the images, which the application does not limit. Table 1 exemplarily shows the experimental conditions and results when the experiment is performed with the actions shown in fig. 7.
Table 1
Alternatively, the first mapping table may include the above experimental results, but the present application is not limited thereto.
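An S602-style lookup in the first mapping table can be sketched as follows, using the interval presentation described later in the text (1–1.99 lux corresponds to threshold 1, 2–2.99 lux to 2, 3–3.99 lux to 3); the entries and function name are illustrative assumptions.

```python
# Hypothetical "first mapping table": (interval lower bound in lux,
# difference threshold). Entries mirror the interval example in the text.
FIRST_MAP = [(1.0, 1), (2.0, 2), (3.0, 3)]

def threshold_for(lux):
    threshold = FIRST_MAP[0][1]
    for lower_bound, value in FIRST_MAP:
        if lux >= lower_bound:
            threshold = value
    return threshold
```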
S603, the image sensor collects at least two frames of first images and detects them through a preset difference comparison algorithm.
It should be understood that the at least two first images may be acquired by the image sensor in the MD mode, and the preset difference comparison algorithm is the algorithm with which the image sensor detects differences between images. The image sensor may compare the currently acquired image frame with the most recently acquired historical image frame, or with any one or more historical image frames; the comparison may be of the gray-value differences of individual pixels in the at least two images, or of the overall gray-value difference of the entire images, which the application does not specifically limit.
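The two comparison granularities mentioned above can be sketched as follows; frames are assumed to be flattened 1-D lists of gray values, and the function names are illustrative.

```python
# Whole-image comparison: difference of the mean gray values.
def global_diff(frame_a, frame_b):
    return abs(sum(frame_a) / len(frame_a) - sum(frame_b) / len(frame_b))

# Per-pixel comparison: largest gray-value difference at any single pixel.
def pixelwise_max_diff(frame_a, frame_b):
    return max(abs(a - b) for a, b in zip(frame_a, frame_b))
```

A localized change (one pixel jumping from 0 to 100) yields a small whole-image difference but a large per-pixel difference, so the choice of granularity affects sensitivity to small motions.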
It should also be appreciated that in the MD mode the image sensor continuously captures images and detects differences between them. Regardless of whether the difference between images reaches the difference threshold, the image sensor may continuously acquire the ambient illuminance and dynamically adjust the difference threshold based on the first mapping table (for example, S615 to S617 described below), and may acquire the trigger frequency and dynamically adjust the difference threshold based on a second mapping table (for example, the method 900 described below); the application is not limited in this regard.
In one possible implementation, if the image sensor detects that the difference of the at least two frames of the first image is greater than the first threshold, it sends a trigger signal to the SOC; specifically, the image sensor may perform S604.
S604, the image sensor sends a trigger signal (trigger) to the algorithm library, and correspondingly, the algorithm library receives the trigger signal.
It should be understood that the algorithm library runs in a hardware abstraction layer on the SOC, and the algorithm library may include a gesture recognition algorithm module and/or a face recognition algorithm module, and the algorithm library executes S605 after receiving the trigger.
Optionally, the trigger signal may be a bit signal: for example, 1 when the image sensor detects that the difference between the at least two frames of the first image is greater than the first threshold, and 0 when the difference is less than or equal to the first threshold. The application is not limited thereto; any signal capable of expressing the above meaning is within the scope of the application.
It should be appreciated that an image sensor output of 0 indicates that the image sensor has not detected a sufficient difference between images and is still in the MD mode.
S605, the algorithm library sends first information to the image sensor, and correspondingly, the image sensor receives the first information.
As one possible scenario, the electronic device is in the off-screen state. After the algorithm library receives the trigger signal in S604, it may further trigger a power management module (not shown in fig. 5) in the electronic device to turn on the power of the display screen and light it up, which may also be understood as the process of waking up the electronic device. Optionally, after the display screen is lit, the algorithm library may perform S605 and subsequent steps to implement functions such as air-gesture unlocking of the electronic device and face recognition unlocking.
As another possible scenario, the electronic device is in a bright-screen, unlocked state. After the algorithm library receives the trigger signal in S604, it may execute S605 to implement, through the gesture recognition function, air-gesture page turning, air-gesture screenshot, air-gesture opening/closing of applications, and air-gesture start/fast-forward/rewind/pause/switch/close of video or music playback, among other functions.
In one possible implementation, the first information may be configuration information (setting), which may be information of resolution, exposure parameters, etc. of an image required for the gesture recognition and/or face recognition function, and the image sensor performs S606 after receiving the configuration information.
S606, the image sensor acquires a second image based on the configuration information.
S607, the image sensor transmits a second image to the algorithm library, and correspondingly, the algorithm library receives the second image.
And S608, the algorithm library identifies the second image based on a preset algorithm.
In one possible implementation, the algorithm library may identify whether the second image includes a preset object based on a preset algorithm, and if it is identified that the second image includes the preset object, S609 is performed. In the case where it is recognized that the object in the second image does not contain the preset object, S614 is performed.
It can be understood that the preset object may refer to a preset gesture set in the gesture recognition function, or may refer to a face entered by a user and used for unlocking the mobile phone, which is not limited in the present application.
S609, the algorithm library determines a first picture, the first picture being the picture that corresponds to the preset object identified in the second image.
It should be understood that in the scenario of off-screen unlocking of the mobile phone, the first picture may be the system desktop, or the picture shown when the user last locked the screen; in a face recognition payment scenario, the first picture may be the transaction success or failure picture of the payment application. The specific content of the first picture is not limited by the application.
It should be further understood that the first frame may be determined independently by the algorithm library, or may be determined by the algorithm library through interaction with other modules, and the determining manner of the first frame is not specifically limited in the present application.
S610, the algorithm library outputs a first picture to the display driver, and correspondingly, the display driver receives the first picture.
S611, the display driver transmits the first picture to the display screen.
S612, the display screen displays the first picture.
In one possible implementation, after the display screen displays the first screen, S613 is performed.
And S613, the display screen sends second information to the algorithm library through the display driver, and correspondingly, the algorithm library receives the second information.
Optionally, the second information may be used to indicate that the display is successful or failed, which is not limited in this regard by the present application.
S614, the algorithm library sends third information to the image sensor, and correspondingly, the image sensor receives the third information.
In one possible implementation, the third information may indicate that recognition based on the second image is completed (done), and may be used to instruct the image sensor to enter the MD mode from the VGA mode; the image sensor subsequently executes S615.
S615, the image sensor acquires second ambient illuminance.
It should be understood that the manner in which the image sensor obtains the second ambient illuminance may be similar to the manner in which the first ambient illuminance is obtained in S601 described above, and will not be described here again.
In one possible implementation, the image sensor compares the first ambient illuminance with the second ambient illuminance, and in the event that it is determined that the second ambient illuminance is different from the first ambient illuminance, S616 is performed.
S616, the image sensor judges whether the second ambient illuminance is within the ambient illuminance interval to which the first ambient illuminance belongs. If so, return to S615; if not, perform S617.
Optionally, the mapping relationship between ambient illuminance and difference threshold in the first mapping table may be presented in the following two manners, but the present application is not limited thereto.
The first presentation mode: one ambient illuminance corresponds to one difference threshold, for example, 1 lux corresponds to 1, 2 lux corresponds to 2, 3 lux corresponds to 3, and so on.
The second presentation mode: one ambient illuminance interval corresponds to one difference threshold, for example, the difference threshold corresponding to 1–1.99 lux is 1, that corresponding to 2–2.99 lux is 2, that corresponding to 3–3.99 lux is 3, and so on.
For example, under either presentation of the first mapping table, if the first ambient illuminance is 1.25 lux and the second ambient illuminance is 1.75 lux, the second ambient illuminance can be considered to be within the ambient illuminance interval to which the first ambient illuminance belongs, and the current difference threshold does not need to be updated, which helps maintain the stability of the electronic device. If the first ambient illuminance is 1.25 lux and the second ambient illuminance is 2.75 lux, the second ambient illuminance is not within that interval; in the second presentation mode the second threshold may then be 2, while in the first presentation mode the second threshold may be 2 or 3. In the first presentation mode it may further be set whether the threshold takes the lower or the higher value: when set to the lower value the second threshold may be 2, and when set to the higher value it may be 3, but the present application is not limited thereto.
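The S616/S617 decision under the interval presentation can be sketched as follows; the mapping-table entries and function names are illustrative assumptions, with the intervals mirroring the example above.

```python
# Hypothetical first mapping table in the interval presentation:
# (interval lower bound in lux, difference threshold).
FIRST_MAP = [(1.0, 1), (2.0, 2), (3.0, 3)]

def threshold_for(lux):
    threshold = FIRST_MAP[0][1]
    for lower_bound, value in FIRST_MAP:
        if lux >= lower_bound:
            threshold = value
    return threshold

def maybe_update(first_lux, second_lux, current_threshold):
    # Same interval: keep the current threshold (S616 "yes", back to S615).
    if threshold_for(second_lux) == threshold_for(first_lux):
        return current_threshold
    # Interval changed: look up and adopt the second threshold (S617).
    return threshold_for(second_lux)
```

With the text's example, moving from 1.25 lux to 1.75 lux leaves the threshold at 1, while moving to 2.75 lux updates it to 2.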
S617, the image sensor determines a second threshold value corresponding to the second ambient illuminance based on the first map, and updates the first threshold value to the second threshold value.
It should be understood that, after the first threshold is updated to the second threshold, the image sensor may continue to acquire the ambient illuminance at the current moment, acquire images, and detect image differences; the subsequent steps are similar to S602 to S617 and are not repeated.
In the embodiments of the application, the image sensor can update the difference threshold based on the acquired ambient illuminance, so that the difference threshold governing the trigger signal sent by the image sensor need not be one fixed value across different ambient illuminances. A difference threshold determined from the ambient illuminance is more reasonable than a preset fixed difference threshold, and the trigger information output by the image sensor when detecting image differences is therefore more accurate. Thus, in a screen-off scenario, the electronic device avoids being woken by mistake, with increased power consumption, because a preset fixed difference threshold is too low to match the current ambient illuminance; it likewise avoids being difficult to wake in a bright scenario because a preset fixed difference threshold is too high to match the current ambient illuminance. In scenarios such as gesture recognition and face recognition, this also helps improve recognition accuracy and user experience.
Further, after determining the difference threshold corresponding to a certain ambient illuminance, the difference threshold may be further adjusted based on the second mapping table.
Fig. 9 illustrates a schematic flow chart of a second mapping table-based threshold adjustment method 900, and the method 900 may be performed by the electronic device 400 described above, but the application is not limited thereto. In the embodiment of the present application, the image sensor in the electronic device executing the method 900 is taken as an example for illustration.
The method 900 includes the steps of:
In one possible implementation manner, after the image sensor determines the first threshold by executing S601 to S602 in the above method 600, MD detection is performed based on the first threshold, and further, the image sensor may execute S901.
S901, an image sensor determines a first trigger frequency, where the first trigger frequency reflects the frequency at which the image sensor switches from a VGA mode to an MD mode within a preset duration; the first trigger frequency is positively correlated with a first number of times the image sensor switches from the VGA mode to the MD mode under a first ambient illuminance and/or positively correlated with a first duration for which the image sensor is in the MD mode under the first ambient illuminance.
In one possible implementation manner, while operating with the ambient illuminance at the first ambient illuminance and the difference threshold at the first threshold, the image sensor may record a first duration for which it is in the MD mode within the preset duration, and a first number of times it returns to the MD mode after entering the VGA mode from the MD mode.
In one possible implementation, the first trigger frequency may be the product of the first duration and the first number of times. It should be understood that the first trigger frequency may be treated as a dimensionless quantity, i.e., the units may be ignored when multiplying the first duration by the first number of times.
It should be understood that the preset duration may refer to any historical period during which the image sensor operates with the ambient illuminance at the first ambient illuminance and the difference threshold at the first threshold. Optionally, the preset duration may be set to 10 seconds or any other value, which is not particularly limited by the present application. It should also be understood that the image sensor may determine the trigger frequency within the preset duration each time it operates under these conditions.
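A minimal sketch of the statistic in S901, assuming the dimensionless product form described above; the function name and the example window length are illustrative:

```python
PRESET_WINDOW_S = 10.0  # example value from the text; any value is allowed


def trigger_frequency(md_duration_s: float, md_return_count: int) -> float:
    """First trigger frequency as the product of the first duration
    (time spent in MD mode within the preset window) and the first
    number of times (VGA -> MD returns). Treated as dimensionless:
    the units are ignored when multiplying."""
    return md_duration_s * md_return_count
```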
S902, determine whether the first trigger frequency is equal to the preset trigger frequency; if not, execute S903; if so, return to S901.
It should be understood that the preset trigger frequency may reflect a reasonable value of the trigger frequency within the preset duration. It may be determined by the server based on the durations for which image sensors are in the MD mode under various ambient illuminances, and the numbers of times image sensors return to the MD mode after entering the VGA mode from the MD mode, as reported by users' electronic devices, and then sent to the electronic device.
S903, determining a third threshold corresponding to the first trigger frequency based on a second mapping table, and updating the first threshold to the third threshold, wherein the second mapping table is used for indicating the mapping relation between the difference threshold and the trigger frequency.
Illustratively, the second mapping table may be derived based on the following:
First step: the server establishes an initial mapping table indicating the mapping relation between the difference threshold and the trigger frequency, and pushes the initial mapping table to users' electronic devices. It should be understood that the values and correspondences of the difference threshold and the trigger frequency in the initial mapping table may be empirical data from developers, or may be random data and random correspondences generated by the server; the present application is not limited in this regard.
Second step: the electronic device reports work log information of the image sensor in the MD mode and the VGA mode to the server. Optionally, the work log information may include the duration for which the image sensor is in the MD mode within the preset duration under a certain ambient illuminance, the number of times the image sensor returns to the MD mode after entering the VGA mode from the MD mode, and user experience information fed back by the user, such as the power consumption of the image sensor and the accuracy of gesture recognition and/or face recognition. It can be understood that the work log information is collected only with the user's permission: before the information is sent, a clear prompt is shown on the electronic device, allowing the user to choose whether to agree to the collection.
Third step: the server analyzes the work log information reported by multiple users' electronic devices and updates the initial mapping table into a second mapping table based on the analysis result. It should be understood that the purpose of analyzing the log information reported by multiple users' electronic devices is to find the correspondence between the trigger frequency and the difference threshold; optionally, the analysis methods include, but are not limited to, neural network algorithms, artificial intelligence algorithms, and the like, and the present application does not limit the specific analysis method.
Fourth step: the electronic device receives the second mapping table from the server. It should be appreciated that the electronic device may update the mapping table from time to time through interaction with the server; that is, the second mapping table is not constant and may itself be updated.
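The four-step loop above can be sketched as a toy server-side aggregation. The log format and the aggregation rule (median threshold per trigger-frequency bucket) are assumptions made for illustration; the patent only requires that some analysis method (e.g., a neural network) be used:

```python
from collections import defaultdict
from statistics import median


def update_mapping_table(logs):
    """Aggregate reported work logs into a new second mapping table.

    logs: iterable of (trigger_frequency_bucket, threshold_that_worked)
    pairs, as hypothetically extracted from devices' work log reports.
    Returns {frequency_bucket: representative_threshold}.
    """
    buckets = defaultdict(list)
    for freq_bucket, threshold in logs:
        buckets[freq_bucket].append(threshold)
    # One representative difference threshold per trigger-frequency bucket.
    return {bucket: median(vals) for bucket, vals in buckets.items()}
```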
Optionally, the server may refer to a cloud server maintained by an electronic device manufacturer, where the cloud server may implement an overall upgrade of an electronic device system and/or a partial upgrade of a system function, but the application is not limited thereto specifically.
In one possible implementation, in the case that the first trigger frequency is greater than the preset trigger frequency, the third threshold is greater than the first threshold; that is, the difference threshold under the first ambient illuminance is increased. A first trigger frequency greater than the preset trigger frequency means that the first threshold obtained from the first mapping table makes the electronic device trigger too often under the first ambient illuminance, so the image sensor may frequently switch from the MD mode to the VGA mode. For example, after the user swings the electronic device, the image sensor outputs an image to the SOC, but the SOC recognition result is NO and the sensor switches back to the MD mode. Updating the first threshold to a third threshold greater than the first threshold reduces the likelihood of switching from the MD mode to the VGA mode, reduces the number of times the image sensor outputs images to the SOC, and saves power consumption of the electronic device.
In another possible implementation, in the case that the first trigger frequency is less than the preset trigger frequency, the third threshold is less than the first threshold; that is, the difference threshold under the first ambient illuminance is reduced. A first trigger frequency less than the preset trigger frequency means that the first threshold determined from the first mapping table makes the electronic device trigger too rarely under the first ambient illuminance; that is, small-amplitude motion within the field of view of the image sensor should be detected but is missed because the difference threshold is set too high. Updating the first threshold to a third threshold lower than the first threshold improves the sensitivity of the image sensor to differences between acquired images in the MD mode and helps improve the performance of the electronic device.
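The direction of the adjustment in the two implementations above can be sketched as follows. In the actual method, S903 looks the third threshold up in the server-maintained second mapping table; here a hypothetical unit step stands in for that lookup, keeping only the direction logic described in the text:

```python
def adjust_threshold(first_threshold: int,
                     trigger_freq: float,
                     preset_freq: float,
                     step: int = 1) -> int:
    """Return the third threshold given the first trigger frequency.

    Too-frequent triggering raises the difference threshold (fewer
    MD -> VGA switches, lower power); too-rare triggering lowers it
    (more sensitivity to small motion). Equal frequencies keep the
    current threshold (S902 returns to S901).
    """
    if trigger_freq > preset_freq:
        return first_threshold + step  # desensitize: fewer false triggers
    if trigger_freq < preset_freq:
        return first_threshold - step  # sensitize: catch small motions
    return first_threshold
```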
In the embodiment of the application, when the first trigger frequency differs from the preset trigger frequency, the difference threshold is updated based on the second mapping table by comparing the two frequencies, realizing a further adjustment of the difference threshold under the first ambient illuminance. This makes the difference threshold of the electronic device more reasonable, saves power consumption, improves the efficiency of functions such as gesture recognition and face recognition, and improves user experience.
The mode switching method according to the embodiment of the present application is described in detail below by way of specific examples. The following embodiments may be combined with each other or implemented independently, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 10 is a schematic flowchart of a mode switching method 1000 according to an embodiment of the present application. The method 1000 may be performed by an electronic device, the hardware structure of which may be as shown in fig. 4, and the software structure of which may be as shown in fig. 5, but the application is not limited thereto.
The method 1000 comprises the steps of:
S1001, under a first ambient illuminance, the electronic device acquires at least two frames of first images in a first mode.
S1002, when the difference of at least two frames of first images is larger than a first threshold value, the electronic device is switched from a first mode to a second mode, wherein the first mode is used for detecting the difference between the images, and the second mode is used for identifying objects in the images.
S1003, the electronic device collects a second image in a second mode.
S1004, when the electronic device completes the recognition of the object in the second image, the electronic device switches from the second mode to the first mode.
S1005, under the second ambient illuminance, the electronic device acquires at least two frames of third images under the first mode.
S1006, when the difference of the at least two frames of third images is larger than a second threshold, the electronic device is switched from the first mode to a second mode, the second ambient illuminance is different from the first ambient illuminance, and the second threshold is different from the first threshold.
In the embodiment of the application, the electronic device can switch from the first mode to the second mode based on different difference thresholds under different ambient illuminances. Since the differences between images detected by the electronic device in the first mode under different ambient illuminances may not be of the same order of magnitude, mapping different ambient illuminances to different difference thresholds makes the difference threshold under any ambient illuminance more reasonable. Taking the wake-up of the electronic device through the first mode and the second mode in a screen-off scenario as an example, the difference thresholds in high-brightness and low-brightness scenarios can each be matched to the ambient illuminance, avoiding frequent false wake-ups in low-brightness scenarios and difficulty waking the device in high-brightness scenarios, thereby improving user experience.
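The switching logic of S1001 to S1006 can be sketched as a minimal state machine. Frame acquisition, the difference metric, and object recognition are stubbed out, and the illuminance-to-threshold table is a hypothetical callable; only the switching rule follows the text:

```python
FIRST_MODE, SECOND_MODE = "detect_difference", "recognize_object"


class ModeSwitcher:
    """Toy model of the mode switching in method 1000."""

    def __init__(self, threshold_table):
        # threshold_table: callable mapping ambient illuminance (lux)
        # to the difference threshold for that illuminance (first
        # mapping table stand-in).
        self.table = threshold_table
        self.mode = FIRST_MODE

    def on_frames(self, lux, frame_difference):
        """First mode: compare the difference of at least two frames to
        the threshold for the current ambient illuminance (S1002/S1006)."""
        if self.mode == FIRST_MODE and frame_difference > self.table(lux):
            self.mode = SECOND_MODE
        return self.mode

    def on_recognition_done(self):
        """Second mode: fall back to the first mode once the object in
        the second image has been recognized (S1004)."""
        self.mode = FIRST_MODE
        return self.mode
```

For example, with the illustrative table `lambda lux: max(1, int(lux))`, a frame difference of 2 at 1.25 lux (threshold 1) triggers the switch, while the same difference at 2.75 lux (threshold 2) does not.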
As an alternative embodiment, the electronic device includes an image sensor and a processor. That the electronic device acquires at least two frames of first images in the first mode includes: the image sensor acquires the at least two frames of first images in the first mode. That the electronic device switches from the first mode to the second mode in the case that the difference between the at least two frames of first images is greater than the first threshold includes: the electronic device switches from the first mode to the second mode when the image sensor detects that the difference between the at least two frames of first images is greater than the first threshold. That the electronic device acquires a second image in the second mode includes: the image sensor acquires the second image in the second mode and transmits the second image to the processor. That the electronic device switches from the second mode to the first mode in the case that the electronic device completes recognition of the object in the second image includes: the electronic device switches from the second mode to the first mode when the processor completes recognition of the object in the second image.
It should be understood that the first mode may be the MD mode of an always-on image sensor in the electronic device. In this mode, the image sensor under the first ambient illuminance may continuously acquire at least two frames of first images and perform difference detection on the acquired images; when the difference between the at least two frames of first images is greater than the first threshold, the image sensor switches from the MD mode to the VGA mode. The MD mode differs from the VGA mode in that, in the MD mode, the image sensor acquires images but does not output them, and difference detection on the acquired images is performed inside the image sensor; in the VGA mode, the image sensor acquires a second image and outputs the second image to the processor, which recognizes the object in the acquired second image. The second mode may thus be understood as a mode in which the image sensor acquires a second image and outputs it to the processor, and the processor recognizes the object in the second image. When the processor completes recognition of the object in the second image, the image sensor switches from the VGA mode back to the MD mode.
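The in-sensor difference detection described above can be illustrated with a toy metric. The patent does not specify how the difference between frames is computed; a sum of absolute per-pixel differences over grayscale frames is assumed here purely for illustration:

```python
def frame_difference(frame_a, frame_b):
    """Sum of absolute per-pixel differences between two equal-size
    grayscale frames (lists of rows of pixel values). Assumed metric,
    not the patent's actual one."""
    return sum(abs(a - b)
               for row_a, row_b in zip(frame_a, frame_b)
               for a, b in zip(row_a, row_b))


def should_switch_to_vga(frames, threshold):
    """MD-mode check: switch to VGA mode when any two consecutive
    frames differ by more than the current difference threshold."""
    return any(frame_difference(f0, f1) > threshold
               for f0, f1 in zip(frames, frames[1:]))
```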
As an alternative embodiment, the electronic device may obtain the first ambient illuminance through an ambient light sensor provided in the electronic device, or based on the gray value of a fourth image acquired by the image sensor and a third mapping table; this is similar to the description at S601 in the method 600 above and is not repeated here.
As an alternative embodiment, before the electronic device acquires the at least two frames of the first image, the method further comprises: and determining a first threshold corresponding to the first ambient illuminance based on a first mapping table, wherein the first mapping table is used for indicating the mapping relation between the ambient illuminance and the difference threshold.
In one possible implementation, the determination of the first mapping table may be similar to the description at S602 in the method 600 above, and will not be repeated here. It will be appreciated that the difference threshold corresponding to any ambient illuminance in the first mapping table is matched to the data magnitude of the image difference for that ambient illuminance.
In the embodiment of the application, the first threshold is determined based on the first ambient illuminance and the mapping relation between ambient illuminance and difference threshold indicated in the first mapping table. Compared with the prior-art approach of setting a single fixed difference threshold regardless of ambient illuminance, determining the difference threshold from the current ambient illuminance helps make the threshold setting more reasonable and further helps maintain the performance of the electronic device.
As an alternative embodiment, before the electronic device acquires the at least two frames of the third image, the method further comprises: and when the second ambient illuminance exceeds the first range, determining a second threshold corresponding to the second ambient illuminance based on the first mapping table, wherein the first ambient illuminance belongs to the first range.
In one possible implementation, one ambient illuminance interval in the first mapping table corresponds to one difference threshold. The first range may refer to an ambient illuminance interval to which the first ambient illuminance belongs in the first mapping table.
In the embodiment of the application, when the second ambient illuminance is still within the first range, the influence of the span between the second ambient illuminance and the first ambient illuminance on the difference threshold can be considered to be within the error tolerance of the electronic device's functions, and the first threshold need not be updated; that is, the electronic device still uses the first threshold as the difference threshold under the second ambient illuminance. The electronic device therefore does not need to update the difference threshold frequently, which helps maintain its stability. When the second ambient illuminance exceeds the first range, the influence of the span on the difference threshold can be considered to be beyond the error tolerance of the electronic device. With a second threshold that differs from the first threshold and matches the second ambient illuminance, the electronic device performs image difference detection more accurately under the second ambient illuminance, improving the performance of the electronic device.
As an alternative embodiment, the method further comprises: under the condition that the first trigger frequency is not equal to the preset trigger frequency, determining a third threshold value corresponding to the first trigger frequency based on a second mapping table, updating the first threshold value into the third threshold value, wherein the second mapping table is used for indicating the mapping relation between the trigger frequency and the difference threshold value; the first trigger frequency is positively correlated with a first number of times the electronic device switches from the second mode to the first mode under the first ambient illuminance, and/or is positively correlated with a first duration of time the electronic device is in the first mode under the first ambient illuminance.
In one possible implementation, the third threshold is greater than the first threshold in the case where the first trigger frequency is greater than the preset trigger frequency; and under the condition that the first trigger frequency is smaller than the preset trigger frequency, the third threshold value is smaller than the first threshold value.
In one possible implementation manner, the first trigger frequency is used to reflect a frequency of switching the electronic device from the second mode to the first mode, and the relationship between the first trigger frequency and the first number of times and the first duration satisfies: the first trigger frequency is a product of a first duration and a first number of times.
It should be understood that the preset trigger frequency may reflect a reasonable value of the trigger frequency within the preset duration. It may be determined by the server based on the durations for which image sensors are in the MD mode under various ambient illuminances, and the numbers of times image sensors return to the MD mode after entering the VGA mode from the MD mode, as reported by users' electronic devices, and then sent to the electronic device.
Alternatively, the manner of determining the second mapping table may be similar to that described at S903 in the method 900 above, and will not be described here again.
In the embodiment of the application, the difference threshold value is updated based on the second mapping table by comparing the first trigger frequency with the preset trigger frequency under the condition that the first trigger frequency is different from the preset trigger frequency, so that the further adjustment of the difference threshold value under the first ambient illuminance is realized, the difference threshold value of the electronic equipment is more reasonable, the power consumption of the electronic equipment is saved, and meanwhile, the efficiency of functions such as gesture recognition and face recognition of the electronic equipment is improved, and the user experience is improved.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the embodiments of the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide a corresponding operation entry for the user to select authorization or rejection.
It should be noted that, the names of the modules according to the embodiments of the present application may be defined as other names, so long as the functions of each module can be implemented, and the names of the modules are not specifically limited.
It should be appreciated that, in order to implement the functions described in the above embodiments, the electronic device may include corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the present application may be implemented in hardware or a combination of hardware and computer software, as the method steps of the examples described in connection with the embodiments disclosed herein. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional modules of the device for realizing the method according to the method example, for example, each functional module can be divided corresponding to each function, and two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
Fig. 11 is a schematic structural diagram of a chip according to an embodiment of the present application. Chip 1100 includes one or more (including two) processors 1101, communication lines 1102, a communication interface 1103, and a memory 1104.
In some implementations, the memory 1104 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set thereof.
The method described in the above embodiments of the present application may be applied to the processor 1101 or implemented by the processor 1101. The processor 1101 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuitry in hardware in the processor 1101 or by instructions in software. The processor 1101 may be a general purpose processor (e.g., a microprocessor or a conventional processor), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gates, transistor logic, or discrete hardware components, and the processor 1101 may implement or perform the methods, steps, and logic diagrams related to the processes disclosed in the embodiments of the present application.
The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly in the execution of a hardware decoding processor, or in the execution of a combination of hardware and software modules in a decoding processor. The software modules may be located in any well-known storage medium such as a random access memory (RAM), a read-only memory (ROM), or an electrically erasable programmable read-only memory (EEPROM). The storage medium is located in the memory 1104, and the processor 1101 reads information in the memory 1104 and performs the steps of the above method in combination with its hardware.
The processor 1101, the memory 1104, and the communication interface 1103 may communicate with each other via a communication line 1102.
In the above embodiments, the instructions stored by the memory for execution by the processor may be implemented in the form of a computer program product. The computer program product may be written in the memory in advance, or may be downloaded in the form of software and installed in the memory.
In an embodiment of the present application, the chip 1100 may also be a chip system, for example, a system on chip (SoC); the present application is not limited in this regard.
The embodiment of the application also provides a chip system which is applied to the electronic equipment, wherein the chip system comprises one or more processors, and the one or more processors are used for calling the computer instructions to enable the electronic equipment to execute the method of the embodiment of the application.
An embodiment of the present application provides an electronic device, including: one or more processors and a memory; the memory is coupled to the one or more processors and is used to store computer program code, which includes computer instructions that the one or more processors invoke to cause the electronic device to perform the methods of the embodiments of the present application.
Embodiments of the present application also provide a computer-readable storage medium including computer instructions that, when executed on an electronic device, cause the electronic device to perform the above-described method. The methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer readable media can include computer storage media and communication media and can include any medium that can transfer a computer program from one place to another. The storage media may be any target media that is accessible by a computer.
In one possible implementation, the computer readable medium may include RAM, ROM, a compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can carry or store the desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Embodiments of the present application provide a computer program product comprising computer program code which, when run on an electronic device, causes the electronic device to perform the above-described method.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing detailed description of the invention has been presented for purposes of illustration and description, and it should be understood that the foregoing is by way of illustration and description only, and is not intended to limit the scope of the invention.

Claims (12)

1. A mode switching method, applied to an electronic device, the method comprising:
under a first ambient illuminance, the electronic device acquires at least two frames of first images in a first mode;
the electronic device switches from the first mode to a second mode when a difference between the at least two frames of first images is greater than a first threshold, wherein the first mode is used for detecting differences between images, and the second mode is used for recognizing objects in images;
the electronic device acquires a second image in the second mode;
the electronic device switches from the second mode to the first mode when the electronic device completes recognition of the object in the second image;
under a second ambient illuminance, the electronic device acquires at least two frames of third images in the first mode;
the electronic device switches from the first mode to the second mode when a difference between the at least two frames of third images is greater than a second threshold, wherein the second ambient illuminance is different from the first ambient illuminance, and the second threshold is different from the first threshold;
the method further comprising:
when a first trigger frequency is not equal to a preset trigger frequency, determining a third threshold corresponding to the first trigger frequency based on a second mapping table, and updating the first threshold to the third threshold, wherein the second mapping table is used for indicating a mapping relationship between trigger frequency and difference threshold;
wherein the first trigger frequency is positively correlated with a first number of times the electronic device switches from the second mode to the first mode under the first ambient illuminance, and/or positively correlated with a first duration for which the electronic device is in the first mode under the first ambient illuminance.
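The switching logic of claim 1 can be sketched as follows. This is a minimal illustration only: the mapping-table bands, threshold values, and the mean-absolute-difference metric are hypothetical placeholders, since the claim does not fix concrete values or a particular difference measure.

```python
from enum import Enum

class Mode(Enum):
    DIFFERENCE_DETECTION = 1   # "first mode": cheap inter-frame difference check
    OBJECT_RECOGNITION = 2     # "second mode": full object recognition

# Hypothetical first mapping table (claim 2): (low lux, high lux, difference threshold).
ILLUMINANCE_TO_THRESHOLD = [(0, 50, 12.0), (50, 500, 8.0), (500, float("inf"), 5.0)]

def threshold_for_illuminance(lux):
    """Look up the difference threshold for a given ambient illuminance."""
    for lo, hi, thr in ILLUMINANCE_TO_THRESHOLD:
        if lo <= lux < hi:
            return thr
    return ILLUMINANCE_TO_THRESHOLD[-1][2]

def frame_difference(img_a, img_b):
    """Mean absolute per-pixel difference between two equally sized frames."""
    return sum(abs(a - b) for a, b in zip(img_a, img_b)) / len(img_a)

def step(mode, frames, lux, recognized):
    """One decision step of the switching logic in claim 1."""
    if mode is Mode.DIFFERENCE_DETECTION:
        # Wake up into the recognition mode only on a large enough inter-frame change.
        if frame_difference(*frames) > threshold_for_illuminance(lux):
            return Mode.OBJECT_RECOGNITION
        return Mode.DIFFERENCE_DETECTION
    # Second mode: fall back to difference detection once recognition completes.
    return Mode.DIFFERENCE_DETECTION if recognized else Mode.OBJECT_RECOGNITION
```

The point of the two-mode design is that the low-power difference check runs continuously, while the expensive recognition path runs only between a detected change and a completed recognition.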
2. The method of claim 1, wherein before the electronic device acquires the at least two frames of first images, the method further comprises:
determining the first threshold corresponding to the first ambient illuminance based on a first mapping table, wherein the first mapping table is used for indicating a mapping relationship between ambient illuminance and difference threshold.
3. The method of claim 2, wherein before the electronic device acquires the at least two frames of third images, the method further comprises:
when the second ambient illuminance falls outside a first range, determining the second threshold corresponding to the second ambient illuminance based on the first mapping table, wherein the first ambient illuminance falls within the first range.
4. The method of claim 1, wherein the third threshold is greater than the first threshold when the first trigger frequency is greater than the preset trigger frequency; and
the third threshold is less than the first threshold when the first trigger frequency is less than the preset trigger frequency.
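The threshold update of claims 1 and 4 can be sketched with a hypothetical second mapping table. The preset trigger frequency, the band boundaries, and the threshold values below are all assumed placeholders; only the monotonic relationship (more frequent triggering raises the threshold, less frequent lowers it) comes from the claims.

```python
# Hypothetical second mapping table (claims 1 and 4): trigger-frequency band -> threshold.
# Bands are chosen so that frequencies above the preset map to a larger threshold
# and frequencies below it to a smaller one, matching the monotonicity in claim 4.
FREQUENCY_TO_THRESHOLD = [(0.0, 1.0, 6.0), (1.0, float("inf"), 11.0)]
PRESET_TRIGGER_FREQUENCY = 1.0
PRESET_THRESHOLD = 8.0

def updated_threshold(first_trigger_frequency):
    """Return the difference threshold after the update step in claim 1.

    At exactly the preset trigger frequency the threshold is left unchanged;
    otherwise the third threshold is looked up in the second mapping table.
    """
    if first_trigger_frequency == PRESET_TRIGGER_FREQUENCY:
        return PRESET_THRESHOLD
    for lo, hi, thr in FREQUENCY_TO_THRESHOLD:
        if lo <= first_trigger_frequency < hi:
            return thr
    return FREQUENCY_TO_THRESHOLD[-1][2]
```

Intuitively, if the device keeps dropping back from recognition mode without useful detections (a high trigger frequency), a higher threshold suppresses spurious wake-ups; if it rarely triggers, a lower threshold makes it more sensitive.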
5. The method of claim 1, wherein the first trigger frequency is used to reflect how frequently the electronic device switches from the second mode to the first mode, and the first trigger frequency is related to the first number of times and the first duration as follows: the first trigger frequency is the product of the first duration and the first number of times.
6. The method of claim 1, wherein the electronic device comprises an image sensor and a processor, and the electronic device acquiring at least two frames of first images in the first mode comprises:
the image sensor acquires the at least two frames of first images in the first mode;
the electronic device switching from the first mode to the second mode when the difference between the at least two frames of first images is greater than the first threshold comprises:
the image sensor switches from the first mode to the second mode upon detecting that the difference between the at least two frames of first images is greater than the first threshold;
the electronic device acquiring the second image in the second mode comprises:
the image sensor acquires the second image in the second mode and transmits the second image to the processor;
the electronic device switching from the second mode to the first mode when the electronic device completes recognition of the object in the second image comprises:
the electronic device switches from the second mode to the first mode when the processor completes recognition of the object in the second image.
7. The method of claim 6, wherein the electronic device further comprises an ambient light sensor, and the method further comprises:
the ambient light sensor collects ambient light information and determines the first ambient illuminance;
the image sensor obtains the first ambient illuminance from the ambient light sensor.
8. The method of claim 6, wherein the method further comprises:
the image sensor acquires a fourth image and obtains a gray value of the fourth image;
the image sensor determines the first ambient illuminance from a third mapping table based on the gray value of the fourth image, wherein the third mapping table is used for indicating a mapping relationship between gray value and ambient illuminance.
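The sensor-free illuminance estimate of claim 8 can be sketched as a gray-value lookup. The gray-value bands and lux values in this third mapping table are hypothetical; the claim only specifies that a mapping from image gray value to ambient illuminance exists.

```python
# Hypothetical third mapping table (claim 8): (low gray, high gray, ambient lux).
# Darker frames map to lower illuminance; an 8-bit gray range [0, 256) is assumed.
GRAY_TO_ILLUMINANCE = [(0, 40, 10.0), (40, 120, 150.0), (120, 256, 900.0)]

def illuminance_from_image(pixels):
    """Estimate ambient illuminance from a frame's mean gray value, as in claim 8,
    for use when no dedicated ambient light sensor is available (contrast claim 7)."""
    mean_gray = sum(pixels) / len(pixels)
    for lo, hi, lux in GRAY_TO_ILLUMINANCE:
        if lo <= mean_gray < hi:
            return lux
    return GRAY_TO_ILLUMINANCE[-1][2]
```

This estimate can then feed the illuminance-to-threshold lookup of claim 2, so the whole pipeline runs on the image sensor alone.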
9. An electronic device, comprising: one or more processors and a memory;
wherein the memory is coupled with the one or more processors and is configured to store computer program code, the computer program code comprising computer instructions that the one or more processors invoke to cause the electronic device to perform the method of any one of claims 1 to 8.
10. A chip system applied to an electronic device, the chip system comprising one or more processors configured to invoke computer instructions to cause the electronic device to perform the method of any one of claims 1 to 8.
11. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1 to 8.
12. A computer program product, characterized in that the computer program product comprises computer program code which, when run on an electronic device, causes the electronic device to perform the method of any one of claims 1 to 8.
CN202410096056.5A 2024-01-24 2024-01-24 Mode switching method and related device Active CN117615440B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410096056.5A CN117615440B (en) 2024-01-24 2024-01-24 Mode switching method and related device


Publications (2)

Publication Number Publication Date
CN117615440A (en) 2024-02-27
CN117615440B (en) 2024-05-24

Family

ID=89952086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410096056.5A Active CN117615440B (en) 2024-01-24 2024-01-24 Mode switching method and related device

Country Status (1)

Country Link
CN (1) CN117615440B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112052726A (en) * 2020-07-28 2020-12-08 北京极豪科技有限公司 Image processing method and device
CN114758268A (en) * 2022-03-17 2022-07-15 深圳市优必选科技股份有限公司 Gesture recognition method and device and intelligent equipment
CN115525140A (en) * 2021-06-25 2022-12-27 北京小米移动软件有限公司 Gesture recognition method, gesture recognition apparatus, and storage medium
CN116301363A (en) * 2023-02-27 2023-06-23 荣耀终端有限公司 Space gesture recognition method, electronic equipment and storage medium
CN116363722A (en) * 2021-12-28 2023-06-30 Oppo广东移动通信有限公司 Target recognition method, device and storage medium
CN117130469A (en) * 2023-02-27 2023-11-28 荣耀终端有限公司 Space gesture recognition method, electronic equipment and chip system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112532892B (en) * 2019-09-19 2022-04-12 华为技术有限公司 Image processing method and electronic device
US20220351402A1 (en) * 2021-04-29 2022-11-03 Microsoft Technology Licensing, Llc Ambient illuminance sensor system


Also Published As

Publication number Publication date
CN117615440A (en) 2024-02-27

Similar Documents

Publication Publication Date Title
WO2018121428A1 (en) Living body detection method, apparatus, and storage medium
CN111738122B (en) Image processing method and related device
CN108399349B (en) Image recognition method and device
US11910197B2 (en) Service processing method and device
WO2021219095A1 (en) Living body detection method, and related device
CN111209904A (en) Service processing method and related device
CN111371938A (en) Fault detection method and electronic equipment
CN113887264B (en) Code scanning method, system and related device
CN114553814B (en) Method and device for processing push message
CN113971271A (en) Fingerprint unlocking method and device, terminal and storage medium
WO2024055764A1 (en) Image processing method and apparatus
CN117615440B (en) Mode switching method and related device
CN113742460A (en) Method and device for generating virtual role
CN116048831B (en) Target signal processing method and electronic equipment
CN114125148A (en) Control method of electronic equipment operation mode, electronic equipment and readable storage medium
CN115623318B (en) Focusing method and related device
CN115437601B (en) Image ordering method, electronic device, program product and medium
CN116196621B (en) Application processing method and related device
CN114115772B (en) Method and device for off-screen display
CN116709023B (en) Video processing method and device
CN117273687B (en) Card punching recommendation method and electronic equipment
CN115792431B (en) Abnormal position detection method and electronic equipment
CN115175164B (en) Communication control method and related device
CN116027933B (en) Method and device for processing service information
CN117711014A (en) Method and device for identifying space-apart gestures, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant