WO2022041220A1 - Electronic device and response operation method of an electronic device - Google Patents

Electronic device and response operation method of an electronic device

Info

Publication number
WO2022041220A1
WO2022041220A1 · PCT/CN2020/112588
Authority
WO
WIPO (PCT)
Prior art keywords
image
processor
electronic device
signal
controller
Prior art date
Application number
PCT/CN2020/112588
Other languages
English (en)
French (fr)
Inventor
洪羽萌
葛振华
陈松林
姚家雄
杨琪
刘翠君
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to EP20950889.4A (published as EP4195628A4)
Priority to PCT/CN2020/112588 (this application)
Priority to CN202080011167.1A (published as CN114531947A)
Publication of WO2022041220A1
Priority to US18/176,261 (published as US20230205296A1)

Classifications

    • G06F1/1686 Constructional details or arrangements related to integrated I/O peripherals, the I/O peripheral being an integrated camera
    • G06F1/3203 Power management, i.e. event-based initiation of a power-saving mode
    • G01J1/4204 Photometry using electric radiation detectors, with determination of ambient light
    • G06F1/3215 Monitoring of peripheral devices
    • G06F1/3228 Monitoring task completion, e.g. by use of idle timers, stop commands or wait commands
    • G06F1/3243 Power saving in microcontroller unit
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/0304 Detection arrangements using opto-electronic means
    • G06T1/0007 Image acquisition
    • H04N23/651 Control of camera operation in relation to power supply, for reducing power consumption by affecting camera operations, e.g. sleep mode, hibernation mode or power off of selective parts of the camera
    • Y02D30/70 Reducing energy consumption in wireless communication networks

Definitions

  • the embodiments of the present application relate to the field of electronic science and technology, and in particular, to an electronic device and a response operation method of the electronic device.
  • AI (Artificial Intelligence) technology has penetrated various fields, such as biometric identification, intelligent robotics, medicine, and autonomous driving, driving rapid advances in each of them.
  • AI technologies usually include biometrics, speech recognition, speech synthesis, and image processing.
  • in the terminal field, relying on AI technology, it is possible to provide users with services such as smart payment or smart screen lighting.
  • after the screen is lit, the terminal can further respond to the user; the screen-lighting service is therefore a service that facilitates subsequent response operations or interactions so that the user can continue to use the terminal.
  • the smart screen-lighting service applied to the terminal usually includes touch lighting and contactless lighting. Contactless screen lighting means that the user can light up the screen of the electronic device without touching the device.
  • in order to light up the screen in a timely and accurate manner when the user triggers it, the electronic device usually needs to run sensors and detection models in real time to detect user actions.
  • once a preset condition is satisfied, in addition to lighting the screen, the terminal can also trigger one or more other response operations for the user, such as a voice response.
  • however, running sensors and detection models in real time severely increases the power consumption of electronic devices and reduces their battery life. Therefore, how to reduce the power consumption of the electronic device while ensuring that the terminal can respond quickly has become a problem that needs to be solved.
  • the electronic device and the response operation method of the electronic device provided by the present application can reduce the power consumption of the electronic device while effectively performing response operations.
  • the present application adopts the following technical solutions.
  • an embodiment of the present application provides an electronic device. The electronic device includes a controller, an image signal processor, a central processing unit, and an artificial intelligence (AI) processor. The controller is configured to acquire a first signal indicating a change in light intensity and to trigger a camera to collect a first image according to the first signal; the image signal processor is configured to receive the first image from the camera, process the first image to generate a second image, and provide the second image to the AI processor; the AI processor detects the second image to obtain an image detection result; and the central processing unit is configured to perform a response operation according to the image detection result.
  • the controller triggers the camera to collect images based on the light intensity change or the change amount of the ambient light intensity, and further triggers the electronic device to perform a response operation according to the image detection result. The camera thus works only when triggered by a change of light intensity, saving the power of the electronic device. Therefore, the battery life of the electronic device can be improved while the electronic device remains able to perform response operations.
  • the electronic device further includes: the ambient light sensor, configured to generate the first signal according to the light intensity change.
  • the electronic device further includes: the camera device, configured to acquire the first image according to a trigger of the controller.
  • the ambient light sensor and the camera device are integrated together to form the same module.
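The trigger chain formed by these components (ambient light sensor → controller → camera → image signal processor → AI processor → CPU) can be sketched as follows; all class and method names here are illustrative assumptions, not taken from the application:

```python
# Minimal sketch of the trigger pipeline described above.
# All names are hypothetical; the detection step is stubbed out.

class AmbientLightSensor:
    """Senses ambient light; emits a 'first signal' when the intensity changes."""
    def __init__(self):
        self.last_intensity = None

    def sense(self, intensity):
        changed = self.last_intensity is not None and intensity != self.last_intensity
        self.last_intensity = intensity
        return changed  # True = first signal: light intensity changed

class Camera:
    def capture(self):
        return "raw_first_image"            # the "first image"

class ImageSignalProcessor:
    def process(self, first_image):
        return f"processed({first_image})"  # the "second image"

class AIProcessor:
    def detect(self, second_image):
        # e.g. face or gesture detection; stubbed here
        return {"target_detected": True}

class CPU:
    def respond(self, detection):
        return "screen_on" if detection["target_detected"] else "idle"

class Controller:
    """Triggers the camera on the first signal and routes data downstream."""
    def __init__(self, camera, isp, ai, cpu):
        self.camera, self.isp, self.ai, self.cpu = camera, isp, ai, cpu

    def on_first_signal(self):
        first = self.camera.capture()
        second = self.isp.process(first)
        result = self.ai.detect(second)
        return self.cpu.respond(result)

als = AmbientLightSensor()
ctrl = Controller(Camera(), ImageSignalProcessor(), AIProcessor(), CPU())
als.sense(100)                 # baseline reading, no change yet
action = "idle"
if als.sense(180):             # intensity changed -> first signal
    action = ctrl.on_first_signal()
print(action)                  # -> screen_on
```

Note that the camera, ISP, and AI processor are touched only inside `on_first_signal`, mirroring the claim's point that these components need not run continuously.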
  • the controller is further configured to trigger the AI processor to detect the second image according to the first signal; the AI processor is configured to detect the second image according to the trigger of the controller to obtain the image detection result.
  • the AI processor is further configured to send a second signal for completing the detection of the second image to the controller; the controller is further configured to In response to the second signal, the AI processor is controlled to enter a low power consumption state.
  • since an AI processor performs a large amount of computational work, it usually has high power consumption. In this implementation, the AI processor works only when triggered and is in a low power consumption state the rest of the time, which can reduce the power consumption of the AI processor.
  • the image signal processor is further configured to send a third signal for completing the processing of the first image to the controller; the controller is further configured to, in response to the third signal, control the image signal processor to enter a low power consumption state.
  • the image signal processor works only when triggered and is in a low power consumption state the rest of the time, so the power consumption of the image signal processor can be reduced.
  • the controller is further configured to control the camera device to enter a low power consumption state in response to the third signal.
  • the low power consumption state includes at least one of the following: a standby state, a power-off state, or a sleep state.
  • controlling at least one of the AI processor, the image signal processor, or the camera device to enter a low power consumption state includes at least one of the following: turning off the supply voltage of the corresponding component, turning off its clock, reducing its clock frequency, or reducing its supply voltage.
  • the response operation includes: controlling the screen corresponding to the electronic device to light up.
  • the image detection performed by the artificial intelligence processor includes one of the following: facial image detection or gesture image detection.
  • an embodiment of the present application provides a response operation method for an electronic device.
  • the response operation method includes: using an ambient light sensor to obtain a first signal indicating a change in light intensity; triggering a camera to collect a first image according to the first signal; processing the first image to generate a second image; detecting the second image to obtain an image detection result; and performing a response operation according to the image detection result.
  • the detecting of the second image to obtain the image detection result includes: triggering an AI processor, according to the first signal, to detect the second image to obtain the image detection result.
  • the method further includes: controlling the AI processor to enter a low power consumption state.
  • the method further includes: controlling the image signal processor processing the first image to enter a low power consumption state.
  • the response operation includes: controlling the screen corresponding to the electronic device to light up.
  • the image detection performed by the artificial intelligence processor includes one of the following: facial image detection or gesture image detection.
  • an embodiment of the present application provides a chipset, including a controller, an image signal processor, a central processing unit, and an artificial intelligence AI processor.
  • the chipset includes one or more chips.
  • an embodiment of the present application provides an apparatus. The apparatus includes: a first signal acquisition module, configured to obtain, by using an ambient light sensor, a first signal indicating a change in light intensity; a control module, configured to trigger a camera to collect a first image according to the first signal; an image signal processing module, configured to process the first image to generate a second image; an AI processing module, configured to detect the second image to obtain an image detection result; and a response operation module, configured to perform a response operation according to the image detection result.
  • control module is further configured to: trigger the AI processor to detect the second image according to the first signal to obtain the image detection result.
  • control module is further configured to: after the AI processor completes the detection of the second image, control the AI processor to enter a low power consumption state.
  • control module is further configured to: after the first image is processed to generate the second image, control the image signal processor that processed the first image to enter a low power consumption state.
  • the response operation includes: controlling the screen corresponding to the device to light up.
  • an embodiment of the present application provides an electronic device. The electronic device includes a memory and at least one processor, where the memory is used to store a computer program, and the at least one processor is configured to invoke all or part of the computer program stored in the memory to execute the method of the second aspect.
  • an embodiment of the present application provides a system-on-chip, where the system-on-chip includes at least one processor and an interface circuit, and the interface circuit is used to acquire a computer program from outside the system-on-chip; when the computer program is executed, the at least one processor implements the method described in the second aspect.
  • an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium; when the computer program is executed by at least one processor, it implements the method described in the second aspect.
  • an embodiment of the present application provides a computer program product, which is used to implement the method described in the second aspect above when the computer program product is executed by at least one processor.
  • FIG. 1a-1c are schematic diagrams of application scenarios applied to the embodiments of the present application.
  • 2a is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 2b is a schematic diagram of components integrated in a system-on-chip provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a bitmap for waking up each component or instructing each component to enter a low power consumption state provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of an interaction flow between components provided in an embodiment of the present application.
  • FIG. 5 is a flowchart of a gesture detection method applied in an AI processor provided by an embodiment of the present application
  • FIG. 6 is a flowchart of a face detection method applied in an AI processor provided by an embodiment of the present application
  • FIG. 7 is a schematic diagram of a software structure of an electronic device provided by an embodiment of the present application.
  • references herein to "first", "second", and similar terms do not denote any order, quantity, or importance, but are merely used to distinguish the various components. Likewise, words such as "a" or "an" do not denote a quantitative limitation, but rather the presence of at least one. Words like "coupled" are not limited to direct physical or mechanical connections, but may include electrical connections, whether direct or indirect, equivalent to communication in a broad sense.
  • words such as “exemplary” or “for example” are used to represent examples, illustrations or illustrations. Any embodiments or designs described in the embodiments of the present application as “exemplary” or “such as” should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as “exemplary” or “such as” is intended to present the related concepts in a specific manner.
  • the meaning of "plurality" is two or more. For example, a plurality of processors refers to two or more processors.
  • the electronic device may be an electronic device or a module, chip, chip set, circuit board or component integrated in the electronic device.
  • the electronic device may be a user equipment (User Equipment, UE), such as various types of portable devices such as a mobile phone, a tablet computer, or a wearable device (such as a smart watch).
  • the electronic equipment may be equipped with a screen and a camera. When the screen is off, the user can wake up and light the screen by making a preset gesture in front of the camera, extending a palm in front of the camera, or facing the camera; that is, the image captured by the camera is analyzed or processed by the electronic device to trigger lighting of the screen, as shown in FIGS. 1a-1c.
  • FIGS. 1a-1c exemplarily show situations in which, when the screen is off, the user wakes up and lights the screen by making a preset gesture in front of the camera (FIG. 1a), extending a palm toward the camera (FIG. 1b), or bringing the face close to the camera device (FIG. 1c).
  • the lighting up of the screen described in the embodiments of the present application may, in some scenarios, refer to only lighting the screen and displaying the screen-saver interface without unlocking the screen to enter the main interface; in other scenarios, it may refer to lighting the screen and unlocking it to enter the main interface. Which scenario applies can be determined according to the user's selection and the needs of the actual scenario.
  • the previous introduction took the lighting of the screen as an example to introduce the solution, but in practical applications, the image captured by the camera can be analyzed or processed to trigger other response operations of the electronic device, such as voice response.
  • the electronic device may play preset audio, such as music, when a preset gesture or face is detected.
  • the response operation is an operation made in response to the image detection or analysis result, and there can be many different implementations. This embodiment only takes screen lighting as an example; possible implementations are not limited to this.
  • FIG. 2 a shows a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • the electronic device 100 may specifically be a chip or a chip set, a circuit board mounted with the chip or a chip set, or an electronic device including the circuit board.
  • the specific electronic device is as described above, and is omitted here.
  • the chip or chip set or the circuit board on which the chip or chip set is mounted can be driven by necessary software. Subsequent embodiments are described by taking the electronic device 100 being the electronic device itself as an example, but it is not intended to limit the solution.
  • the electronic device 100 applied to the scenarios of FIGS. 1a-1c includes one or more processors, such as a controller 101, an AI processor 102, a central processing unit (CPU) 107, and an image signal processor 104.
  • the one or more processors may be integrated in one or more chips, which may be regarded as a chipset.
  • the multiple processors in FIG. 2a, namely the controller 101, the AI processor 102, the central processing unit 107, and the image signal processor 104, are all integrated in the same chip. When the one or more processors are integrated in the same chip, the chip is also called a system-on-chip (SoC).
  • the components integrated in such a system-on-chip are shown in FIG. 2b.
  • the electronic device 100 also includes one or more other necessary components, such as a storage device 106 , a camera device 105 , and an ambient light sensor (ALS) 103 .
  • the controller 101 may be a dedicated processor in data communication with a central processor 107 in the electronic device.
  • the controller 101 may be an intelligent sensor hub (Sensor Hub) for collecting sensor data and controlling the sensor.
  • Necessary software programs or software plug-ins such as operating system software and application software may run in the central processing unit 107 .
  • the central processing unit 107 is configured to execute a screen lighting application of the user interface (UI) class to perform a screen lighting operation, and the screen lighting application can provide screen lighting services for various application scenarios as shown in FIG. 1a-FIG. 1c .
  • the above screen lighting application may run when the electronic device is powered on, or may be run based on user settings (eg, the user instructs the screen lighting application to run by starting the screen lighting application).
  • scenarios when the electronic device is powered on may include, but are not limited to: after power-on, the electronic device may be in a high power consumption state or a screen-on state; optionally, it may also enter a standby state or a low power state after being powered on.
  • the user can also interact with the electronic device to customize various personalized services provided by the screen-lighting application, such as selecting the hand or the face to light the screen, only displaying the screen-saver interface after lighting the screen, or directly entering the main interface of the electronic device, etc.
  • the controller 101 can be in a continuous power-on (working) state to periodically detect whether there is a signal indicating a change in light intensity. When such a signal is detected, the controller triggers the camera device 105 shown in FIG. 2a to acquire a first image; the camera device 105 provides the acquired first image to the image signal processor 104; the image signal processor 104 processes the first image to generate a second image and provides it to the AI processor 102; the AI processor 102 detects the second image to obtain a detection result, and then provides the detection result to the central processing unit 107.
  • the central processing unit 107 may perform the subsequent process of lighting the screen based on the image detection result of the AI processor 102 .
  • the signal indicating that the screen is turned on may be triggered by the ambient light sensor 103 based on a change in light intensity.
  • the ambient light sensor 103 can be used to sense the light intensity of the ambient light, which usually has a high sensitivity, and even a slight change in the light intensity can be sensed by the ambient light sensor 103 .
  • when the ambient light sensor 103 detects a change in light intensity, it provides a signal indicating the change to the controller 101, so that the controller 101 triggers other components to execute the screen-lighting process. Specifically, the ambient light sensor 103 determines the light intensity value at a preset period, and when it detects a change between two adjacently determined values, it writes information indicating the change (for example, "1") to a first register in the SoC. In addition, the ambient light sensor 103 can also determine the amount of change in the light intensity value and write that amount to a second register in the SoC. That is, different registers can record whether the light intensity has changed and by how much.
  • the controller 101 can read information from the first register or the second register at a preset period. When the information read from the first register or the second register indicates a change in light intensity, the controller processes that signal to generate a signal that controls the camera device 105 to capture an image, and sends the signal to the camera device 105.
  • the controller 101 generates a signal for controlling the camera 105 to capture an image, which may include two implementations. In a first implementation manner, the controller 101 may periodically read information from the first register. When the read information is used to indicate a change in light intensity (for example, the information is "1"), a signal for controlling the camera 105 to capture an image is generated.
  • the controller 101 may periodically read information from the second register. Then, the controller 101 compares the read information with a preset threshold, and when it is determined that the information read from the second register is greater than or equal to the preset threshold, generates a signal for controlling the camera 105 to capture images. Which implementation mode is adopted can be written into the controller 101 in advance through a software program.
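The two polling implementations described above can be sketched as follows; the register layout and the preset threshold value are illustrative assumptions:

```python
# Sketch of the sensor-writes / controller-polls scheme described above.
# Register indices and the threshold are hypothetical values.

FIRST_REG = 0    # holds 1 when the light intensity changed between two samples
SECOND_REG = 1   # holds the amount of change in light intensity

registers = {FIRST_REG: 0, SECOND_REG: 0}
PRESET_THRESHOLD = 50    # illustrative threshold for implementation 2

def sensor_sample(prev, curr):
    """Ambient light sensor writes both registers each sampling period."""
    registers[FIRST_REG] = 1 if curr != prev else 0
    registers[SECOND_REG] = abs(curr - prev)

def controller_poll_first():
    """Implementation 1: trigger the camera whenever the first register reads 1."""
    return registers[FIRST_REG] == 1

def controller_poll_second():
    """Implementation 2: trigger only when the change amount reaches the threshold."""
    return registers[SECOND_REG] >= PRESET_THRESHOLD

sensor_sample(prev=100, curr=120)      # a small change of 20 units
print(controller_poll_first())         # -> True: any change triggers capture
print(controller_poll_second())        # -> False: 20 < 50, below the threshold
```

The second implementation trades sensitivity for fewer spurious wake-ups: minor fluctuations in ambient light no longer power up the camera pipeline.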
  • in this way, the controller triggers the camera device to switch from the low power consumption state to the working state based on the light intensity change or the change amount of the ambient light intensity, so that the electronic device performs subsequent image detection and, further, performs a response operation according to the image detection result. The camera device thus works only when triggered by a change of light intensity, saving the power of the electronic device and helping improve its battery life while it remains able to respond to operations.
  • triggering a component in this embodiment of the present application may mean controlling or instructing the component to start working, such as switching from a low power consumption state to a working state, including but not limited to operations such as turning on the component's supply voltage, turning on its clock, increasing the clock frequency, or increasing the supply voltage.
  • conversely, switching a component from the working state to a low power consumption state may include, but is not limited to, operations such as turning off its supply voltage, turning off its clock, reducing the clock frequency, or reducing the supply voltage.
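The wake and low-power transitions listed above can be illustrated as follows; the concrete frequency and voltage values are illustrative assumptions:

```python
# Sketch of the wake / low-power operations described above.
# The numeric frequencies and voltages are hypothetical.

class Component:
    def __init__(self, name):
        self.name = name
        self.power_on = False
        self.clock_on = False
        self.clock_mhz = 0
        self.supply_mv = 0

    def wake(self):
        """Working state: supply and clock on, full frequency and voltage."""
        self.power_on = True      # turn on the supply voltage
        self.clock_on = True      # turn on the clock
        self.clock_mhz = 800      # increase the clock frequency
        self.supply_mv = 900      # increase the supply voltage

    def enter_low_power(self, mode="sleep"):
        """Low power state via one or more of the listed operations."""
        if mode == "power_off":
            self.power_on = False     # turn off the supply voltage
            self.clock_on = False     # turn off the clock
        elif mode == "sleep":
            self.clock_mhz = 50       # reduce the clock frequency
            self.supply_mv = 600      # reduce the supply voltage

ai = Component("AI processor")
ai.wake()
ai.enter_low_power("sleep")
print(ai.clock_mhz, ai.supply_mv)   # -> 50 600 (clocked down, still powered)
```

In the sleep branch the component stays powered but clocked down, matching the distinction the text draws between the standby/sleep states and a full power-off.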
  • the controller 101 may also control the camera 105 to enter a low power consumption state when the camera 105 completes its work. Thus, the controller 101 triggers the camera device 105 to collect images based on the light intensity change or change amount information, so that the camera device 105 can be in a low power consumption state when not working, saving the power of the electronic equipment and helping improve its battery life. In the low power consumption state, the camera 105 does not capture images; correspondingly, in the working state, the camera 105 works normally, that is, captures images.
  • the low power consumption states described in the embodiments of the present application may include, but are not limited to, a standby state, a power-off state, or a sleep state.
  • the transition from the low power consumption state to the working state may be understood as triggering or waking up as mentioned in the previous embodiment. Transitioning from the working state to the low-power state can be understood as stopping normal work.
  • by providing the ambient light sensor 103 and using it to sense the change of light intensity that triggers the controller 101 to execute the subsequent screen lighting process, the user can trigger the screen lighting process without touching the electronic device, improving the user experience.
  • the controller 101 can also communicate with the image signal processor 104 and the AI processor 102 respectively to control the image signal processor 104 to enter a low power consumption state, wake up the AI processor 102 or control the AI processor 102 to enter a low power consumption state, and the like.
  • the wake-up described herein may include, but is not limited to: wake-up from a power-off state to a work state after power-on, or wake-up from a sleep state to a work state, or from a standby state to a work state.
  • the controller 101 may maintain a bitmap, which may be stored in a register, and each two bits in the bitmap represent a component.
  • FIG. 3 schematically shows a schematic diagram of a bitmap.
  • the first and second digits from the left represent the AI processor 102 .
  • when the bits representing the AI processor 102 are "00", the AI processor 102 is instructed to power off; when the bits representing the AI processor 102 are "11", the AI processor 102 is instructed to power on and enter the working state.
  • the third and fourth digits from the left represent the image signal processor 104 .
  • when the bits used to represent the image signal processor 104 are "00", the image signal processor 104 is instructed to power down.
  • the controller 101 may also communicate with the camera 105 to trigger the camera 105 to enter the working state or the sleep state. Specifically, in FIG. 3, the fifth and sixth digits from the left may represent the camera 105. When the bits used to represent the camera 105 are "10", the camera 105 is instructed to enter a sleep state.
  • in the embodiment of the present application, since each component that can be controlled by the controller 101 has more than two power states (the embodiment schematically shows three: working state, power-off state and sleep state), two bits are required to indicate the power state of one component.
  • if the power states of the components controlled by the controller 101 include only two types (for example, working state and power-off state), one bit in the bitmap may be used to indicate the power state of one component.
  • if each component that the controller 101 can control has more than four power states, three bits in the bitmap may be used to indicate the power state of one component. This embodiment of the present application does not limit this.
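  The bit layout described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the slot order and state encodings follow the examples in the text ("00" = powered off, "11" = working, "10" = sleep, counted from the left of the register), while the function and constant names are hypothetical.

```python
# Power-state encodings, per the text: "00" off, "10" sleep, "11" working.
OFF, SLEEP, WORKING = 0b00, 0b10, 0b11

# Slot index counted from the left, as in FIG. 3: bits 1-2 = AI processor,
# bits 3-4 = image signal processor, bits 5-6 = camera (8-bit register assumed).
SLOTS = {"ai_processor": 0, "image_signal_processor": 1, "camera": 2}
REGISTER_BITS = 8

def set_state(bitmap: int, component: str, state: int) -> int:
    """Write a component's 2-bit power state into the bitmap."""
    shift = REGISTER_BITS - 2 * (SLOTS[component] + 1)
    return (bitmap & ~(0b11 << shift)) | (state << shift)

def get_state(bitmap: int, component: str) -> int:
    """Read a component's 2-bit power state from the bitmap."""
    shift = REGISTER_BITS - 2 * (SLOTS[component] + 1)
    return (bitmap >> shift) & 0b11

bitmap = 0
bitmap = set_state(bitmap, "ai_processor", WORKING)  # "11" in bits 1-2
bitmap = set_state(bitmap, "camera", SLEEP)          # "10" in bits 5-6
```

  With these values the register reads `11001000`: AI processor working, image signal processor powered off, camera asleep, matching the FIG. 3 examples.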
  • the controller 101 is only used to detect the signal indicating the change of light, to control other components to enter the working state based on that signal, and to control those components to enter the low power consumption state when they complete their work. It does not perform heavy computation, so its own power consumption is usually small; even if it stays in the working state for a long time, its impact on the overall power consumption of the electronic device is very small and can be approximately ignored.
  • the controller 101 powers on each component when it needs to work and powers it off, or puts it into a low power consumption mode, after its work is completed, thereby avoiding the excessive power consumption that continuous operation of each component would cause and helping improve the battery life of the electronic device.
  • the image signal processor 104 may be used to perform image processing on the image acquired by the camera 105 .
  • the image processing may specifically include, but is not limited to, white balance correction, gamma (Gamma) correction, color correction, lens correction, or black level compensation, and the like.
  • the image signal processor 104 may acquire an image from the camera 105 , and then process the acquired image, and provide the processed image to the AI processor 102 .
  • a signal indicating that the image processing is completed may be sent to the controller 101. After receiving the signal, the controller 101 can control the image signal processor 104 to enter a low power consumption state.
  • the image signal processor 104 wakes up to process the image provided by the camera 105, and is instructed by the controller 101 to enter a low power consumption state after the image processing is completed. It is in the working state only while working and in a low power consumption state the rest of the time, so its power consumption can be further reduced, thereby reducing the power consumption of the electronic device and improving its battery life.
  • the AI processor 102 may include a dedicated processor such as a Neural-network Processing Unit (NPU). After receiving the power-on instruction from the controller 101, the AI processor 102 detects the acquired image to determine whether the object represented by the acquired image is the target object. The AI processor 102 may send the detection result to the central processing unit 107. Optionally, before receiving the detection result, the central processing unit 107 may be in a low power consumption state, and the AI processor 102 sends a control signal to the central processing unit 107 before or at the same time as sending the detection result, to wake up the central processing unit 107, that is, to control the central processing unit 107 to enter the working state. The central processing unit 107 may determine whether to turn on the screen based on the detection result.
  • the AI processor 102 can also send a signal indicating the completion of the image detection to the controller 101 after the image detection is completed, so that the controller 101 controls the AI processor to enter a low power consumption state in response to the signal of the completion of the image detection.
  • the AI processor 102 may run object detection models.
  • the target detection model is obtained by pre-training a neural network (for example, a standard neural network or a convolutional neural network, etc.) with training samples. That is to say, when the AI processor 102 is running, only the inference process of the target detection model is performed.
  • the object detection model is described in detail below through several scenarios.
  • the above target detection model may be a gesture detection model.
  • the gesture detection model may specifically include a hand detection model and a gesture classification model.
  • the gesture may include, but is not limited to, a palm-stretching gesture as shown in FIG. 1a , a scissors gesture as shown in FIG. 1b , or a fist-clenching gesture, and the like.
  • taking a palm extension gesture as an example, a large number of positive sample images showing a hand and negative sample images not showing a hand can be used with a supervised training method to train a first neural network, obtaining a hand recognition model. The model is used to detect whether there is a hand object in an image and the coordinate area of the hand object in the image. Similarly, a large number of positive sample images showing palm extension gestures and negative sample images showing other gestures can be used with a supervised training method to train a second neural network, obtaining a gesture classification model.
  • the image acquired by the camera 105 is first transmitted to the hand detection model to detect whether a hand object appears in the image.
  • if a hand object is present, the hand detection model may output information indicating that the hand object is present in the image together with the position area of the hand object in the image. Based on this output, the AI processor can crop the hand image from the image and input it to the gesture classification model.
  • the gesture classification model is used to detect whether the gesture presented by the hand image is a palm extension gesture and, based on the detection result, outputs information indicating whether it is a palm extension gesture.
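  The two-stage flow above (hand detector proposes a region, the region is cropped, the classifier labels the gesture) can be sketched as follows. This is a minimal illustration, not the trained models themselves: `hand_detector` and `gesture_classifier` are hypothetical stand-ins for the two neural networks, and any callables with these signatures would fit.

```python
def detect_palm_gesture(image, hand_detector, gesture_classifier):
    """Return True if the image presents a palm-extension gesture."""
    box = hand_detector(image)            # None, or (x, y, w, h) of the hand
    if box is None:
        return False                      # no hand object in the image
    x, y, w, h = box
    # Crop the hand region before classification, as described in the text.
    hand_crop = [row[x:x + w] for row in image[y:y + h]]
    return gesture_classifier(hand_crop)  # True = palm-extension gesture

# Toy stand-ins so the sketch runs: a 4x4 "image" and trivial models.
image = [["palm"] * 4 for _ in range(4)]
result = detect_palm_gesture(
    image,
    hand_detector=lambda img: (0, 0, 2, 2),
    gesture_classifier=lambda crop: crop[0][0] == "palm",
)
```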
  • the user usually cannot customize gestures, and can only light up the screen through gestures that can be detected by the gesture detection model.
  • in this scenario, lighting the screen may mean waking the screen to display the screen saver interface, but not unlocking it to enter the main interface.
  • the target detection model may be a face detection model.
  • the face detection model can be used to detect whether a face object is presented in the image, the pose information of the presented face object, and how many face objects are presented, etc.
  • a large number of sample images showing faces and sample images not showing faces can be used, and a supervised training method can be used to train a neural network to obtain a face detection model.
  • the above-mentioned sample images representing faces may also include, but are not limited to: sample images representing one face, sample images representing multiple faces, and sample images representing various facial poses.
  • since the face detection model is trained by using the facial images of multiple users as positive sample images, it can usually only detect whether a facial image is present in the image; it cannot detect whether the object presented in the image is the device owner.
  • lighting the screen may be to wake up the screen to display the screen saver interface, but not unlock to enter the main interface.
  • multiple target detection models may be set in the AI processor 102, for example, one used to detect gestures and another used to detect faces, so that the user can choose which method is used to light up the screen.
  • at runtime, however, the AI processor 102 runs one target detection model at a time.
  • when the user uses a gesture to light the screen, the AI processor 102 runs the gesture detection model; when the user uses a face to light the screen, the AI processor 102 runs the face detection model.
  • the AI processor 102 may also perform feature extraction and comparison.
  • fingerprint or palm print comparison can be used to authenticate the owner.
  • the screen lighting application can drive the central processing unit to collect the owner's fingerprint information or palmprint information through the camera device or the infrared sensor device as a template for subsequent comparison.
  • the AI processor 102 can first detect whether a predetermined gesture is present in the image by using the gesture detection model.
  • the palmprint presented in the collected image can then be further compared with the pre-stored palmprint information, that is, the template, to determine whether the two match.
  • the face comparison method can be used to authenticate the owner.
  • the screen light-on application can drive the central processing unit to obtain the owner's face information collected by the camera or sensor device as a template for subsequent comparison.
  • the AI processor 102 can first use the face detection model to detect whether a face object is present in the image, how many face objects are present, and the pose information of the face object in the image.
  • the facial object presented in the collected image can be further compared with pre-stored facial information, that is, a template, to determine whether the two match.
  • the gesture for lighting up the screen may also be user-defined.
  • the screen lighting application can drive the central processing unit 107 to collect the gesture image of the user-defined gesture through the camera 105 as a subsequent comparison template.
  • the AI processor 102 may compare the object presented in the acquired image with the pre-stored gesture image, that is, the template, to determine whether the two match.
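  The template comparisons described above (palmprint, face, or custom gesture against pre-stored information) can be illustrated with a similarity check. This is an assumption for illustration only: the patent does not fix a comparison metric, and cosine similarity over feature vectors is just one common choice; the 0.9 threshold is a placeholder for the patent's "preset threshold".

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def matches_template(features, template, threshold=0.9):
    """True if the extracted features are close enough to the enrolled template."""
    return cosine_similarity(features, template) >= threshold

# Hypothetical 4-dimensional feature vectors standing in for real embeddings.
template = [0.2, 0.8, 0.55, 0.1]
same_user = matches_template([0.21, 0.79, 0.56, 0.12], template)
other_user = matches_template([0.9, 0.1, 0.05, 0.8], template)
```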
  • the controller 101 may generate a signal that triggers the camera 105 to capture an image and provide the signal to the camera 105 .
  • the camera 105 can acquire images by shooting. Then, the acquired image is supplied to the image signal processor 104 .
  • when the controller 101 receives a signal indicating completion of image processing from the image signal processor 104, the controller 101 can also control the camera 105 to enter a low power consumption state.
  • the ambient light sensor 103 can be disposed inside the camera device 105, or the two can be integrated to form one module; that is, the ambient light sensor 103 can optionally be disposed in the camera device 105. This embodiment is not limited thereto.
  • the storage device 106 may include random access memory (RAM).
  • the random access memory can include volatile memory (such as SRAM, DRAM, DDR SDRAM (Double Data Rate SDRAM) or SDRAM, etc.) and non-volatile memory.
  • the RAM may store the data (such as pre-saved user face information, user gesture images and user palmprint information) and parameters required for the operation of the AI processor 102, the intermediate data generated during the operation of the AI processor 102, and the output results after the AI processor 102 runs.
  • the image signal processor 104 may also store the processed image in RAM.
  • the processed image, the pre-saved user face information, the user gesture image, the user palm print information, and the like may be acquired from the RAM.
  • the CPU 107 can obtain the output result of the AI processor 102 from RAM.
  • storage device 106 may also include read only memory ROM.
  • the read-only memory ROM may store executable programs of the controller 101 , the AI processor 102 , the image signal processor 104 and the central processing unit 107 . Each of the above components can perform their own work by loading an executable program.
  • the storage device 106 includes different types, and can be integrated in the above-mentioned first semiconductor chip SOC, or can be integrated in a second semiconductor chip in the electronic device 100 that is different from the first semiconductor chip SOC.
  • the electronic device 100 may further include a communication unit (not shown in the figure), where the communication unit includes but is not limited to a near field communication unit or a mobile communication unit.
  • the near field communication unit performs information exchange with a terminal located outside the mobile terminal by running a short-range wireless communication protocol.
  • the short-range wireless communication protocol may include, but is not limited to, various protocols supported by radio frequency identification technology, Bluetooth communication technology protocols, or infrared communication protocols.
  • the mobile communication unit is connected to the Internet by running the cellular wireless communication protocol and the wireless access network, so as to realize the information exchange between the mobile communication unit and the server supporting various applications in the Internet.
  • the communication unit may be integrated in the above-mentioned first semiconductor chip.
  • the electronic device 100 may optionally include a bus, an input/output port I/O, a memory controller, and the like.
  • the memory controller is used to control the storage device 106 .
  • the bus, the input/output port I/O, and the memory controller, etc. can be integrated with the above-mentioned controller 101 and the AI processor 102 in the above-mentioned first semiconductor chip. It should be understood that, in practical applications, the electronic device 100 may include more or less components than those shown in FIG. 2 a , which are not limited in this embodiment of the present application.
  • FIG. 4 shows a schematic sequence diagram of a screen lighting method 400 provided by an embodiment of the present application.
  • the screen lighting method is applied to the electronic device 100 shown in FIG. 2a.
  • the screen lighting method 400 may include the following steps: Step 401, the ambient light sensor 103 senses whether the light intensity changes. When a change in light intensity is sensed, a first signal is generated based on the change in light intensity. Step 402 , the ambient light sensor 103 provides the first signal to the controller 101 .
  • Step 403 the controller 101 detects whether the first signal is obtained from the ambient light sensor 103 .
  • the controller 101 detects the first signal, it sends a second signal to the camera 105 that triggers the camera 105 to capture an image.
  • the controller 101 does not detect the first signal, it may continue to detect until the first signal is detected.
  • Step 404 the camera 105 captures the first image in response to the second signal.
  • Step 405 the camera 105 provides the captured first image to the image signal processor 104 . It should be noted that the image captured by the camera 105 may include multiple frames.
  • Step 406 the image signal processor 104 processes the first image provided by the camera 105 to generate a second image.
  • Step 407 the image signal processor 104 stores the second image in the storage device 106 .
  • Step 408 after the image signal processor 104 completes the image processing, it sends a signal indicating the completion of the image processing to the controller 101 .
  • Step 409, in response to the signal sent by the image signal processor 104 indicating completion of image processing, the controller 101 sends a signal to the image signal processor 104 instructing it to enter a low power consumption state.
  • Step 410 the controller 101 sends a signal to the AI processor 102 to instruct the AI processor 102 to perform image detection.
  • Step 411 the controller 101 sends a signal to the camera 105 to instruct the camera 105 to enter a low power consumption state.
  • Step 412 the AI processor 102 acquires the second image stored by the image signal processor 104 from the storage device 106 in response to the signal sent by the controller 101 for instructing to perform image detection.
  • Step 413 the AI processor 102 detects the second image.
  • Step 414 the AI processor 102 stores the detection result in the storage device 106 .
  • Step 415 the AI processor 102 sends a signal that the image detection is completed to the controller 101 .
  • Step 416 the controller 101 sends a signal to the AI processor 102 to instruct the AI processor 102 to enter a low power consumption state in response to the signal indicating that the image detection is completed.
  • Step 417 the central processing unit 107 obtains the image detection result from the storage device 106 .
  • the central processing unit 107 controls the screen to turn on or keep the off-screen state based on the image detection result.
  • step 409 , step 410 and step 411 may be performed simultaneously; step 414 and step 415 may be performed simultaneously.
  • Embodiments of the present application may further include more or less steps than those shown in FIG. 4 .
  • the image signal processor 104 can directly send the second image to the AI processor 102, and the AI processor 102 can directly send the detection result to the central processing unit 107.
  • after the controller 101 sends the image signal processor 104 the signal, shown in step 409, instructing it to enter the low power consumption state, the image signal processor 104 can enter the low power consumption state in response to that signal. The step in which the image signal processor 104 enters the low power consumption state is omitted from FIG. 4.
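  The overall flow of method 400 can be sketched as a single sequential function; this is a simplification under stated assumptions (real components react to signals asynchronously, and all names below are hypothetical stand-ins, not the patent's interfaces):

```python
def screen_lighting_flow(sense_light_change, capture, process, detect, decide):
    """Collapse steps 401-417 of method 400 into one sequential sketch."""
    log = []
    if not sense_light_change():           # steps 401-403: wait for first signal
        return log                         # no light change detected yet
    raw = capture()                        # step 404: camera captures first image
    second_image = process(raw)            # steps 405-407: ISP produces second image
    log.append("isp -> low power")         # steps 408-409: ISP reports done, sleeps
    log.append("camera -> low power")      # step 411: camera sleeps
    result = detect(second_image)          # steps 410, 412-414: AI detection
    log.append("ai -> low power")          # steps 415-416: AI processor sleeps
    log.append("screen on" if decide(result) else "screen stays off")  # step 417
    return log

# Toy stand-ins for the components, so the sketch runs end to end.
log = screen_lighting_flow(
    sense_light_change=lambda: True,
    capture=lambda: "raw",
    process=lambda img: img.upper(),
    detect=lambda img: img == "RAW",
    decide=lambda ok: ok,
)
```

  Note how each component is sent to low power as soon as its step completes, which is the power-saving pattern the embodiment emphasizes.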
  • FIG. 5 schematically shows a gesture detection method 500 for lighting up a screen by using a gesture of extending a palm.
  • the detection steps of the gesture detection method 500 include: Step 501 , acquiring an image stream.
  • the above-mentioned image stream may be the image stream captured by the camera 105 provided to the image signal processor 104, processed by the image signal processor 104, and obtained from the image signal processor 104, or may be obtained by the image signal processor 104.
  • the stream is stored in the storage device 106 from which the AI processor 102 can retrieve the image stream.
  • Step 502 Input each frame of images in the acquired image stream into the gesture detection model one by one, to obtain a detection result used to indicate whether there is a gesture of extending the palm in each frame of the image.
  • Step 503 providing the obtained detection result for indicating whether there is a gesture of extending the palm in each frame of image to the central processing unit 107 .
  • the central processor 107 can determine whether to turn on the screen based on the received gesture detection result.
  • the central processing unit 107 is preset with a judgment condition indicating whether to turn on the screen, for example: the palm-extension gesture is presented in the image, and the number of images in which the palm-extension gesture is continuously presented is greater than or equal to a preset number (e.g., three frames).
  • when the detection result provided by the AI processor 102 indicates that the palm-extension gesture is not present in the image, or that the number of images continuously presenting the palm-extension gesture is less than the preset number, the central processing unit 107 can instruct the screen to stay off; when the detection result provided by the AI processor 102 indicates that the number of images continuously presenting the palm-extension gesture is greater than or equal to the preset number, it can instruct the screen to light up and present the screen saver interface.
  • by detecting the number of images continuously presenting the palm-extension gesture, it can be determined whether the user's palm is in a hovering state. In some scenarios the user's palm merely passes in front of the camera 105; the user may not want to light up the screen, yet the palm-extension gesture happens to be captured by the camera 105 and an image presenting it is provided to the AI processor 102, which would trigger the central processing unit 107 to light up the screen and degrade the user experience. By judging whether consecutive multi-frame images show the palm-extension gesture, the probability of this scenario can be reduced, so the screen is lit at the right moment, which helps improve the user experience.
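  The hover check described above reduces to counting a run of consecutive positive frames. A minimal sketch, assuming per-frame booleans from the gesture detection results and using the text's example of three frames as the preset number:

```python
def should_light_screen(frame_results, min_consecutive=3):
    """True once the gesture appears in `min_consecutive` consecutive frames."""
    run = 0
    for has_gesture in frame_results:
        run = run + 1 if has_gesture else 0  # a negative frame resets the run
        if run >= min_consecutive:
            return True                      # palm is hovering: light the screen
    return False                             # brief wave only: keep screen off
```

  A palm that merely passes the camera produces a run shorter than the preset number, so the screen stays off.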
  • the AI processor 102 may also perform step 504 .
  • Step 504 select a frame from at least one frame of the image showing the palm extension gesture, compare the palmprint of the palm extension gesture presented in the image with the pre-stored palmprint information, and provide the comparison result to the central processing unit 107.
  • the central processing unit 107 determines, based on the palmprint comparison result provided by the AI processor 102, whether further screen unlocking is required to enter the main interface.
  • when the palmprint comparison result indicates that the similarity value between the two is greater than or equal to a preset threshold, the central processing unit 107 can instruct the screen to be further unlocked to present the main interface; when the palmprint comparison result indicates that the similarity value between the two is less than the preset threshold, it can instruct that screen unlocking be prohibited.
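  The central processing unit's two-step decision (gesture result lights the screen saver, palmprint comparison additionally unlocks to the main interface) can be sketched as follows. The 0.8 threshold is a hypothetical placeholder for the patent's "preset threshold"; the function name and string outcomes are illustrative only.

```python
def screen_action(gesture_detected: bool, palmprint_similarity: float,
                  threshold: float = 0.8) -> str:
    """Decide the screen outcome from the two detection results."""
    if not gesture_detected:
        return "screen off"                 # judgment condition not met
    if palmprint_similarity >= threshold:
        return "unlock to main interface"   # similarity >= preset threshold
    return "screen saver only"              # lit, but unlocking prohibited
```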
  • the detection flow of the AI processor 102 is described by taking the scene in which the face lights up the screen as shown in FIG. 1c as an example.
  • FIG. 6 schematically shows a face detection method 600 using a face to light up the screen.
  • the detection steps of the face detection method 600 include: Step 601 , acquiring an image stream.
  • the acquisition method of the image stream reference may be made to the acquisition method of step 501 shown in FIG. 5 , which will not be repeated here.
  • Step 602 Input each frame of images in the acquired image stream into the face detection model one by one to obtain a face detection result corresponding to each frame of the image.
  • the face detection result includes at least one of the following: whether a face image is presented in the image, how many face images are presented, and pose information of the presented face image.
  • the face detection model For the specific training method of the face detection model, reference may be made to the relevant description of the face detection model in the electronic device 100 shown in FIG. 2a , which will not be repeated here.
  • Step 603 providing the obtained face detection result to the central processing unit 107 .
  • the central processing unit 107 can determine whether to turn on the screen based on the received face detection result.
  • the central processing unit 107 may be preset with a judgment condition indicating whether to turn on the screen, for example: a face image is presented in the image, the pose of the presented face image falls within a preset angle range, and the number of images in which the face object is continuously presented is greater than or equal to a preset number (e.g., three frames).
  • when the face detection result provided by the AI processor 102 does not satisfy the condition, the central processing unit 107 may instruct the screen to stay off; when the face detection result provided by the AI processor 102 indicates that the number of images continuously presenting a face object is greater than or equal to the preset number and the pose of the presented face image is within the preset angle range, it can instruct the screen to light up and present the screen saver interface.
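  The face judgment condition combines both requirements: a consecutive-frame count and a pose inside a preset angle range. A minimal sketch, assuming per-frame (presence, yaw) pairs and a hypothetical ±30-degree range standing in for the patent's "preset angle range":

```python
def face_lights_screen(frames, min_frames=3, max_abs_yaw=30.0):
    """frames: iterable of (face_present: bool, yaw_degrees: float) per frame.

    True once a face with acceptable pose appears in `min_frames`
    consecutive frames; any failing frame resets the count.
    """
    run = 0
    for face_present, yaw in frames:
        if face_present and abs(yaw) <= max_abs_yaw:
            run += 1
            if run >= min_frames:
                return True      # condition met: light the screen saver
        else:
            run = 0              # no face, or pose out of range: reset
    return False
```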
  • the AI processor 102 may also perform step 604 .
  • Step 604 select a frame from the at least one frame of the image in which the facial object is presented, compare the facial object presented in the image with the pre-stored facial information, and provide the comparison result to the central processing unit 107 .
  • the central processing unit 107 may also determine, based on the facial comparison result sent by the AI processor 102, whether further screen unlocking is required to present the main interface on the screen.
  • when the face comparison result indicates that the two match, the central processing unit 107 may instruct the screen to be further unlocked to present the main interface; when the face comparison result indicates that the two do not match, the central processing unit 107 may instruct that screen unlocking be prohibited.
  • by detecting the number of images continuously presenting a facial object, it can be determined whether the user's face is in a hovering state. In some scenarios the user's face merely passes in front of the camera 105; the user may not want to light up the screen, yet the face happens to be captured by the camera 105 and the image presenting the facial object is provided to the AI processor, which would trigger the central processing unit 107 to light up the screen and degrade the user experience. By judging whether consecutive multi-frame images present a face object, the probability of this scenario can be reduced, so the screen is lit at the right moment, which helps improve the user experience.
  • the electronic device includes corresponding hardware and/or software modules for executing each function.
  • the present application can be implemented in hardware or in the form of a combination of hardware and computer software in conjunction with the algorithm steps of each example described in conjunction with the embodiments disclosed herein. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functionality for each particular application in conjunction with the embodiments, but such implementations should not be considered beyond the scope of this application.
  • the above one or more processors may be divided into functional modules according to the foregoing method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware. It should be noted that, the division of modules in this embodiment is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • FIG. 7 shows a possible schematic diagram of the composition of the apparatus 700 involved in the above embodiment.
  • the apparatus 700 may include: a first signal acquisition module 701, a control module 702, an image signal processing module 703, an AI processing module 704 and a response operation module 705.
  • the first signal acquisition module 701 is used to obtain, by using the ambient light sensor, a first signal indicating the change of light intensity; the control module 702 is used to trigger the camera to collect the first image according to the first signal; the image signal processing module 703 is used to process the first image to generate a second image; the AI processing module 704 is used to detect the second image to obtain the image detection result; and the response operation module 705 is used to perform the response operation according to the image detection result.
  • control module 702 is further configured to: trigger the AI processor to detect the second image according to the first signal to obtain the image detection result.
  • control module 702 is further configured to: after the AI processor finishes detecting the second image, control the AI processor to enter a low power consumption state.
  • control module 702 is further configured to: after processing the first image to generate the second image, control the image signal processor processing the first image to enter a low power consumption state.
  • the response operation includes: controlling the screen corresponding to the apparatus 700 to light up.
  • the device 700 provided in this embodiment is configured to execute the response operation method executed by the electronic device 100, and can achieve the same effect as the above implementation method.
  • Each module corresponding to FIG. 7 above can be implemented in software, hardware or a combination of the two.
  • each module can be implemented in software, corresponding to a processor corresponding to the module in FIG. 2b, and used to drive the corresponding processor to work.
  • each module may include a corresponding processor and a corresponding driver software.
  • the apparatus 700 may comprise at least one processor and a memory; see FIG. 2b for details.
  • at least one processor can call all or part of the computer program stored in the memory to control and manage the actions of the electronic device 100, for example, can be used to support the electronic device 100 to perform the steps performed by the above-mentioned modules.
  • the memory may be used to support the execution of the electronic device 100 by storing program codes and data, and the like.
  • the processor may implement or execute various exemplary logic modules described in connection with the present disclosure, which may be a combination of one or more microprocessors that implement computing functions, such as, but not limited to, the controller shown in Figure 2a 101 , an image signal processor 104 , an AI processor and a central processing unit 107 .
  • the processor may include other programmable logic devices, transistor logic devices, or discrete hardware components in addition to the processors shown in FIG. 2a.
  • the memory may be the storage device 106 shown in Figure 2a.
  • This embodiment further provides a computer-readable storage medium storing computer instructions; when the computer instructions are run on a computer, the computer performs the foregoing related method steps to implement the response operation method of the apparatus 700 in the foregoing embodiments.
  • This embodiment also provides a computer program product; when the computer program product runs on a computer, the computer performs the foregoing related steps to implement the response operation method of the apparatus 700 in the foregoing embodiments.
  • The computer-readable storage medium and computer program product provided in this embodiment are both used to perform the corresponding methods provided above; for the beneficial effects they can achieve, refer to the corresponding methods provided above. Details are not repeated here.
  • The functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium.
  • The readable storage medium stores several instructions that cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods in the various embodiments of the present application.
  • The aforementioned readable storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Sustainable Development (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of this application provide an electronic apparatus and a response operation method. The electronic apparatus includes a controller, an image signal processor, an artificial intelligence (AI) processor, and a central processing unit. The controller is configured to obtain, from an ambient light sensor, a first signal indicating a change in light intensity, and trigger, according to the first signal, a camera to capture a first image. The image signal processor is configured to receive the first image from the camera, process the first image to generate a second image, and provide the second image to the AI processor. The AI processor detects the second image to obtain an image detection result. The central processing unit is configured to perform a response operation according to the image detection result. In this way, the camera works only when triggered by a change in light intensity, saving power of the electronic apparatus, which helps improve battery life while the electronic apparatus remains able to perform response operations.

Description

Electronic Apparatus and Response Operation Method for Electronic Apparatus

Technical Field

Embodiments of this application relate to the field of electronic science and technology, and in particular, to an electronic apparatus and a response operation method for the electronic apparatus.

Background

With advances in electronic science and technology, artificial intelligence (AI) technology has developed rapidly. AI technology has permeated many fields, such as biometric recognition, intelligent robotics, medicine, and autonomous driving, bringing rapid progress to each of them. AI technology generally includes biometric recognition, speech recognition, speech synthesis, image processing, and the like.

Especially in the terminal field, AI technology makes it possible to provide users with services such as smart payment or smart screen lighting; once the screen is lit, the terminal can perform further response operations for the user. Screen lighting is therefore a service that facilitates subsequent response operations or interaction so that the user can continue using the terminal. Smart screen-lighting services applied to terminals generally include touch-based screen lighting and contactless screen lighting. Contactless screen lighting means that the user can light the screen of the electronic device without touching it. In current screen-lighting techniques, to light the screen promptly and accurately when the user triggers it, the electronic device usually needs to run a sensor and a detection model in real time to detect the user's actions; when a user action satisfies a preset condition, the terminal can light the screen. Besides lighting the screen, once the preset condition is satisfied, the terminal may also trigger one or more other response operations for the user, such as a voice response. However, running the sensor and the detection model in real time significantly increases the power consumption of the electronic device and reduces its battery life. How to reduce the power consumption of the electronic device while ensuring that the terminal can respond quickly has therefore become a problem to be solved.
Summary

The electronic apparatus and the response operation method provided in this application can reduce the power consumption of the electronic apparatus while still performing response operations effectively. To achieve this, this application adopts the following technical solutions.

According to a first aspect, an embodiment of this application provides an electronic apparatus, including a controller, an image signal processor, a central processing unit, and an artificial intelligence (AI) processor. The controller is configured to obtain, from an ambient light sensor, a first signal indicating a change in light intensity, and trigger, according to the first signal, a camera to capture a first image. The image signal processor is configured to receive the first image from the camera, process the first image to generate a second image, and provide the second image to the AI processor. The AI processor detects the second image to obtain an image detection result. The central processing unit is configured to perform a response operation according to the image detection result.

Because the controller triggers the camera to capture an image based on the change in light intensity, or the amount of change in ambient illumination intensity, and the electronic apparatus is further triggered to perform a response operation according to the image detection result, the camera works only when triggered by a change in light intensity. This saves power of the electronic apparatus and thus helps improve its battery life while it remains able to perform response operations.
Based on the first aspect, in a possible implementation, the electronic apparatus further includes the ambient light sensor, configured to generate the first signal according to the change in light intensity.

Based on the first aspect, in a possible implementation, the electronic apparatus further includes the camera, configured to capture the first image as triggered by the controller.

Based on the first aspect, in a possible implementation, the ambient light sensor and the camera are integrated to form a single module.

Based on the first aspect, in a possible implementation, the controller is further configured to trigger, according to the first signal, the AI processor to detect the second image; and the AI processor is configured to detect the second image as triggered by the controller, to obtain the image detection result.

Based on the first aspect, in a possible implementation, the AI processor is further configured to send to the controller a second signal indicating completion of detection of the second image; and the controller is further configured to control, in response to the second signal, the AI processor to enter a low power consumption state.

Because the AI processor performs a large amount of computation, it usually consumes considerable power. In this implementation, the AI processor works only when triggered and stays in the low power consumption state the rest of the time, which reduces its power consumption.

Based on the first aspect, in a possible implementation, the image signal processor is further configured to send to the controller a third signal indicating completion of processing of the first image; and the controller is further configured to control, in response to the third signal, the image signal processor to enter a low power consumption state.

In this implementation, the image signal processor works only when triggered and stays in the low power consumption state the rest of the time, which reduces its power consumption.

Based on the first aspect, in a possible implementation, the controller is further configured to control, in response to the third signal, the camera to enter a low power consumption state.

Based on the first aspect, in a possible implementation, the low power consumption state includes at least one of the following: a standby state, a powered-off state, or a sleep state.

Based on the first aspect, in a possible implementation, controlling at least one of the AI processor, the image signal processor, or the camera to enter a low power consumption state includes at least one of the following: turning off the supply voltage of the corresponding component, turning off its clock, reducing its clock frequency, or reducing its supply voltage.

Based on the first aspect, in a possible implementation, the response operation includes controlling a screen corresponding to the electronic apparatus to light up.

Based on the first aspect, in a possible implementation, the image detection performed by the AI processor includes one of the following: facial image detection or gesture image detection.
According to a second aspect, an embodiment of this application provides a response operation method for an electronic apparatus, including: obtaining, by using an ambient light sensor, a first signal indicating a change in light intensity; triggering, according to the first signal, a camera to capture a first image; processing the first image to generate a second image; detecting the second image to obtain an image detection result; and performing a response operation according to the image detection result.

Based on the second aspect, in a possible implementation, detecting the second image to obtain the image detection result includes: triggering, according to the first signal, an AI processor to detect the second image to obtain the image detection result.

Based on the second aspect, in a possible implementation, after the AI processor completes detection of the second image, the method further includes: controlling the AI processor to enter a low power consumption state.

Based on the second aspect, in a possible implementation, after the first image is processed to generate the second image, the method further includes: controlling the image signal processor that processes the first image to enter a low power consumption state.

Based on the second aspect, in a possible implementation, the response operation includes controlling a screen corresponding to the electronic apparatus to light up.

Based on the second aspect, in a possible implementation, the image detection performed by the AI processor includes one of the following: facial image detection or gesture image detection.

According to a third aspect, an embodiment of this application provides a chipset, including a controller, an image signal processor, a central processing unit, and an artificial intelligence (AI) processor. The chipset includes one or more chips.

According to a fourth aspect, an embodiment of this application provides an apparatus, including: a first signal acquisition module, configured to obtain, by using an ambient light sensor, a first signal indicating a change in light intensity; a control module, configured to trigger, according to the first signal, a camera to capture a first image; an image signal processing module, configured to process the first image to generate a second image; an AI processing module, configured to detect the second image to obtain an image detection result; and a response operation module, configured to perform a response operation according to the image detection result.

Based on the fourth aspect, in a possible implementation, the control module is further configured to trigger, according to the first signal, an AI processor to detect the second image to obtain the image detection result.

Based on the fourth aspect, in a possible implementation, the control module is further configured to control, after the AI processor completes detection of the second image, the AI processor to enter a low power consumption state.

Based on the fourth aspect, in a possible implementation, the control module is further configured to control, after the first image is processed to generate the second image, the image signal processor that processes the first image to enter a low power consumption state.

Based on the fourth aspect, in a possible implementation, the response operation includes controlling a screen corresponding to the apparatus to light up.

According to a fifth aspect, an embodiment of this application provides an electronic apparatus, including a memory and at least one processor, where the memory is configured to store a computer program, and the at least one processor is configured to invoke all or part of the computer program stored in the memory to perform the method of the second aspect.

According to a sixth aspect, an embodiment of this application provides a system-on-chip, including at least one processor and an interface circuit, where the interface circuit is configured to obtain a computer program from outside the chip system; and the computer program, when executed by the at least one processor, is used to implement the method of the second aspect.

According to a seventh aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program; the computer program, when executed by at least one processor, is used to implement the method of the second aspect.

According to an eighth aspect, an embodiment of this application provides a computer program product; the computer program product, when executed by at least one processor, is used to implement the method of the second aspect.

It should be understood that the technical solutions of the second to eighth aspects of this application are consistent with those of the first aspect; the beneficial effects achieved by the aspects and their corresponding feasible implementations are similar and are not repeated here.
Brief Description of Drawings

To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings used in the description of the embodiments. Apparently, the accompanying drawings show only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.

FIG. 1a to FIG. 1c are schematic diagrams of application scenarios of embodiments of this application;

FIG. 2a is a schematic diagram of a hardware structure of an electronic apparatus according to an embodiment of this application;

FIG. 2b is a schematic diagram of components integrated in a system-on-chip according to an embodiment of this application;

FIG. 3 is a schematic diagram of a bitmap used to wake components or instruct them to enter a low power consumption state according to an embodiment of this application;

FIG. 4 is a schematic diagram of an interaction flow between components according to an embodiment of this application;

FIG. 5 is a flowchart of a gesture detection method applied to an AI processor according to an embodiment of this application;

FIG. 6 is a flowchart of a face detection method applied to an AI processor according to an embodiment of this application;

FIG. 7 is a schematic diagram of a software structure of an electronic apparatus according to an embodiment of this application.
Detailed Description

The following clearly and completely describes the technical solutions in the embodiments of this application with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.

Terms such as "first" and "second" used herein do not denote any order, quantity, or importance; they are merely used to distinguish different components. Likewise, terms such as "a" or "one" do not denote a quantity limitation but the presence of at least one. Terms such as "coupled" are not limited to direct physical or mechanical connections and may include electrical connections, whether direct or indirect, equivalent to connection in a broad sense.

In the embodiments of this application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" should not be construed as preferable or advantageous over other embodiments or designs; rather, such words are intended to present a related concept in a concrete manner. In the description of the embodiments of this application, unless otherwise stated, "a plurality of" means two or more; for example, a plurality of processors means two or more processors.

The electronic apparatus provided in the embodiments of this application may be an electronic device, or a module, chip, chipset, circuit board, or component integrated in an electronic device. The electronic device may be user equipment (UE), such as various types of portable devices like a mobile phone, a tablet, or a wearable device (for example, a smart watch). The electronic device may be provided with a screen and a camera. When the screen is off, the user can wake and light the screen by making a preset gesture in front of the camera, extending a palm toward the camera, or facing the camera; that is, an image captured by the camera is analyzed or processed by the electronic device, which triggers the screen to light up, as shown in FIG. 1a to FIG. 1c. FIG. 1a shows a user waking and lighting the screen, while the screen is off, by extending a palm toward the camera; FIG. 1b shows a user waking and lighting the screen by making a preset gesture toward the camera; FIG. 1c shows a user waking and lighting the screen by bringing the face close to the camera. It should be noted that lighting the screen in the embodiments of this application may, in some scenarios, mean only lighting the screen to display a screen-saver interface without unlocking into the home screen; in other scenarios, it may mean lighting the screen, unlocking, and entering the home screen. Which scenario applies can be determined by the user's choice and the needs of the actual scenario.

It should be understood that the foregoing uses screen lighting as an example. In practical applications, after an image captured by the camera is analyzed or processed, it may trigger other response operations of the electronic device, such as a voice response. For example, when a preset gesture or face is detected, the electronic device may play a preset piece of audio, such as music. It can be understood that the response operation is an operation performed according to the image detection or analysis result and may be implemented in many different ways; the following embodiments use screen lighting only as an example, and possible implementations are not limited thereto.
Based on the application scenarios shown in FIG. 1a to FIG. 1c, refer to FIG. 2a, which is a schematic diagram of a hardware structure of an electronic apparatus according to an embodiment of this application. The electronic apparatus 100 may specifically be a chip or chipset, a circuit board carrying a chip or chipset, or an electronic device including the circuit board; examples of the electronic device are as described above and are omitted here. The chip, chipset, or circuit board carrying a chip or chipset may work under the drive of necessary software. The following embodiments use the case where the electronic apparatus 100 is the electronic device itself as an example, without limiting the solution. The electronic apparatus 100 shown in FIG. 1a to FIG. 1c includes one or more processors, for example, a controller 101, an AI processor 102, a central processing unit (CPU) 107, and an image signal processor 104. Optionally, the one or more processors may be integrated in one or more chips, and the one or more chips may be regarded as a chipset. In FIG. 2a, the processors, namely the controller 101, the AI processor 102, the central processing unit 107, and the image signal processor 104, are all integrated in the same chip; when the one or more processors are integrated in the same chip, the chip is also called a system-on-chip (SOC), as shown in FIG. 2b, which is a schematic diagram of components integrated in the SOC according to an embodiment of this application. Besides the one or more processors, the electronic apparatus 100 further includes one or more other necessary components, such as a storage device 106, a camera 105, and an ambient light sensor (ALS) 103.

Regarding FIG. 2a, the controller 101 may be a dedicated processor that communicates data with the central processing unit 107 of the electronic device. Specifically, the controller 101 may be an intelligent sensor hub (Sensor Hub), used to collect sensor data and control sensors. The central processing unit 107 may run necessary software programs or plug-ins such as operating system software and application software. The central processing unit 107 is configured to execute a user interface (UI) screen-lighting application so as to perform the screen-lighting operation; the screen-lighting application can provide the screen-lighting services for the application scenarios shown in FIG. 1a to FIG. 1c. The screen-lighting application may run when the electronic device is powered on, or may run based on a user setting (for example, the user starts the application to instruct it to run). Power-on scenarios may include, but are not limited to, booting the device; after booting, the electronic device may be in a high power consumption state or a screen-lit state, or may optionally enter a standby or low power consumption state. In addition, the user may interact with the electronic device to customize the personalized services provided by the screen-lighting application, for example, choosing whether the hand or the face lights the screen, or whether lighting the screen only displays the screen-saver interface or directly enters the home screen. After the screen-lighting application starts, the controller 101 may remain continuously powered on, that is, in the working state, to periodically check whether there is a signal indicating a change in light intensity. When such a signal is detected, the controller triggers the camera 105 shown in FIG. 2a to obtain a first image; the camera 105 provides the first image to the image signal processor 104; the image signal processor 104 processes the first image to generate a second image and provides it to the AI processor 102; the AI processor 102 detects the second image to obtain a detection result and then provides the result to the central processing unit 107. The central processing unit 107 may perform the subsequent screen-lighting procedure based on the image detection result of the AI processor 102.
In the embodiments of this application, the signal indicating screen lighting may be triggered by the ambient light sensor 103 based on a change in light intensity. Generally, the ambient light sensor 103 can sense the intensity of ambient light and usually has high sensitivity, so that even a slight change in illumination intensity can be sensed. For example, whether indoors or outdoors, the light-intensity change caused by slightly moving the electronic device, or by a hand or face extending toward the electronic device, can be captured by the ambient light sensor 103. When the ambient light sensor 103 detects a change in light intensity, it provides a signal indicating the change to the controller 101, so that the controller 101 triggers the other components to perform the screen-lighting procedure. Specifically, the ambient light sensor 103 determines an illumination intensity value at a preset period; when it detects that the illumination intensity values determined in two adjacent periods have changed, it writes information indicating the change into a first register in the SOC, for example, writes "1" into the first register. In addition, the ambient light sensor 103 may determine the amount of change in illumination intensity and write the determined amount of change into a second register in the SOC. In other words, different registers may be used to record, respectively, whether the illumination intensity has changed and the amount of the change.

The controller 101 may read information from the first register or the second register at a preset period. When the information read by the controller 101 from the first register or the second register indicates a change in light intensity, the controller may process the signal indicating the change to generate a signal that controls the camera 105 to capture an image, and send that signal to the camera 105. The controller 101 may generate this signal in either of two implementations. In the first implementation, the controller 101 periodically reads information from the first register; when the information read indicates a change in light intensity (for example, the information is "1"), it generates the signal controlling the camera 105 to capture an image. In the second implementation, the controller 101 periodically reads information from the second register, compares the read information with a preset threshold, and generates the signal controlling the camera 105 to capture an image when it determines that the information read from the second register is greater than or equal to the preset threshold. Which implementation is used may be written into the controller 101 in advance through a software program.
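The two polling strategies described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the register encoding, the threshold value, and the function name are all assumptions for the sake of the example.

```python
# Hypothetical sketch of the controller's periodic register check.
# Strategy "flag" models the first implementation (any-change flag in the
# first register); strategy "delta" models the second (magnitude of change
# in the second register compared against a preset threshold).

FIRST_REG_CHANGED = 1   # assumed: ALS writes 1 when intensity changed
DELTA_THRESHOLD = 50    # assumed threshold for the change magnitude

def should_trigger_camera(mode: str, first_reg: int, second_reg: int) -> bool:
    """Return True when the controller should signal the camera to capture."""
    if mode == "flag":
        return first_reg == FIRST_REG_CHANGED
    if mode == "delta":
        return second_reg >= DELTA_THRESHOLD
    raise ValueError(f"unknown mode: {mode}")
```

Which strategy runs would, as the text notes, be fixed in advance by the controller's software.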
In the electronic apparatus shown in the embodiments of this application, the controller triggers the camera to switch from the low power consumption state to the working state based on the change in light intensity or the amount of change in ambient illumination intensity, so that the electronic apparatus performs subsequent image detection and is further triggered to perform a response operation according to the image detection result. The camera thus works only when triggered by a change in light intensity, which saves power and helps improve the battery life of the electronic apparatus while it remains able to respond. It should be understood that, in the embodiments of this application, triggering a component may mean controlling or instructing the component to start working, for example, switching from the low power consumption state to the working state, including but not limited to turning on the component's supply voltage, turning on its clock, raising the clock frequency, and raising the supply voltage. Correspondingly, switching from the working state to the low power consumption state may include, but is not limited to, turning off the component's supply voltage, turning off its clock, lowering the clock frequency, and lowering the supply voltage.

For example, the controller 101 may further control the camera 105 to enter the low power consumption state when the camera 105 finishes working. Thus, by triggering the camera 105 to capture images based on the change in light intensity, that is, the amount of change in illumination intensity, the camera 105 can stay in the low power consumption state when not working, saving power of the electronic device and helping improve its battery life. In the low power consumption state, the camera 105 does not capture images; correspondingly, in the working state the camera 105 works normally, that is, captures images. The other components in the following embodiments likewise have two states: the low power consumption state and the normal working state. The low power consumption state in the embodiments of this application may include, but is not limited to, a standby state, a powered-off state, or a sleep state. Switching from the low power consumption state to the working state can be understood as the triggering or waking mentioned in the foregoing embodiments; switching from the working state to the low power consumption state can be understood as stopping normal work. Furthermore, by providing the ambient light sensor 103 and using it to sense changes in illumination intensity to trigger the controller 101 to perform the subsequent screen-lighting procedure, the user can trigger that procedure without touching the electronic device, improving user experience.

The controller 101 may also communicate with the image signal processor 104 and the AI processor 102 respectively, to control the image signal processor 104 to enter the low power consumption state, wake the AI processor 102, control the AI processor 102 to enter the low power consumption state, and so on. As mentioned above, waking here may include, but is not limited to, waking from the powered-off state to the powered-on working state, from the sleep state to the working state, or from the standby state to the working state. In practice, the controller 101 may maintain a bitmap, which may be stored in a register, in which every two bits represent one component. For example, when a component needs to enter the powered-off state, the bits corresponding to that component in the bitmap may be set to 00, triggering the component to power off; when a component needs to enter the working state, its bits may be set to 11, triggering the component to work; when a component needs to enter the sleep state, its bits may be set to 10, triggering the component to enter the sleep state. FIG. 3 schematically shows such a bitmap. In FIG. 3, the first and second bits from the left represent the AI processor 102: when these bits are "00", the AI processor 102 is instructed to power off; when they are "11", it is instructed to power on and enter the working state. The third and fourth bits from the left represent the image signal processor 104: when these bits are "00", the image signal processor 104 is instructed to power off. In addition, when the electronic apparatus 100 shown in the embodiments of this application also includes components such as the camera 105, the controller 101 may also communicate with the camera 105 to trigger it to enter the working state or the sleep state. Specifically, in FIG. 3, the fifth and sixth bits from the left may represent the camera 105: when these bits are "10", the camera 105 is instructed to enter the sleep state.
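The two-bit-per-component power-state bitmap described above can be modeled as below. The state encodings (00 powered off, 10 sleep, 11 working) follow the text, while the concrete bit offsets and helper names are illustrative assumptions.

```python
# Illustrative model of the controller's power-state bitmap: each component
# occupies a 2-bit field (00 = powered off, 10 = sleep, 11 = working).

OFF, SLEEP, WORK = 0b00, 0b10, 0b11
COMPONENT_OFFSET = {"ai": 4, "isp": 2, "camera": 0}  # assumed shifts from LSB

def set_state(bitmap: int, component: str, state: int) -> int:
    """Return a new bitmap with the component's 2-bit field set to state."""
    shift = COMPONENT_OFFSET[component]
    return (bitmap & ~(0b11 << shift)) | (state << shift)

def get_state(bitmap: int, component: str) -> int:
    """Read the component's 2-bit field out of the bitmap."""
    return (bitmap >> COMPONENT_OFFSET[component]) & 0b11
```

Writing the field then amounts to one masked register update, which matches the text's point that the controller's own workload stays tiny.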
It should be noted that in the embodiments of this application, because each component controlled by the controller 101 has more than two power states (this embodiment schematically shows three: the working state, the powered-off state, and the sleep state), two bits are needed to indicate the power state of one component. In other implementations, if each component controlled by the controller 101 has only two power modes (for example, the working state and the powered-off state), one bit per component suffices in the bitmap. Further, if each component has more than four power states, three bits per component may be used in the bitmap. This is not limited in the embodiments of this application.

In summary, because the controller 101 is only used to detect the signal indicating the light change, control other components to enter the working state based on that signal, and control other components to enter the low power consumption state when they finish working, it does not need to perform heavy computation and therefore usually has low power consumption; even if it stays in the working state for a long time, its effect on the overall power consumption of the electronic device is very small and can be approximately ignored. Moreover, the controller 101 powers each component on when it needs to work and powers it off or puts it into the low-power mode when it finishes, avoiding the excessive power consumption caused by components running continuously, thereby improving the battery life of the electronic device.

The image signal processor 104 may perform image processing on images obtained by the camera 105. The image processing may include, but is not limited to, white balance correction, gamma correction, color correction, lens correction, or black level compensation. In a specific implementation, the image signal processor 104 may obtain an image from the camera 105, process the obtained image, and provide the processed image to the AI processor 102. Moreover, after providing the processed image to the AI processor 102, the image signal processor 104 may send a signal to the controller 101 indicating that image processing is complete; upon receiving this signal, the controller 101 may control the image signal processor 104 to enter the low power consumption state. In other words, the image signal processor 104 wakes to process images provided by the camera 105 and, after processing is complete, is instructed by the controller 101 to enter the low power consumption state. In this way, the image signal processor 104 is in the working state only while working and in the low power consumption state the rest of the time, which further reduces its power consumption and hence that of the electronic device, improving battery life.
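As a small illustration of one of the ISP steps named above, a simple gamma correction on a normalized pixel intensity might look like this. The gamma value of 2.2 is a common display default, not a value specified by this document.

```python
# Minimal sketch of gamma correction on a single normalized pixel value.
def gamma_correct(pixel: float, gamma: float = 2.2) -> float:
    """pixel is a normalized intensity in [0, 1]; returns the corrected value."""
    return pixel ** (1.0 / gamma)
```

A real ISP applies this per channel across the whole frame, usually via a lookup table in hardware rather than per-pixel exponentiation.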
The AI processor 102 may include a dedicated processor such as a neural-network processing unit (NPU). After receiving a power-on instruction from the controller 101, the AI processor 102 detects the obtained image to determine whether the object presented in the obtained image is a target object. The AI processor 102 may send the detection result to the central processing unit 107. Optionally, before receiving the detection result, the central processing unit 107 may be in the low power consumption state; the AI processor 102 sends a control signal to the central processing unit 107 before or while sending the detection result, to wake the central processing unit 107, that is, to bring it into the working state. The central processing unit 107 may determine whether to light the screen based on the detection result. In addition, after image detection is complete, the AI processor 102 may send a signal to the controller 101 indicating that image detection is complete, so that the controller 101, in response to that signal, controls the AI processor to enter the low power consumption state. The AI processor 102 may run a target detection model. The target detection model is obtained in advance by training a neural network (for example, a standard neural network or a convolutional neural network) with training samples; that is, when running, the AI processor 102 performs only the inference of the target detection model.
The target detection model is described in detail below through several scenarios.

In a first scenario of the embodiments of this application, the user can light the screen with a gesture. The target detection model may then be a gesture detection model, which may specifically include a hand detection model and a gesture classification model. Here, gestures may include, but are not limited to, the open-palm gesture shown in FIG. 1a, the scissors gesture shown in FIG. 1b, or a fist gesture. Taking the open-palm gesture as an example, a hand recognition model may be obtained by supervised training of a first neural network with a large number of positive sample images showing hands and negative sample images not showing hands; the hand recognition model is used to detect whether a hand object appears in an image and the coordinate region of the hand object in the image. A gesture classification model may be obtained by supervised training of a second neural network with a large number of positive sample images showing the open-palm gesture and negative sample images showing other gestures. In a specific application, the image obtained by the camera 105 is first passed to the hand detection model to detect whether a hand object appears in the image. When the hand detection model detects a hand object in the image, it may output information indicating that a hand object is present and its location region in the image. Based on this output, the AI processor may crop the hand image out of the image and input it to the gesture classification model, which detects whether the gesture presented in the hand image is the open-palm gesture and, based on the detection result, outputs information indicating whether it is the open-palm gesture. In this scenario, the user usually cannot customize gestures and can only light the screen with gestures that the gesture detection model can detect. Moreover, because it cannot be detected whether the user is the device owner, to improve the security of the electronic device and protect the owner's privacy, lighting the screen in this scenario may mean waking the screen to display the screen-saver interface without unlocking into the home screen.

In a second scenario of the embodiments of this application, the user can light the screen with the face. In this case, the target detection model may be a face detection model, which may be used to detect whether a face object appears in an image, the pose information of the presented face object, how many face objects are present, and so on. In a specific implementation, the face detection model may be obtained by supervised training of a neural network with a large number of sample images showing faces and sample images not showing faces. The sample images showing faces may include, but are not limited to, sample images showing one face, sample images showing multiple faces, and sample images showing faces in various poses. In this scenario, because the face detection model is trained with face images of multiple users as positive sample images, it can usually only detect whether a face image appears in an image and cannot detect whether the presented object is the device owner. To improve the security of the electronic device and protect the owner's privacy, lighting the screen in this scenario may mean waking the screen to display the screen-saver interface without unlocking into the home screen.

In a possible implementation, multiple target detection models may be provided in the AI processor 102, for example, one for detecting gestures and another for detecting faces, so that the user can choose which way to light the screen. At run time, however, one target detection model runs: when the user lights the screen with a gesture, the AI processor 102 runs the gesture detection model; when the user lights the screen with the face, the AI processor 102 runs the face detection model.
To further improve user experience, the screen may be lit and unlocked to enter the home screen. Because the screen needs to be unlocked, for security the identity of the device user usually needs to be verified. In this case, the AI processor 102 may also perform feature extraction and comparison.

In the scenario where a gesture lights the screen, fingerprint or palm-print comparison may be used to verify the owner's identity. Specifically, when the user selects the service of lighting the screen with a gesture and unlocking into the home screen, the screen-lighting application may drive the central processing unit to collect the owner's fingerprint or palm-print information through the camera or an infrared sensor device as a template for subsequent comparison. When the user triggers lighting and unlocking the screen by fingerprint or palm print, the AI processor 102 may first use the gesture detection model to detect whether a predetermined gesture appears in the image. Assuming the predetermined gesture is the open-palm gesture, when the open-palm gesture is detected in the image, the palm print presented in the captured image may further be compared with the pre-stored palm-print information, that is, the template, to determine whether they match.

In the scenario where the face lights the screen, face comparison may be used to verify the owner's identity. Specifically, when the user selects the service of lighting the screen with the face and unlocking into the home screen, the screen-lighting application may drive the central processing unit to obtain the owner's facial information collected by the camera or a sensor device as a template for subsequent comparison. When the user triggers lighting and unlocking the screen by face, the AI processor 102 may first use the face detection model to detect whether a face object appears in the image, how many face objects appear, and the pose information of the face objects in the image. When a face object is detected in the image, the face object presented in the captured image may further be compared with the pre-stored facial information, that is, the template, to determine whether they match.

In a possible implementation, in the scenario where a gesture lights the screen, the gesture used to light the screen may also be user-defined. Specifically, when the user sets a custom gesture for lighting the screen, the screen-lighting application may drive the central processing unit 107 to collect, through the camera 105, a gesture image of the user-defined gesture as a template for subsequent comparison. When the user triggers screen lighting, the AI processor 102 may compare the object presented in the obtained image with the pre-stored gesture image, that is, the template, to determine whether they match.
Optionally, after receiving the signal indicating the change in light intensity from the ambient light sensor 103, the controller 101 may generate a signal triggering the camera 105 to capture an image and provide it to the camera 105. After receiving the signal from the controller 101, the camera 105 may obtain an image by shooting and then provide the obtained image to the image signal processor 104. When the controller 101 receives from the image signal processor 104 the signal indicating that image processing is complete, it may also control the camera 105 to enter the low power consumption state.

In a possible implementation, to save layout area, the ambient light sensor 103 may be disposed inside the camera 105, or the two may be integrated into one module; that is, the ambient light sensor 103 may optionally be disposed in the camera 105, which is not limited in this embodiment.

In the embodiments of this application, the storage device 106 may include a random access memory (RAM). The RAM may include volatile memory (such as SRAM, DRAM, DDR (Double Data Rate SDRAM), or SDRAM) and non-volatile memory. The RAM may store the data needed for the AI processor 102 to run (such as pre-saved user facial information, user gesture images, and user palm-print information) and parameters, intermediate data produced while the AI processor 102 runs, and output results produced after the AI processor 102 runs. In addition, the image signal processor 104 may also store processed images in the RAM. When running, the AI processor 102 may obtain from the RAM the processed images, the pre-saved user facial information, user gesture images, user palm-print information, and so on. The central processing unit 107 may obtain the output result of the AI processor 102 from the RAM. The storage device 106 may also include a read-only memory (ROM), which may store the executable programs of the controller 101, the AI processor 102, the image signal processor 104, and the central processing unit 107; these components can load the executable programs to perform their respective work.

Optionally, the storage device 106 includes different types and may be integrated in the aforementioned first semiconductor chip SOC, or in a second semiconductor chip of the electronic apparatus 100 different from the first semiconductor chip SOC.

In this embodiment, the electronic apparatus 100 may further include a communication unit (not shown), including but not limited to a near-field communication unit or a mobile communication unit. The near-field communication unit exchanges information with a terminal, located outside the mobile terminal, used to access the Internet, by running a short-range wireless communication protocol; the short-range wireless communication protocol may include, but is not limited to, protocols supported by radio frequency identification technology, the Bluetooth communication protocol, or the infrared communication protocol. The mobile communication unit accesses the Internet via a radio access network by running a cellular wireless communication protocol, so as to exchange information with servers in the Internet that support various applications. The communication unit may be integrated in the first semiconductor chip. In addition, the electronic apparatus 100 may also optionally include a bus, input/output ports I/O, a storage controller, and so on. The storage controller is used to control the storage device 106. The bus, the input/output ports I/O, and the storage controller may all be integrated, together with the controller 101 and the AI processor 102, in the first semiconductor chip. It should be understood that, in practical applications, the electronic apparatus 100 may include more or fewer components than shown in FIG. 2a, which is not limited in the embodiments of this application.
Continue to refer to FIG. 4, which shows a schematic sequence diagram of a screen-lighting method 400 according to an embodiment of this application. The screen-lighting method is applied to the electronic apparatus 100 shown in FIG. 2a and may include the following steps. Step 401: the ambient light sensor 103 senses whether the illumination intensity changes; when a change in illumination intensity is sensed, it generates a first signal based on the change in light intensity. Step 402: the ambient light sensor 103 provides the first signal to the controller 101.

Step 403: the controller 101 checks whether the first signal has been obtained from the ambient light sensor 103. When the controller 101 detects the first signal, it sends to the camera 105 a second signal that triggers the camera 105 to capture an image; when the controller 101 does not detect the first signal, it may keep checking until the first signal is detected. Step 404: the camera 105 captures a first image in response to the second signal. Step 405: the camera 105 provides the captured first image to the image signal processor 104. It should be noted that the image captured by the camera 105 may include multiple frames.

Step 406: the image signal processor 104 processes the first image provided by the camera 105 to generate a second image. Step 407: the image signal processor 104 stores the second image in the storage device 106. Step 408: after image processing is complete, the image signal processor 104 sends to the controller 101 a signal indicating that image processing is complete.

Step 409: in response to the signal sent by the image signal processor 104 indicating that its image processing is complete, the controller 101 sends to the image signal processor 104 a signal instructing it to enter the low power consumption state. Step 410: the controller 101 sends to the AI processor 102 a signal instructing it to perform image detection. Step 411: the controller 101 sends to the camera 105 a signal instructing it to enter the low power consumption state.

Step 412: in response to the signal sent by the controller 101 instructing image detection, the AI processor 102 obtains from the storage device 106 the second image stored by the image signal processor 104. Step 413: the AI processor 102 detects the second image. Step 414: the AI processor 102 stores the detection result in the storage device 106. Step 415: the AI processor 102 sends to the controller 101 a signal indicating that image detection is complete.

Step 416: in response to the signal indicating that image detection is complete, the controller 101 sends to the AI processor 102 a signal instructing it to enter the low power consumption state.

Step 417: the central processing unit 107 obtains the image detection result from the storage device 106. Step 418: based on the image detection result, the central processing unit 107 controls the screen to light up or remain off.

It should be noted that, for the specific working manner of each component in the steps shown in the embodiments of this application, reference may be made to the detailed descriptions of the components in the embodiment shown in FIG. 2a; details are not repeated here.

It should be understood that the steps or operations of the screen-lighting method 400 shown in FIG. 4 are merely examples; the embodiments of this application may also perform other operations or variations of the operations in FIG. 4. Moreover, this application does not limit the order of the steps. For example, among the steps shown in FIG. 4, steps 409, 410, and 411 may be performed simultaneously, and steps 414 and 415 may be performed simultaneously. The embodiments of this application may also include more or fewer steps than shown in FIG. 4. For example, when both the central processing unit 107 and the AI processor 102 are provided with temporary storage such as registers, steps 407, 412, 414, and 417 may be unnecessary: the image signal processor 104 may send the second image directly to the AI processor 102, and the AI processor 102 may send the detection result directly to the central processing unit 107. As another example, after the controller 101 sends to the image signal processor 104 the signal in step 409 instructing it to enter the low power consumption state, the image signal processor 104 may enter the low power consumption state based on that signal; the step in which the image signal processor 104 enters the low power consumption state is omitted from FIG. 4.
Based on the sequence of the screen-lighting method 400 shown in FIG. 4, the specific processing of the AI processor 102 and the subsequent procedure performed by the central processing unit 107 based on the detection result of the AI processor 102 are described in detail below through specific scenarios.

Taking the scenario of lighting the screen with the open-palm gesture shown in FIG. 1a as an example, the detection flow of the AI processor 102 is described. Refer to FIG. 5, which schematically shows a gesture detection method 500 for lighting the screen with the open-palm gesture. The detection steps of the gesture detection method 500 include: Step 501, obtaining an image stream. The image stream may be provided by the camera 105 to the image signal processor 104 and obtained from the image signal processor 104 after processing; alternatively, the image signal processor 104 may store the image stream in the storage device 106, from which the AI processor 102 obtains it.

Step 502: input each frame of the obtained image stream, one by one, into the gesture detection model to obtain a detection result indicating whether the open-palm gesture appears in each frame. For the specific training method of the gesture detection model, refer to the relevant description of the gesture detection model in the electronic apparatus 100 shown in FIG. 2a; details are not repeated here.

Step 503: provide the detection results indicating whether the open-palm gesture appears in each frame to the central processing unit 107. The central processing unit 107 may then determine whether to light the screen based on the received gesture detection results. Specifically, a decision condition indicating whether to light the screen is preset in the central processing unit 107, for example: the open-palm gesture appears in the image, and the number of consecutive images presenting the open-palm gesture is greater than or equal to a preset number (for example, three frames). When the detection results provided by the AI processor 102 indicate that no open-palm gesture appears in the image, or that the number of consecutive images presenting the open-palm gesture is less than the preset number, keeping the screen off may be indicated; when the detection results provided by the AI processor 102 indicate that the number of consecutive images presenting the open-palm gesture is greater than or equal to the preset number, lighting the screen to present the screen-saver interface may be indicated.

It should be noted that by detecting the number of consecutive images presenting the open-palm gesture, it can be determined whether the user's palm is hovering. In some scenarios, the user's palm merely passes in front of the camera 105, and the user may not intend to light the screen, yet the open-palm gesture happens to be captured by the camera 105 and the images presenting it are provided to the AI processor 102, triggering the central processing unit 107 to light the screen and degrading user experience. By checking whether multiple consecutive frames present the open-palm gesture, the probability of this scenario can be reduced, providing a better moment to light the screen and helping improve user experience.
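The hover check described above, lighting the screen only when at least a preset number of consecutive frames contain the open-palm gesture, can be sketched as follows. The per-frame boolean inputs and the default of three frames are assumptions consistent with the example in the text.

```python
# Minimal sketch of the consecutive-frame hover check for the palm gesture.
def should_light_screen(frame_results, min_consecutive: int = 3) -> bool:
    """frame_results: per-frame booleans from the gesture detection model."""
    run = 0
    for has_palm in frame_results:
        run = run + 1 if has_palm else 0   # reset the streak on a miss
        if run >= min_consecutive:
            return True
    return False
```

A palm that merely sweeps past the camera produces a short streak and is rejected, which is exactly the false-trigger case the text describes.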
In some scenarios, the AI processor 102 may further perform step 504. Step 504: select one frame from at least one frame presenting the open-palm gesture, compare the palm print of the presented open-palm gesture with the pre-stored palm-print information, and provide the comparison result to the central processing unit 107.

In addition, the central processing unit 107 determines, based on the palm-print comparison result provided by the AI processor 102, whether the screen further needs to be unlocked to enter the home screen. When the palm-print comparison result indicates that the similarity between the two is greater than or equal to a preset threshold, further unlocking the screen to present the home screen may be indicated; when the palm-print comparison result indicates that the similarity is less than the preset threshold, prohibiting screen unlocking may be indicated.
Next, taking the scenario of lighting the screen with the face shown in FIG. 1c as an example, the detection flow of the AI processor 102 is described. Refer to FIG. 6, which schematically shows a face detection method 600 for lighting the screen with the face. The detection steps of the face detection method 600 include: Step 601, obtaining an image stream; for the way the image stream is obtained, refer to step 501 shown in FIG. 5, which is not repeated here.

Step 602: input each frame of the obtained image stream, one by one, into the face detection model to obtain a face detection result corresponding to each frame. The face detection result includes at least one of the following: whether a face image appears in the image, how many face images appear, and the pose information of the presented face images. For the specific training method of the face detection model, refer to the relevant description of the face detection model in the electronic apparatus 100 shown in FIG. 2a; details are not repeated here.

Step 603: provide the obtained face detection results to the central processing unit 107. The central processing unit 107 may then determine whether to light the screen based on the received face detection results. Specifically, a decision condition indicating whether to light the screen may be preset in the central processing unit 107, for example: a face image appears in the image, the pose of the presented face image falls within a preset angle range, and the number of consecutive images presenting a face object is greater than or equal to a preset number (for example, three frames). When the face detection results provided by the AI processor 102 indicate that no face image appears in the image, or that the number of consecutive images presenting a face object is less than the preset number, the central processing unit 107 may indicate keeping the screen off; when the face detection results provided by the AI processor 102 indicate that the number of consecutive images presenting a face object is greater than or equal to the preset number and the pose of the presented face image is within the preset angle range, lighting the screen to present the screen-saver interface may be indicated.
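The compound decision condition of step 603, a face present in enough consecutive frames with its pose inside a preset angle range, might be sketched as below. The yaw-based pose representation and the concrete angle range are illustrative assumptions; the document only requires "a preset angle range".

```python
# Hedged sketch of the face screen-lighting decision: a frame only counts
# toward the consecutive streak when a face is present AND its pose (modeled
# here as a yaw angle in degrees) lies within the preset range.

POSE_RANGE = (-30.0, 30.0)   # assumed acceptable yaw range

def face_screen_decision(frames, min_consecutive: int = 3,
                         pose_range=POSE_RANGE) -> bool:
    """frames: list of (has_face: bool, yaw: float) per-frame results."""
    lo, hi = pose_range
    run = 0
    for has_face, yaw in frames:
        ok = has_face and lo <= yaw <= hi
        run = run + 1 if ok else 0
        if run >= min_consecutive:
            return True
    return False
```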
In some scenarios, the AI processor 102 may further perform step 604. Step 604: select one frame from at least one frame presenting a face object, compare the face object presented in the image with the pre-stored facial information, and provide the comparison result to the central processing unit 107.

In addition, the central processing unit 107 may further determine, based on the face comparison result sent by the AI processor 102, whether the screen further needs to be unlocked to present the home screen. When the face comparison result indicates that the similarity between the two is greater than or equal to a preset threshold, the central processing unit 107 may indicate further unlocking the screen to present the home screen; when the face comparison result indicates that the similarity is less than the preset threshold, the central processing unit 107 may indicate prohibiting screen unlocking.

It should be noted that by detecting the number of consecutive images presenting a facial contour, it can be determined whether the user's face is hovering. In some scenarios, the user's face merely passes in front of the camera 105, and the user may not intend to light the screen, yet the face happens to be captured by the camera 105 and the images presenting the face object are provided to the AI processor, triggering the central processing unit 107 to light the screen and degrading user experience. By checking whether multiple consecutive frames present a face object, the probability of this scenario can be reduced, providing a better moment to light the screen and helping improve user experience.
It can be understood that, to implement the foregoing functions, the electronic apparatus includes corresponding hardware and/or software modules for performing each function. With reference to the algorithm steps of the examples described in the embodiments disclosed herein, this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application in combination with the embodiments, but such implementations should not be considered beyond the scope of this application.

In this embodiment, the foregoing one or more processors may be divided into functional modules according to the foregoing method examples; for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic and is merely a logical function division; there may be other division manners in actual implementation.
In the case where functional modules are divided corresponding to functions, FIG. 7 shows a possible schematic composition diagram of the apparatus 700 involved in the foregoing embodiments. As shown in FIG. 7, the apparatus 700 may include a first signal acquisition module 701, a control module 702, an image signal processing module 703, an AI processing module 704, and a response operation module 705.

The first signal acquisition module 701 is configured to obtain, by using the ambient light sensor, a first signal indicating a change in light intensity; the control module 702 is configured to trigger, according to the first signal, the camera to capture a first image; the image signal processing module 703 is configured to process the first image to generate a second image; the AI processing module 704 is configured to detect the second image to obtain the image detection result; and the response operation module 705 is configured to perform a response operation according to the image detection result.

In a possible implementation, the control module 702 is further configured to trigger, according to the first signal, the AI processor to detect the second image to obtain the image detection result.

In a possible implementation, the control module 702 is further configured to control, after the AI processor completes detection of the second image, the AI processor to enter a low power consumption state.

In a possible implementation, the control module 702 is further configured to control, after the first image is processed to generate the second image, the image signal processor that processes the first image to enter a low power consumption state.

In a possible implementation, the response operation includes controlling a screen corresponding to the apparatus 700 to light up.

The apparatus 700 provided in this embodiment is configured to perform the response operation method performed by the electronic apparatus 100 and can achieve the same effects as the foregoing implementation. Each of the modules corresponding to FIG. 7 may be implemented in software, hardware, or a combination of the two. For example, each module may be implemented in software, corresponding to one of the processors in FIG. 2b and used to drive that processor to work. Alternatively, each module may include both the corresponding processor and its driver software.
In the case of integrated units, the apparatus 700 may include at least one processor and a memory; refer to FIG. 2b. The at least one processor may invoke all or part of the computer program stored in the memory to control and manage the actions of the electronic apparatus 100, for example, to support the electronic apparatus 100 in performing the steps performed by the foregoing modules. The memory may support the operation of the electronic apparatus 100 by storing program code, data, and the like. The processor may implement or execute the various exemplary logical modules described in the disclosure of this application; it may be a combination of one or more microprocessors implementing computing functions, including but not limited to the controller 101, the image signal processor 104, the AI processor, and the central processing unit 107 shown in FIG. 2a. In addition to the processors shown in FIG. 2a, the processor may also include other programmable logic devices, transistor logic devices, or discrete hardware components. The memory may be the storage device 106 shown in FIG. 2a.

This embodiment further provides a computer-readable storage medium storing computer instructions; when the computer instructions are run on a computer, the computer is caused to perform the foregoing related method steps to implement the response operation method of the apparatus 700 in the foregoing embodiments.

This embodiment further provides a computer program product; when the computer program product runs on a computer, the computer is caused to perform the foregoing related steps to implement the response operation method of the apparatus 700 in the foregoing embodiments.

The computer-readable storage medium and the computer program product provided in this embodiment are both used to perform the corresponding methods provided above; therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods provided above, which are not repeated here.

From the description of the foregoing implementations, a person skilled in the art can understand that, for convenience and brevity of description, the division of the foregoing functional modules is merely used as an example. In practical applications, the foregoing functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above.

In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods in the embodiments of this application. The foregoing readable storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (17)

  1. An electronic apparatus, wherein the electronic apparatus comprises a controller, an image signal processor, a central processing unit, and an artificial intelligence (AI) processor;
    the controller is configured to obtain, from an ambient light sensor, a first signal indicating a change in light intensity, and trigger, according to the first signal, a camera to capture a first image;
    the image signal processor is configured to receive the first image from the camera, process the first image to generate a second image, and provide the second image to the AI processor;
    the AI processor detects the second image to obtain an image detection result; and
    the central processing unit is configured to perform a response operation according to the image detection result.
  2. The electronic apparatus according to claim 1, wherein the electronic apparatus further comprises:
    the ambient light sensor, configured to generate the first signal according to the change in light intensity.
  3. The electronic apparatus according to claim 1 or 2, wherein the electronic apparatus further comprises:
    the camera, configured to capture the first image as triggered by the controller.
  4. The electronic apparatus according to any one of claims 1 to 3, wherein the controller is further configured to trigger, according to the first signal, the AI processor to detect the second image; and
    the AI processor is configured to detect the second image as triggered by the controller, to obtain the image detection result.
  5. The apparatus according to claim 4, wherein the AI processor is further configured to send to the controller a second signal indicating completion of detection of the second image; and
    the controller is further configured to control, in response to the second signal, the AI processor to enter a low power consumption state.
  6. The apparatus according to any one of claims 1 to 5, wherein the image signal processor is further configured to send to the controller a third signal indicating completion of processing of the first image; and
    the controller is further configured to control, in response to the third signal, the image signal processor to enter a low power consumption state.
  7. The apparatus according to claim 6, wherein
    the controller is further configured to control, in response to the third signal, the camera to enter a low power consumption state.
  8. The apparatus according to any one of claims 1 to 7, wherein the response operation comprises: controlling a screen corresponding to the electronic apparatus to light up.
  9. A response operation method for an electronic apparatus, wherein the method comprises:
    obtaining, by using an ambient light sensor, a first signal indicating a change in light intensity;
    triggering, according to the first signal, a camera to capture a first image;
    processing the first image to generate a second image;
    detecting the second image to obtain an image detection result; and
    performing a response operation according to the image detection result.
  10. The method according to claim 9, wherein the detecting the second image to obtain the image detection result comprises:
    triggering, according to the first signal, an AI processor to detect the second image to obtain the image detection result.
  11. The method according to claim 10, further comprising: after the AI processor completes detection of the second image, controlling the AI processor to enter a low power consumption state.
  12. The method according to any one of claims 9 to 11, further comprising: after the first image is processed to generate the second image, controlling the image signal processor that processes the first image to enter a low power consumption state.
  13. The method according to any one of claims 9 to 12, wherein the response operation comprises: controlling a screen corresponding to the electronic apparatus to light up.
  14. An electronic apparatus, comprising a memory and at least one processor, wherein the memory is configured to store a computer program, and the at least one processor is configured to invoke all or part of the computer program stored in the memory to perform the method according to any one of claims 9 to 13.
  15. A system-on-chip, comprising at least one processor and an interface circuit, wherein the interface circuit is configured to obtain a computer program from outside the chip system; and the computer program, when executed by the at least one processor, is used to implement the method according to any one of claims 9 to 13.
  16. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and the computer program, when executed by at least one processor, is used to implement the method according to any one of claims 9 to 13.
  17. A computer program product, wherein the computer program product, when executed by at least one processor, is used to implement the method according to any one of claims 9 to 13.
PCT/CN2020/112588 2020-08-31 2020-08-31 Electronic apparatus and response operation method for electronic apparatus WO2022041220A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP20950889.4A EP4195628A4 (en) 2020-08-31 2020-08-31 ELECTRONIC DEVICE AND ELECTRONIC DEVICE RESPONSE OPERATING METHOD
PCT/CN2020/112588 WO2022041220A1 (zh) 2020-08-31 2020-08-31 电子装置和电子装置的响应操作方法
CN202080011167.1A CN114531947A (zh) 2020-08-31 2020-08-31 电子装置和电子装置的响应操作方法
US18/176,261 US20230205296A1 (en) 2020-08-31 2023-02-28 Electronic apparatus and response operation method for electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/112588 WO2022041220A1 (zh) 2020-08-31 2020-08-31 电子装置和电子装置的响应操作方法

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/176,261 Continuation US20230205296A1 (en) 2020-08-31 2023-02-28 Electronic apparatus and response operation method for electronic apparatus

Publications (1)

Publication Number Publication Date
WO2022041220A1 true WO2022041220A1 (zh) 2022-03-03

Family

ID=80354306

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/112588 WO2022041220A1 (zh) 2020-08-31 2020-08-31 电子装置和电子装置的响应操作方法

Country Status (4)

Country Link
US (1) US20230205296A1 (zh)
EP (1) EP4195628A4 (zh)
CN (1) CN114531947A (zh)
WO (1) WO2022041220A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114727082A (zh) * 2022-03-10 2022-07-08 杭州中天微系统有限公司 图像处理装置、图像信号处理器、图像处理方法和介质

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN118190152A (zh) * 2022-12-14 2024-06-14 荣耀终端有限公司 一种环境光传感器数据获取方法、装置和电子设备

Citations (3)

Publication number Priority date Publication date Assignee Title
US20140118246A1 (en) * 2012-11-01 2014-05-01 Pantech Co., Ltd. Gesture recognition using an electronic device including a photo sensor
CN106982273A (zh) * 2017-03-31 2017-07-25 努比亚技术有限公司 移动终端及其控制方法
CN108803896A (zh) * 2018-05-28 2018-11-13 Oppo(重庆)智能科技有限公司 控制屏幕的方法、装置、终端及存储介质

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
CN104102863B (zh) * 2014-07-24 2017-07-18 北京握奇智能科技有限公司 一种身份认证设备及该设备控制方法
CN106453930B (zh) * 2016-10-26 2020-07-10 惠州Tcl移动通信有限公司 一种点亮移动终端屏幕的方法及移动终端
CN107807778A (zh) * 2017-11-07 2018-03-16 深圳创维-Rgb电子有限公司 一种显示系统及显示系统的控制方法
CN109951595A (zh) * 2017-12-20 2019-06-28 广东欧珀移动通信有限公司 智能调节屏幕亮度的方法、装置、存储介质及移动终端
US11012603B2 (en) * 2018-06-08 2021-05-18 Samsung Electronics Co., Ltd Methods and apparatus for capturing media using plurality of cameras in electronic device
CN109079809B (zh) * 2018-07-27 2022-02-01 平安科技(深圳)有限公司 一种机器人屏幕解锁方法、装置、智能设备及存储介质
CN109389731A (zh) * 2018-12-29 2019-02-26 武汉虹识技术有限公司 一种可视化操作的虹膜锁
CN110058777B (zh) * 2019-03-13 2022-03-29 华为技术有限公司 快捷功能启动的方法及电子设备
CN110298161A (zh) * 2019-06-28 2019-10-01 联想(北京)有限公司 应用于电子设备的身份认证方法和电子设备
CN111103963A (zh) * 2019-12-10 2020-05-05 惠州Tcl移动通信有限公司 指纹模块启动方法、装置、存储介质及终端

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US20140118246A1 (en) * 2012-11-01 2014-05-01 Pantech Co., Ltd. Gesture recognition using an electronic device including a photo sensor
CN106982273A (zh) * 2017-03-31 2017-07-25 努比亚技术有限公司 移动终端及其控制方法
CN108803896A (zh) * 2018-05-28 2018-11-13 Oppo(重庆)智能科技有限公司 控制屏幕的方法、装置、终端及存储介质

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN114727082A (zh) * 2022-03-10 2022-07-08 杭州中天微系统有限公司 图像处理装置、图像信号处理器、图像处理方法和介质
CN114727082B (zh) * 2022-03-10 2024-01-30 杭州中天微系统有限公司 图像处理装置、图像信号处理器、图像处理方法和介质

Also Published As

Publication number Publication date
EP4195628A4 (en) 2023-10-11
EP4195628A1 (en) 2023-06-14
US20230205296A1 (en) 2023-06-29
CN114531947A (zh) 2022-05-24

Similar Documents

Publication Publication Date Title
US11068712B2 (en) Low-power iris scan initialization
TWI770539B (zh) 控制電路及其控制顯示面板的方法
US10627887B2 (en) Face detection circuit
US10515284B2 (en) Single-processor computer vision hardware control and application execution
US9986211B2 (en) Low-power always-on face detection, tracking, recognition and/or analysis using events-based vision sensor
US20230205296A1 (en) Electronic apparatus and response operation method for electronic apparatus
KR102255215B1 (ko) 이미지에서 객체를 검출하는 객체 검출 방법 및 이미지 처리 장치
US9176608B1 (en) Camera based sensor for motion detection
US20150015688A1 (en) Facial unlock mechanism using light level determining module
US20090207121A1 (en) Portable electronic device automatically controlling back light unit thereof and method for the same
US10009496B2 (en) Information processing apparatus and method for controlling the same
US11100891B2 (en) Electronic device using under-display fingerprint identification technology and waking method thereof
CN113490943B (zh) 一种集成芯片以及处理传感器数据的方法
US20200167456A1 (en) Device and control method for biometric authentication
CN214896630U (zh) 指纹感测装置
WO2020253495A1 (zh) 一种屏幕锁定的控制方法、装置、手持终端以及存储介质
TW201741927A (zh) 解鎖系統及方法
KR20060131544A (ko) 지문인식 입력장치를 구비하는 휴대용 정보 단말기 및그의 제어방법
US11729497B2 (en) Processing circuitry for object detection in standby mode, electronic device, and operating method thereof
AU2018210818B2 (en) Single-processor computer vision hardware control and application execution
TWI777141B (zh) 人臉辨識方法以及人臉辨識裝置
WO2022057093A1 (zh) 可穿戴设备及其屏幕唤醒方法、可读存储介质
US20240062399A1 (en) Assistive depth of field analysis
TWI699710B (zh) 可於休眠狀態快速解鎖指紋的方法及資訊處理裝置
TW201702797A (zh) 電子裝置及其啟動方法

Legal Events

Date Code Title Description
121: Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20950889; Country of ref document: EP; Kind code of ref document: A1)
ENP: Entry into the national phase (Ref document number: 2020950889; Country of ref document: EP; Effective date: 20230306)
NENP: Non-entry into the national phase (Ref country code: DE)