WO2020207328A1 - Image recognition method and electronic device - Google Patents

Image recognition method and electronic device

Info

Publication number
WO2020207328A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
electronic device
camera
color depth
sensor
Prior art date
Application number
PCT/CN2020/083033
Other languages
English (en)
French (fr)
Inventor
王骅
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2020207328A1 publication Critical patent/WO2020207328A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1686Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/82Protecting input, output or interconnection devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/82Protecting input, output or interconnection devices
    • G06F21/83Protecting input, output or interconnection devices input devices, e.g. keyboards, mice or controllers thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147Details of sensors, e.g. sensor lenses
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith

Definitions

  • This application relates to the field of electronic technology, and in particular to an image recognition method and an electronic device.
  • A conventional camera can be provided on an electronic device to collect image data for environmental detection, face detection, face recognition, and so on.
  • This application discloses an image recognition method and electronic equipment, which can improve the security of user information.
  • an embodiment of the present application provides an image recognition method.
  • The method includes: an electronic device calls a first camera to collect first image data. The first camera includes a first sensor, and the first sensor includes a first image-sensitive unit array; the number of image-sensitive units in the first image-sensitive unit array is less than or equal to 40,000; the color depth of the image data output by the first sensor is a first color depth, and the first color depth is less than 8 bits.
  • Because the first camera has a low color depth and the captured image has a low resolution, the image carries less information. This reduces the harm to the security and privacy of user information if the image captured by the first camera leaks, and improves the security of user information.
  • the resolution of the image captured by the camera is determined by the number of image sensitive units in the first image sensitive unit array in the first camera.
  • the image-sensitive unit array is an array formed by arranging a plurality of image-sensitive units, such as uniformly arranged in a rectangular shape.
  • For example, if the numbers of image-sensitive units in the horizontal and vertical directions of the image-sensitive unit array are 128 and 96 respectively, that is, the array is a 128×96 array, the resolution of the image captured by the first camera is 128×96.
  • the array of image sensitive units can be arranged in a circular array, and the number of image sensitive units on the diameter of the circular array is less than or equal to 200, for example, the number of image sensitive units on the diameter is 128.
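The numeric limits above (at most 40,000 image-sensitive units, color depth below 8 bits) can be checked with a short sketch. The function name and the example shapes are illustrative, not identifiers from the application:

```python
def satisfies_first_camera_limits(width_units, height_units, color_depth_bits):
    """Check a sensor against the first-camera constraints described above:
    at most 40,000 image-sensitive units and a color depth below 8 bits."""
    total_units = width_units * height_units
    return total_units <= 40_000 and color_depth_bits < 8

# A 128x96 array with 4-bit output qualifies (12,288 units),
# while a typical 2560x1920 sensor with 8-bit output does not.
print(satisfies_first_camera_limits(128, 96, 4))     # True
print(satisfies_first_camera_limits(2560, 1920, 8))  # False
```

The 144×96 variant mentioned later also qualifies (13,824 units).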
  • For example, the color depth of the image data output by the first sensor is 4 bits, and each pixel in the first sensor can describe 16 levels of white or of one of the three primary colors (red, green, and blue).
  • the first camera is always turned on when the electronic device is working.
  • the always-on first camera has a low color depth and the captured image has a low resolution, carrying less information, and improving the security of user information.
  • The method further includes: the electronic device performs image recognition on the first image data, and performs related operations according to the recognition result and the current state.
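The "recognition result plus current state" logic can be sketched as a lookup from (result, state) pairs to operations. The pair names and operation strings below are hypothetical placeholders chosen to mirror the scenarios described in this summary, not identifiers from the application:

```python
# Map (recognition result, current device state) -> related operation.
ACTIONS = {
    ("face_detected", "locked"): "call_second_camera_for_face_unlock",
    ("face_detected", "screen_on"): "keep_display_always_on",
    ("gesture_detected", "in_call"): "toggle_microphone",
    ("driving_environment", "working"): "enable_driving_mode",
}

def related_operation(recognition_result, current_state):
    """Return the operation for a (result, state) pair, or None if no rule matches."""
    return ACTIONS.get((recognition_result, current_state))

print(related_operation("face_detected", "locked"))
# call_second_camera_for_face_unlock
```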
  • the first color depth is 4 bits
  • The maximum numbers of image-sensitive units in the horizontal and vertical directions of the image-sensitive unit array are 128 and 96 respectively, or 144 and 96 respectively.
  • The recognition result is that a face image is detected, and the current state is that the electronic device is in a locked state; the electronic device performing related operations includes: the electronic device calls a second camera to perform face unlocking.
  • The second camera includes a second sensor, and the second sensor includes a second image-sensitive unit array; the number of image-sensitive units in the second image-sensitive unit array is greater than the number in the first image-sensitive unit array; the color depth of the image data output by the second sensor is a second color depth, and the second color depth is greater than the first color depth.
  • the second camera may be the camera 200 in the example shown in FIG. 1.
  • the first camera is always on to collect image data, and compared with the second camera, the first camera has a low color depth and the captured image has a low resolution, and carries less information. This reduces the damage to the security and privacy of the user information caused by the leakage of the rich information of the images taken by the always-on camera, and improves the security of the user information.
  • the second camera is used for face recognition to unlock the electronic device.
  • the second camera has high color depth and high-resolution images, which can improve the accuracy of face recognition while ensuring user information security.
  • The recognition result is that a face image is detected, and the current state is that the electronic device is in a bright-screen state; the electronic device performing related operations includes: setting the display of the electronic device to be always on.
  • the first camera that is always on can detect the face image in real time, and if the face image is detected, the display screen is set to be always on, thereby bringing convenience to the user.
  • the always-on first camera has a low color depth and the captured image has a low resolution, carrying less information. Therefore, the damage to the security and privacy of the user information caused by the leakage of the rich information of the image captured by the first camera is reduced, and the security of the user information is improved.
  • the recognition result is that the face image of the first user is detected, and the current state is that the electronic device is in a locked state; the electronic device performs related operations, including: The electronic device calls the second camera to perform face recognition, and if the recognition is successful, the user desktop corresponding to the first user is displayed.
  • The always-on first camera is used to sense the face image and recognize the user, thereby improving the ability of the electronic device to recognize face images, ensuring the isolation of information between users of the same electronic device, and improving the convenience for users to unlock the electronic device.
  • The second camera, whose captured images have high color depth and high resolution, is called to perform face recognition again; if the recognition succeeds, the electronic device is unlocked, which improves the reliability and security of face unlocking.
  • The image data output by the second camera's sensor has a high color depth and the captured image has a high resolution, and the second camera is turned on and collects image data only when it is called.
  • When the electronic device is in the locked state, the user's authority to operate the electronic device is restricted; after unlocking the electronic device, the user can operate it.
  • the recognition result is that a gesture image is detected, and the current state is that the electronic device is in a call state; the electronic device performing related operations includes: turning off or turning on a microphone of the electronic device.
  • the electronic device stores a correspondence between a gesture image and turning off the microphone, or a correspondence between a gesture image and turning on the microphone.
  • For example, the electronic device stores a correspondence between the gesture of an open palm closing into a fist and turning off the microphone, and a correspondence between the gesture of a fist opening into an open palm and turning on the microphone.
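The stored gesture-to-microphone correspondences can be modeled as a simple mapping consulted only while the device is in a call. The gesture names are illustrative labels, not values from the application:

```python
# Stored correspondences between recognized gestures and microphone operations.
GESTURE_TO_MIC_ACTION = {
    "palm_to_fist": "microphone_off",  # open palm closing into a fist
    "fist_to_palm": "microphone_on",   # fist opening into an open palm
}

def mic_action_for(gesture, in_call):
    """Only act on gestures while the device is in the call state."""
    if not in_call:
        return None
    return GESTURE_TO_MIC_ACTION.get(gesture)

print(mic_action_for("palm_to_fist", in_call=True))   # microphone_off
print(mic_action_for("palm_to_fist", in_call=False))  # None
```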
  • The current state is that the electronic device displays an application interface; the recognition result is that the angular deviation between the upward direction of the face image and the upward direction of the application interface is greater than or equal to a first threshold; the electronic device performing related operations includes: the electronic device adjusts the upward direction of the application interface so that the angular deviation between the upward direction of the application interface and the upward direction of the face image is less than or equal to the first threshold.
  • the always-on first camera can detect the angle between the upward direction of the face image and the upward direction of the application interface in real time, and switch between horizontal and vertical screens according to the angle, thereby bringing convenience to users.
  • the always-on first camera has a low color depth and the captured image has a low resolution, carrying less information. Therefore, the damage to the security and privacy of the user information caused by the leakage of the rich information of the image captured by the first camera is reduced, and the security of the user information is improved.
  • the first threshold is 45 degrees.
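The threshold test above can be sketched as follows, taking the deviation on the shorter arc so that, for example, 350° and 0° count as a 10° deviation. The function name and angle convention are assumptions for illustration:

```python
def needs_rotation(face_up_deg, interface_up_deg, threshold_deg=45):
    """Return True when the angular deviation between the upward direction of
    the face image and the upward direction of the application interface
    reaches the threshold. Angles are in degrees, deviation on the shorter arc."""
    deviation = abs(face_up_deg - interface_up_deg) % 360
    if deviation > 180:
        deviation = 360 - deviation
    return deviation >= threshold_deg

print(needs_rotation(90, 0))  # True  (user rotated the device a quarter turn)
print(needs_rotation(10, 0))  # False (small tilt, keep current orientation)
```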
  • the recognition result is that the driving environment in the vehicle is detected, and the current state is that the electronic device is in a working state; the electronic device performs related operations, including: the electronic device Turn on the driving mode.
  • In the driving mode, the electronic device can broadcast incoming-call information and messages, and can perform related operations in response to the user's voice.
  • The always-on first camera can detect the current environment in real time and switch to the driving mode when an in-vehicle driving environment is detected, thereby bringing convenience to the user.
  • the always-on first camera has a low color depth and the captured image has a low resolution, carrying less information. Therefore, the damage to the security and privacy of the user information caused by the leakage of the rich information of the image captured by the first camera is reduced, and the security of the user information is improved.
  • The density of image-sensitive units in the first image-sensitive unit array is less than the density of image-sensitive units in the second image-sensitive unit array, or the area of the first image-sensitive unit array is smaller than the area of the second image-sensitive unit array.
  • The first sensor includes a first analog-to-digital conversion circuit, and the detection accuracy of the analog-side circuit in the first analog-to-digital conversion circuit is related to the first color depth. The second camera includes a second sensor, the second sensor includes a second analog-to-digital conversion circuit, the color depth of the image data output by the second sensor is the second color depth, and the detection accuracy of the analog-side circuit in the second analog-to-digital conversion circuit is related to the second color depth. The detection accuracy of the analog-side circuit in the first analog-to-digital conversion circuit is less than that of the analog-side circuit in the second analog-to-digital conversion circuit, and the first color depth is less than the second color depth.
  • the level detection accuracy of the analog side circuit can be reduced from 1/256 to 1/16 to reduce the color depth of the image data output by the sensor of the camera 100.
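The relation between the analog-side level detection accuracy and the output color depth can be made concrete: an accuracy of 1/256 distinguishes 256 levels (8 bits), while 1/16 distinguishes only 16 levels (4 bits). A minimal sketch, assuming the accuracy is an exact 1/2ⁿ fraction:

```python
from fractions import Fraction

def color_depth_bits(level_accuracy):
    """An accuracy of 1/N lets the analog side distinguish N levels,
    which corresponds to log2(N) bits of color depth (N a power of two)."""
    levels = int(1 / Fraction(level_accuracy))
    return levels.bit_length() - 1  # exact log2 for powers of two

print(color_depth_bits(Fraction(1, 256)))  # 8
print(color_depth_bits(Fraction(1, 16)))   # 4
```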
  • The recognition result is that a human body image is detected; the current state is that the electronic device stores a first instruction, and the first instruction indicates that an alarm message is to be sent to the terminal when a human body image is detected; the electronic device performing related operations includes: the electronic device sending the alarm message to the terminal.
  • The recognition result is that a fall posture is detected; the current state is that the electronic device stores a second instruction, and the second instruction indicates that an alarm message is to be sent to the terminal when the fall posture is detected; the electronic device performing related operations includes: the electronic device sending the alarm message to the terminal.
  • In the home security scenario, the always-on first camera can detect human body images in real time, and sends an alarm according to the settings when a human body image or a fall posture is detected, thereby bringing convenience to users.
  • The always-on first camera has a low color depth and the captured image has a low resolution, carrying less information. Therefore, the damage to the security and privacy of the user information caused by the leakage of the rich information of the image captured by the first camera is reduced, and the security of the user information is improved.
  • The present application provides an electronic device, including: one or more processors, a memory, and a first camera, wherein the first camera includes a first sensor, and the first sensor includes a first image-sensitive unit array.
  • The memory is used to store computer program code, the computer program code including computer instructions; when the one or more processors execute the computer instructions, the electronic device is caused to execute the method provided in the first aspect or any possible implementation of the first aspect.
  • The present application provides a computer storage medium, including computer instructions, which, when run on an electronic device, cause the electronic device to execute the method provided in the first aspect or any possible implementation of the first aspect.
  • the embodiments of the present application provide a computer program product, which when the computer program product runs on a computer, causes the computer to execute the method provided in the first aspect or any one of the possible implementation manners of the first aspect.
  • The electronic device described in the second aspect, the computer storage medium described in the third aspect, and the computer program product described in the fourth aspect are all used to execute the method provided in the first aspect or any possible implementation of the first aspect.
  • FIG. 1 is a schematic structural diagram of a camera on an electronic device provided by an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of an electronic device 10 provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a hardware module 400 and a software module 500 of a camera 100 according to an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a horizontal and vertical screen switching provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an image taken by a camera 100 according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a hardware module 600 and a software module 700 of another smart camera provided by an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of a smart camera provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of an image recognition method provided by an embodiment of the present application.
  • Electronic devices are equipped with cameras, microphones, global positioning system (GPS) chips, various sensors (such as magnetic field sensors, gravity sensors, and gyroscope sensors), and other components to sense the external environment, user actions, and so on.
  • According to the perceived external environment and user actions, the electronic device provides the user with a personalized, contextual service experience.
  • the camera can obtain rich and accurate information so that the electronic device can perceive the external environment and user actions.
  • the camera on the electronic device can be always on when the electronic device is turned on.
  • An always-on camera means that whenever the electronic device is powered on, the camera is always in the working state and collects image data without needing to be called, and the electronic device can perform image recognition based on the collected image data.
  • the display screen of the electronic device may be in the off-screen state or the on-screen state when the electronic device is turned on.
  • the electronic device can still receive messages (such as instant application messages) and perform functions such as positioning and step counting when the electronic device is turned on in the off-screen state.
  • The always-on camera in the electronic device is also in the working state to collect image data, and the electronic device can perform image recognition based on the collected image data, so that the electronic device can perceive the external environment, user actions, and so on.
  • The always-on camera is similarly in the working state to collect image data.
  • The following are examples of using an always-on camera to provide users with a personalized, contextual service experience.
  • Example 1: The front camera of an electronic device is always on. When a human face is detected, the analysis result is output so that the display of the electronic device is kept always on.
  • Example 2: The front camera is always on. When a human face is detected, whether the user is using the electronic device in landscape or portrait orientation is determined according to the orientation of the face image detected by the front camera. If the user is using the device in landscape, the interface is displayed in landscape; if in portrait, it is displayed in portrait.
  • Example 3: The camera is always on to collect image data. The electronic device determines from the detected image data that the current environment is an in-vehicle driving environment, and then switches to the driving mode.
  • the electronic device can display a navigation page to perform a navigation function, and can also perform a dialing function or play an audio function in response to a user's voice command.
  • the electronic device can automatically broadcast incoming calls or short messages without the user's manual operation.
  • There is also a voice function: the user can press a button on the electronic device and speak a voice command, such as "play music" or "call XX".
  • Example 4: When the display screen of the electronic device is in the off-screen state, the front camera is always on. When the front camera detects face image data, the electronic device performs face recognition based on the collected face image data, and unlocks after the face recognition succeeds. After the electronic device is unlocked, it can display a desktop containing application icons, which can call applications such as "camera", "music", and "video" in response to user operations.
  • the embodiment of the present application provides a camera, which can be applied to an electronic device.
  • the camera is always on.
  • Electronic devices can be implemented as any of the following devices that include a camera: mobile phones, tablet computers, portable game consoles, personal digital assistants (PDA), notebook computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, vehicle-mounted media playback devices, wearable electronic devices, virtual reality (VR) terminal devices, augmented reality (AR) terminal devices, and other digital display products.
  • FIG. 1 is a schematic structural diagram of a camera on an electronic device according to an embodiment of the present application.
  • the electronic device 10 may include a camera 100, a camera 200 and a display screen 300.
  • the camera 100 has a low color depth and the captured image has a low resolution.
  • the camera 100 is always on when the electronic device 10 is turned on.
  • the camera 100 is used to perceive the external environment and user actions.
  • the camera 200 can be used to collect face image data for face recognition, and thereby realize the unlocking of the electronic device 10, identity verification, application unlocking, and the like.
  • the image data output by the sensor of the camera 200 has a higher color depth and the captured image has a higher resolution.
  • For example, the resolution of the image captured by the camera 100 is 200×200, that is, the number of pixels on each side of the captured image is 200.
  • The resolution of the image captured by the camera 200 is 2560×1920, that is, the number of pixels in the width direction of the captured image is 2560 and the number in the height direction is 1920.
  • The color depth of the image data output by the sensor of the camera 100 is 4 bits, and each pixel can output image data with 16 levels (2 to the 4th power) of white or of one of the three primary colors (red, green, and blue).
  • The color depth of the image data output by the sensor of the camera 200 is 8 bits, and each pixel can output image data with 256 levels (2 to the 8th power) of white or of one of the three primary colors (red, green, and blue).
  • The always-on camera 100 is used to sense the external environment and the user's actions, thereby improving the ability of the electronic device to perceive the environment and the convenience of using the electronic device.
  • The always-on camera 100 has a low color depth and its captured images have a low resolution, which reduces the damage to the security and privacy of user information caused by the leakage of rich image information, and improves the security of user information.
  • Image resolution can be expressed as the number of pixels in each direction.
  • A resolution of 640×480 means that the number of pixels in the width direction of the image taken by the camera is 640 and the number in the height direction is 480, which can be obtained by a camera with 307,200 pixels (about 300,000 pixels).
  • An image with a resolution of 1600×1200 can be captured by a camera with 1,920,000 pixels.
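The pixel-count arithmetic above is simply width × height:

```python
def pixel_count(width, height):
    """Total number of pixels for a given image resolution."""
    return width * height

print(pixel_count(640, 480))    # 307200  (about 300,000 pixels)
print(pixel_count(1600, 1200))  # 1920000 (about 1.9 million pixels)
print(pixel_count(128, 96))     # 12288   (the always-on camera example)
```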
  • the resolution of the image taken by the camera is determined by the number of image-sensitive units in the image-sensitive unit array in the camera.
  • For the image-sensitive unit array, refer to the example described in FIG. 3.
  • For example, the resolution of the image captured by the camera is 128×96.
  • Color depth, also called the number of color bits, is measured in binary bits (bit) and represents the number of tones that can be recorded.
  • Image data of one color depth can be calculated, using a demosaicing algorithm, from image data of another color depth output by the camera sensor.
  • the color depth of the image data output by the camera sensor determines the color depth of the image.
  • In a conventional camera, the image data output by the sensor has a color depth of at least 8 bits, that is, white or one of the three primary colors (red, green, and blue) is divided into 256 (2 to the 8th power) different levels. Therefore, the greater the color depth of the image data output by the camera sensor, the greater the color depth of the captured image, the more faithfully the colors of the image can be reproduced, and the more information about the photographed object the image carries.
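The number of distinguishable levels grows as 2 to the power of the bit depth, so reducing an 8-bit sample to 4 bits collapses 256 levels into 16. A minimal sketch; the shift-based requantization is an illustrative choice, not a method from the application:

```python
def levels(bit_depth):
    """Number of tone levels representable at a given color depth."""
    return 2 ** bit_depth

def requantize_8_to_4(sample_8bit):
    """Map an 8-bit sample (0-255) onto the 16 levels of a 4-bit sample
    by dropping the 4 least significant bits."""
    return sample_8bit >> 4

print(levels(8), levels(4))    # 256 16
print(requantize_8_to_4(255))  # 15
print(requantize_8_to_4(128))  # 8
```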
  • FIG. 2 is a schematic structural diagram of an electronic device 10 provided by an embodiment of the present application.
  • the electronic device 10 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, and an antenna 2.
  • a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • The sensor module 180 can include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and so on.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 10.
  • the electronic device 10 may include more or fewer components than those shown in the figure, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and so on.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 10.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the processor 110 may include one or more interfaces.
  • The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is merely a schematic description, and does not constitute a structural limitation of the electronic device 10.
  • The electronic device 10 may also adopt an interface connection mode different from that in the foregoing embodiment, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the wireless communication function of the electronic device 10 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 10.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the antenna 1 of the electronic device 10 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 10 can communicate with the network and other devices through wireless communication technology.
  • the electronic device 10 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the electronic device 10 may include one or N display screens 194, and N is a positive integer greater than one.
  • the electronic device 10 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter opens, light is transmitted through the lens to the photosensitive element of the camera, and the optical signal is converted into an electrical signal; the photosensitive element transfers the electrical signal to the ISP, which processes it into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element is the image sensitive unit array in the sensor 420 described in FIG. 3 or the image sensitive unit array in the sensor 630 described in FIG. 6.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats.
  • the electronic device 10 may include 1 or N cameras 193, and N is a positive integer greater than 1.
  • the N cameras 193 may include the camera 100 and the camera 200 shown in FIG. 1.
  • the sensor 420 in the camera 100 described in FIG. 3 and the sensor 630 described in FIG. 6 are not included in the sensor module 180.
  • the sensor 420 described in FIG. 3 and the sensor 630 described in FIG. 6 are components of the camera.
  • the image sensitive unit array in the sensor 420 described in FIG. 3 and the image sensitive unit array in the sensor 630 described in FIG. 6 are the photosensitive elements in the aforementioned camera 193.
  • the image sensitive unit array in the sensor 420 and the image sensitive unit array in the sensor 630 are, for example, a charge coupled device CCD image sensor or a CMOS image sensor.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 10 selects the frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 10 may support one or more video codecs. In this way, the electronic device 10 can play or record videos in a variety of encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent cognition of the electronic device 10, such as image recognition, face recognition, voice recognition, text understanding, and so on.
  • through the NPU, the face detection, face recognition, gesture detection, and environment detection in the embodiments of the present application can be implemented.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 10.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 10 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory.
  • the electronic device 10 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 10 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • when the electronic device 10 answers a call or a voice message, the voice can be received by bringing the receiver 170B close to the human ear.
  • the microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • the earphone interface 170D is used to connect wired earphones.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 10.
  • the angular velocity of the electronic device 10 around three axes (i.e., the x, y, and z axes) can be determined by the gyroscope sensor 180B.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 10 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 10 can use the magnetic sensor 180D to detect the opening and closing of the flip holster.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 10 in various directions (generally three axes). When the electronic device 10 is stationary, the magnitude and direction of gravity can be detected.
  • the distance sensor 180F is used to measure distance.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 10 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 10 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy.
  • the touch sensor 180K is also called a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 together form what is also called a "touch screen".
  • the bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can obtain the vibration signal of the vibrating bone mass of the human voice.
  • the button 190 includes a power button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the electronic device 10 can receive key input and generate key signal input related to user settings and function control of the electronic device 10.
  • the motor 191 can generate vibration prompts.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the camera 100 is the normally-on camera 100 in the example described in FIG. 1 and can be applied to an electronic device 10 such as a mobile phone.
  • FIG. 3 is a schematic structural diagram of a hardware module 400 and a software module 500 of a camera 100 provided by an embodiment of the present application.
  • the hardware module 400 of the camera 100 includes a lens group 410, a sensor 420, and an image signal processing unit 430, where:
  • the lens group 410 may include one or more lenses for refracting the incoming light to form an image on the sensor 420.
  • the sensor 420 is used to convert optical signals into electrical signals, that is, to convert the optical signals from the lens group 410 into electrical signals and transmit them to the image signal processing unit 430.
  • the sensor 420 may include an image-sensitive unit array and an analog-to-digital (A/D) circuit.
  • the image-sensitive unit array includes a plurality of image-sensitive units, the image-sensitive unit array receives the image formed by the light from the lens group, and each image-sensitive unit converts the image formed on it into an electrical signal.
  • the electrical signal is a signal in analog form.
  • the number of image-sensitive units determines the pixel count of the camera 100, that is, it determines the resolution of the image captured by the camera 100.
  • the analog-to-digital conversion circuit is used to convert the electrical signal in analog form output by each pixel unit into a digital signal. The design of the analog-to-digital conversion circuit determines the color depth of the image data output by the camera sensor.
  • the image-sensitive unit array is an array formed by arranging a plurality of image-sensitive units, such as uniformly arranged in a rectangular shape.
  • for example, the number of image-sensitive units in the horizontal and vertical directions of the image-sensitive unit array is 128 and 96, respectively, so the resolution of the image captured by the camera 100 is 128 × 96.
  • the array of image sensitive units may be arranged in a circular array, and the number of image sensitive units on the diameter of the circular array is less than or equal to 200, for example, the number of image sensitive units on the diameter is 128.
  • the camera 100 in the embodiment of the present application has a lower color depth than the camera 200 and the captured image has a lower resolution.
  • for example, the resolution of the image taken by the camera 100 is 128 × 96 and the color depth of the image data output by the camera sensor is 4 bits, so each pixel can describe 16 levels of white or of one of the three primary colors (red, green, and blue).
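The relationship between color depth, distinguishable levels, and raw frame size can be sketched as follows. This is an illustrative calculation only; the 128 × 96 resolution and 4-bit depth come from the example above, and the function names are hypothetical.

```python
def color_levels(color_depth_bits):
    """Number of distinguishable levels per pixel at a given color depth."""
    return 2 ** color_depth_bits

def raw_frame_bits(width, height, color_depth_bits):
    """Size in bits of one raw frame output by the sensor."""
    return width * height * color_depth_bits

# The example camera 100: 128 x 96 resolution, 4-bit color depth.
assert color_levels(4) == 16      # 16 levels per pixel, as stated above
assert color_levels(8) == 256     # the higher color depth of the camera 200
print(raw_frame_bits(128, 96, 4))  # 49152 bits per raw frame
```

The 16-fold drop in levels per pixel (and the small frame size) is what makes the always-on camera carry less information than the main camera.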
  • the design of the sensor 420 gives the output image data a low color depth and the captured image a low resolution.
  • the resolution of the image captured by the camera 100 may also be x × y, where x and y are both positive integers less than or equal to 200, and the color depth is less than 8 bits.
  • the color depth of the camera 100 refers to the color depth of the image data output by the sensor in the camera 100.
  • the color depth of the camera 100 is 4 bits, that is, the color depth of the image data output by the sensor in the camera 100 is 4 bits.
  • the camera 100 is the first camera, and the sensor 420 in FIG. 3 is the first sensor.
  • the sensor 420 includes a first image-sensitive unit array.
  • the maximum number of image-sensitive units in the horizontal and vertical directions of the first image-sensitive unit array is less than or equal to 200; the color depth of the image data output by the first sensor is the first color depth, and the first color depth is less than 8 bits.
  • the camera 200 is a second camera, the second camera includes a second sensor, and the second sensor includes a second image sensitive unit array.
  • the number of image-sensitive units in the second image-sensitive unit array is larger than that in the first image-sensitive unit array; the color depth of the image data output by the second sensor is the second color depth, and the second color depth is greater than the first color depth.
  • Implementation mode 1: on the basis of an existing high-resolution camera sensor (such as the sensor included in the camera 200 in the example shown in FIG. 1), the area of the image-sensitive unit array is reduced while the density of the image-sensitive units is kept unchanged, thereby reducing the number of image-sensitive units and hence the pixel count, so as to realize the low-resolution camera 100.
  • Implementation mode 2: on the basis of an existing high-resolution camera sensor (such as the sensor included in the camera 200 in the example shown in FIG. 1), the area of the image-sensitive unit array is kept unchanged while the gap between image-sensitive units is increased, thereby reducing the number of image-sensitive units, so as to realize the low-resolution camera 100.
  • the color depth of the image data output by the sensor of the camera 100 can be reduced.
  • the level detection accuracy of the analog side circuit in the A/D circuit can be reduced to obtain the camera 100 with lower color depth.
  • the level detection accuracy of the analog side circuit can be reduced from 1/256 to 1/16 to reduce the color depth of the image data output by the sensor of the camera 100.
  • the camera 100 with a color depth of 4 bits can be obtained through the above-mentioned simplified A/D circuit method.
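The effect of lowering the level-detection accuracy from 1/256 to 1/16 can be modeled as coarser quantization of the analog level. This is a simplified sketch under the assumption of an ideal uniform quantizer; real A/D circuits differ.

```python
def quantize(level, bits):
    """Quantize an analog level in [0.0, 1.0] to a code of 'bits' bits."""
    steps = 2 ** bits - 1
    return round(level * steps)

# 8-bit A/D: level-detection accuracy 1/256 -> codes 0..255
# 4-bit A/D: level-detection accuracy 1/16  -> codes 0..15
assert quantize(0.5, 8) == 128
assert quantize(0.5, 4) == 8
# Nearby analog levels that an 8-bit A/D separates collapse at 4 bits:
assert quantize(0.50, 8) != quantize(0.51, 8)
assert quantize(0.50, 4) == quantize(0.51, 4)
```

The collapsed codes are exactly the "less information" property the embodiment relies on for privacy.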
  • the image processing unit 430 is configured to receive image data in digital form from the sensor 420 to implement image encoding. Specifically, the image processing unit 430 performs demosaicing, automatic exposure control (AEC), automatic gain control (AGC), automatic white balance (AWB), color correction, and other processing on the received image data. In the embodiment of the present application, the demosaicing process combines the 4-bit image data output by each image-sensitive unit with the image data output by several surrounding image-sensitive units to calculate 8-bit RGB image data; that is, 8 bits is the color depth of the corresponding image.
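One plausible way to combine a unit's 4-bit sample with surrounding 4-bit samples into an 8-bit value, as described above, is to average the neighborhood and rescale. This is an illustrative sketch, not the patent's exact demosaicing algorithm.

```python
def combine_to_8bit(center, neighbors):
    """Average a 4-bit sample with its 4-bit neighbors, rescale to 8 bits."""
    samples = [center] + list(neighbors)
    avg = sum(samples) / len(samples)   # still on the 0..15 scale
    return round(avg * 255 / 15)        # rescale to the 0..255 scale

assert combine_to_8bit(15, [15, 15, 15, 15]) == 255  # saturated region
assert combine_to_8bit(0, [0, 0, 0, 0]) == 0         # dark region
assert combine_to_8bit(7, [8, 6, 9, 5]) == 119       # mid-tone average
```

Averaging several neighbors recovers intermediate levels that a single 4-bit sample cannot represent, which is why the output can meaningfully be 8-bit.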
  • the image processing unit 430 may output image data to the software module 500.
  • the image processing unit 430 may be implemented by the ISP in the electronic device shown in FIG. 2.
  • the software module 500 receives image data from the image processing unit 430 to realize face detection, face recognition, gesture detection, and environment detection.
  • the face detection is used to determine whether the image data detected by the hardware module 400 is face image data.
  • the face recognition is used to identify the user to which the face image data detected by the hardware module 400 belongs.
  • the gesture detection is used to determine whether the image data detected by the hardware module 400 is gesture image data.
  • the environment detection is used to determine the current environment based on the image data detected by the hardware module 400.
  • the software module 500 outputs analysis results after performing face detection, face recognition, gesture detection, and environment detection, and the electronic device 10 performs related operations according to the analysis results. The following are examples of operations performed after the analysis results of face detection, face recognition, gesture detection, and environment detection are output.
  • the electronic device 10 collects image data through the camera 100, and matches the features of the collected image data with the pre-stored facial features. If the matching is successful, the analysis result is output indicating a human face.
  • the electronic device 10 keeps the display screen always on. If the current electronic device 10 is in the on-screen state, the electronic device can detect whether the current display screen is set to the normally-on state. If it is detected that the current display screen is not set to the always-on state, the electronic device 10 may set the display screen to be always-on according to the analysis result.
  • the display screen After the display screen is set to the always-on state, even if no operation (such as a touch operation) is received, the display screen will continue to be lit until the display screen of the electronic device is turned off.
  • the electronic device 10 can perform face detection on the image data collected by the camera 100 at a certain frequency. When the analysis result indicates "not a face image", "a ceiling image", or another image, the electronic device 10 can turn off the always-on display. After the always-on display is turned off, if no operation (such as a touch operation) is received for a period of time (such as 1 minute), the display will turn off until it is awakened again.
  • if an operation is received, the display will be awakened again and light up for display.
  • when the analysis result indicates "face image", the electronic device 10 may detect whether there is a human face after the screen has been on for a preset time. If a human face is detected, the screen is not turned off, that is, no screen-off operation is performed; if no human face is detected, the screen is turned off.
  • the always-on camera 100 can detect the face image in real time, and if the face image is detected, the display screen is set to be always on, thereby bringing convenience to the user.
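The always-on-display behaviour described above reduces to mapping each periodic analysis result to a display state. The sketch below assumes the result is a plain label string, which is an illustrative simplification.

```python
def decide_always_on(analysis_result):
    """Map one face-detection analysis result to the always-on state.

    A "face image" result keeps (or turns on) the always-on display;
    any other result (e.g. a ceiling image) turns it off.
    """
    return analysis_result == "face image"

# Periodic detection: the state follows the most recent analysis result.
results = ["face image", "face image", "ceiling image", "not a face image"]
states = [decide_always_on(r) for r in results]
assert states == [True, True, False, False]
```

Running this decision at a fixed frequency on the low-resolution camera's output gives the "screen stays lit while a face is present" behaviour without ever invoking the high-resolution camera.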
  • the normally-on camera 100 has a low color depth and the captured image has a low resolution, and carries less information. Therefore, the damage to the security and privacy of the user information caused by the leakage of the rich information of the image captured by the camera 100 is reduced, and the security of the user information is improved.
  • in some embodiments, when the electronic device 10 performs face detection and the analysis result indicates a human face, if the electronic device 10 is currently in the locked state, the electronic device 10 can also call the camera 200, which has high color depth and high resolution, to capture face image data for face recognition. Face recognition is performed according to the face image data collected by the camera 200; if the recognition is successful, the electronic device 10 is unlocked and the user desktop is displayed.
  • the user desktop may include application icons.
  • the high color depth of the camera 200 means that, compared with the camera 100, the color depth of the image data output by the sensor in the camera 200 is higher.
  • the color depth of the camera 200 is 8 bits, that is, the color depth of the image data output by the sensor in the camera 200 is 8 bits.
  • the image captured by the camera 200 has a high resolution, which means that, compared with the camera 100, the number of image-sensitive units in the array of the sensor of the camera 200 is larger, for example, 1,920,000 pixels.
  • the camera 100 is normally on to collect image data. The camera 100 has a low color depth, and the captured image has a low resolution and carries less information, thereby reducing the damage to the security and privacy of user information caused by leakage of the rich information captured by an always-on camera, and improving the security of user information.
  • the camera 200 is used to perform face recognition to unlock the electronic device.
  • the camera 200 has a high color depth and the captured images have high resolution, which can improve the accuracy of face recognition while ensuring the security of user information.
  • the camera 100 when the camera 100 is called to detect a face, the camera 100 can detect the upward direction of the face image.
  • the electronic device can determine the direction in which the user views the electronic device according to the upward direction of the face image. Suppose the electronic device displays an application interface, such as an instant messaging application interface or an e-book reading application interface. When it is detected that the deviation between the upward direction of the application interface and the upward direction of the user's face image exceeds a certain threshold (for example, 45 degrees to the left or right), the electronic device can adjust the display direction of the application interface according to the upward direction of the user's face image, to ensure that the angle between the upward direction of the user's face image and the upward display direction of the application interface remains within the threshold.
  • FIG. 4 is a schematic diagram of horizontal and vertical screen switching provided by an embodiment of the present application.
  • the display area 300 of the electronic device 10 displays an application interface 1400, and the upper direction of the application interface 1400 is parallel to the long side of the electronic device 10.
  • the electronic device 10 uses the camera 100 to collect a face image 1500, and detects whether the angle between the upward direction of the face image 1500 and the upward direction of the application interface 1400 exceeds a first threshold, for example, 45 degrees.
  • if the angle between the upward direction of the face image 1500 and the upward direction of the application interface 1400 does not exceed the first threshold (for example, the angle is 0 degrees), the electronic device 10 still displays the application interface 1400 in portrait orientation.
  • the electronic device 10 uses the camera 100 to collect the face image 1500 and detects that the upward direction of the face image 1500 is parallel to the short side of the electronic device 10. The electronic device 10 detects that the angle between the upward direction of the face image 1500 and the upward direction of the application interface 1400 is 90 degrees, which exceeds the first threshold of 45 degrees. As shown in FIG. 4(c), the electronic device 10 can adjust the display direction of the application interface 1400 according to the upward direction of the user's face image 1500 to ensure that the angle between the upward direction of the face image 1500 and the upward direction of the application interface 1400 remains within the threshold. It is understandable that the foregoing example in which the first threshold is 45 degrees is only used to explain the embodiment of the present application and should not constitute a limitation; the first threshold may also be a larger or smaller angle.
  • the normally-on camera 100 can detect the angle between the upward direction of the face image 1500 and the upward direction of the application interface 1400 in real time, and switch the horizontal and vertical screens according to the angle, thereby User brings convenience.
  • the normally-on camera 100 has a low color depth and the captured image has a low resolution, and carries less information. Therefore, the damage to the security and privacy of the user information caused by the leakage of the rich information of the image captured by the camera 100 is reduced, and the security of the user information is improved.
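The threshold comparison that drives the portrait/landscape switch can be sketched as follows. Angles are in degrees; the 45-degree first threshold comes from the example above, and the function names are hypothetical.

```python
def angle_between(dir_a_deg, dir_b_deg):
    """Smallest angle between two 'upward' directions given in degrees."""
    diff = abs(dir_a_deg - dir_b_deg) % 360
    return min(diff, 360 - diff)

def should_rotate(face_up_deg, ui_up_deg, first_threshold_deg=45):
    """Rotate the interface when the face/UI deviation exceeds the threshold."""
    return angle_between(face_up_deg, ui_up_deg) > first_threshold_deg

assert should_rotate(90, 0) is True    # FIG. 4 case: 90 degrees > 45 degrees
assert should_rotate(0, 0) is False    # aligned: keep the current orientation
```

Keeping the comparison symmetric (via the smallest angle) matches the "45 degrees to the left or right" wording in the example.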
  • the electronic device 10 may pre-store users A and B, along with the facial features corresponding to A and the facial features corresponding to B. For each user, the electronic device 10 (such as a mobile phone) maintains a corresponding user desktop. When the electronic device 10 is in the locked state and outputs an analysis result indicating "A's face image" through face recognition, the electronic device 10 displays the user desktop corresponding to A after being unlocked. In some embodiments of the present application, the electronic device 10 performs the following steps to unlock and display the user desktop.
  • Step 101: The electronic device 10 performs face recognition on the image data collected by the camera 100, and outputs an analysis result indicating "A's face image".
  • the electronic device 10 matches the features of the image data collected by the camera 100 with the facial features of A and the facial features of B, respectively. If the features of the image data collected by the camera 100 match the facial features of A successfully, the output analysis result indicates "A's face image".
  • if the features of the image data collected by the camera 100 match the facial features of B successfully, the analysis result indicates "B's face image".
  • user A may be the first user
  • user B may be the second user
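The per-user matching in Step 101 can be sketched as a nearest-template comparison over stored feature vectors. The vectors, the cosine-similarity measure, and the 0.9 threshold below are all illustrative assumptions, not the patent's actual feature pipeline.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recognize(features, templates, threshold=0.9):
    """Return "<user>'s face image" for the best match above threshold.

    `templates` maps each pre-stored user (e.g. A, B) to a feature vector.
    """
    best_user, best_score = None, threshold
    for user, tmpl in templates.items():
        score = cosine_similarity(features, tmpl)
        if score > best_score:
            best_user, best_score = user, score
    return f"{best_user}'s face image" if best_user else "no match"

templates = {"A": [1.0, 0.0, 0.2], "B": [0.1, 1.0, 0.0]}
assert recognize([0.9, 0.1, 0.2], templates) == "A's face image"
assert recognize([0.0, 1.0, 0.1], templates) == "B's face image"
```

Because the output is only a user label, this coarse matching preserves the information isolation between the desktops of users A and B.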
  • Step 102: When the face recognition result of the electronic device 10 indicates "A's face image", the electronic device 10 calls the camera 200 with high color depth and high resolution to capture face image data.
  • unlike the camera 100, which is normally on, the camera 200, whose sensor outputs image data with a high color depth and whose captured image has a high resolution, is turned on and collects image data only when it is called.
  • that is, when the face recognition result of the electronic device 10 indicates "A's face image", the camera 200, with its high color depth and high captured-image resolution, is called to perform face unlocking.
  • when the electronic device 10 is in the locked state, the user's authority to operate the electronic device is limited. After the electronic device 10 is unlocked, the user can operate the electronic device 10. After the locked state is released, the electronic device can call various applications in response to user operations, such as the "camera", "music", and "video" apps.
  • Step 103: The electronic device 10 performs face recognition according to the face image data collected by the camera 200.
  • Step 104: If the face recognition is successful according to the face image data collected by the camera 200, the electronic device 10 is unlocked, and the user desktop corresponding to A is displayed.
  • the process of performing face recognition based on the face image data collected by the camera 200 is analogous to that for the camera 100, and will not be repeated here. That is, when the electronic device 10 performs face recognition using the face image collected by the camera 200 and obtains a result indicating "A's face image", the face recognition according to the face image data collected by the camera 200 is successful.
  • in this way, the normally-on camera 100 is used to perceive the face image and recognize the user, thereby improving the facial image recognition ability of the electronic device, ensuring the isolation of information between users A and B of the same electronic device 10, and improving the convenience with which users unlock the electronic device.
  • the camera 200, with its high color depth and high-resolution captured images, is called to perform face recognition again, and the electronic device is unlocked after the recognition succeeds, which can improve the reliability and security of face unlocking.
  • the camera 100 is always on to collect image data. Compared with the camera 200, the camera 100 has a low color depth and the captured image has a low resolution, and carries less information. This reduces the damage to the security and privacy of the user information caused by the leakage of the rich information of the images taken by the always-on camera, and improves the security of the user information.
  • the camera 200 is used to perform face recognition to unlock the electronic device.
  • the camera 200 has a high color depth and a high-resolution image taken, which can improve the accuracy of face recognition while ensuring user information security.
  • the camera 100 with low color depth and low resolution of the captured images can recognize human faces.
  • the facial recognition is used in scenarios where the user does not have high security requirements, such as non-payment scenarios.
  • for example, the scenario in which the user desktop corresponding to the recognized user is displayed according to the face recognition result.
  • for example, the scenario in which the playlist corresponding to an elderly person or a child is played according to the face recognition result.
  • the camera 100 can recognize some human faces with obvious differences.
  • FIG. 5 shows an example of distinguishing a round face, an oval ("melon-seed") face, or a square ("Chinese-character") face.
  • FIG. 5 is a schematic diagram of an image taken by a camera 100 according to an embodiment of the present application.
  • after image recognition on the facial image collected by the camera 100, the electronic device 10 can only recognize the human face 1200 and distinguish whether it is a round face, an oval face, or a square face.
  • the electronic device 10 cannot distinguish detailed features such as double eyelids after image recognition.
  • the electronic device 10 cannot recognize the environment 1300 in which it is located after image recognition, or can only determine that the environment 1300 in which it is located is indoor or outdoor.
  • the aforementioned camera 100 has a low color depth, and its image-sensitive unit array makes the captured image have a low resolution, so the image carries less information.
  • using the camera 100 to capture images for face recognition, environment detection, etc. therefore reduces the harm that leakage from an always-on camera would cause to the security and privacy of user information, and improves the security of user information.
  • the smart speaker can set a playlist for the user A and the user B respectively.
  • A is an old man and B is a child.
  • because the camera 100 has a low color depth and its captured images have a low resolution, the electronic device 10, after performing image recognition on the face image collected by the camera 100, can only distinguish whether the face image 1200 is that of an elderly person or that of a child.
  • the electronic device 10 recognizes the image data collected by the camera 100, and when the recognition result indicates “the face image of the elderly”, the speaker can play a playlist corresponding to the elderly, such as a drama list.
  • the electronic device 10 recognizes the image collected by the camera 100, and when the recognition result indicates "a child's face image", the speaker can play a playlist corresponding to the child, such as a nursery rhyme list.
  • the electronic device 10 can store features corresponding to gestures. Specifically, it can store features corresponding to the gesture of an open palm closing into a fist, and features corresponding to the gesture of a fist opening into a palm.
  • the features of the image data collected by the camera 100 are matched with the features corresponding to the palm-to-fist gesture. If the match succeeds, an analysis result is output indicating: palm-to-fist gesture. If it is detected that the electronic device 10 is not currently set to mute, that is, its microphone is not turned off, the electronic device 10 can be set to mute according to the analysis result.
  • the electronic device 10 matches the features of the image data collected by the camera 100 with the features corresponding to the fist-to-palm gesture. If the match succeeds, an analysis result is output indicating: fist-to-palm gesture. If it is detected that the electronic device 10 is currently set to mute, that is, its microphone is turned off, the electronic device 10 can cancel mute and turn the microphone on according to the analysis result.
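The gesture-to-mute flow just described can be sketched as follows. This is a minimal illustration, not the embodiment's implementation: the feature vectors, the cosine-similarity matcher, the threshold, and the `device` dictionary standing in for device state are all assumptions.

```python
# Sketch of the gesture-matching flow described above. Feature
# vectors, the matching threshold, and the device representation
# are hypothetical placeholders, not the claimed implementation.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

# Stored features for the two gestures (assumed precomputed).
GESTURES = {
    "palm_open_to_fist": [0.9, 0.1, 0.4],
    "fist_to_palm_open": [0.1, 0.9, 0.4],
}
MATCH_THRESHOLD = 0.95  # assumed

def classify_gesture(frame_features):
    best, best_score = None, 0.0
    for name, feat in GESTURES.items():
        score = cosine_similarity(frame_features, feat)
        if score > best_score:
            best, best_score = name, score
    return best if best_score >= MATCH_THRESHOLD else None

def handle_gesture(device, frame_features):
    gesture = classify_gesture(frame_features)
    if gesture == "palm_open_to_fist" and not device["muted"]:
        device["muted"] = True      # mute: turn the microphone off
    elif gesture == "fist_to_palm_open" and device["muted"]:
        device["muted"] = False     # unmute: turn the microphone on
    return device
```

A frame whose features match neither stored gesture closely enough leaves the device state unchanged, mirroring the "match succeeds" condition in the text.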
  • the electronic device 10 performs environment detection according to the image data collected by the camera 100.
  • the electronic device 10 may store features corresponding to the environment template.
  • the environment template may include the driving position environment in the vehicle.
  • the electronic device 10 matches the features of the image data collected by the camera 100 with the features of the driving position environment in the vehicle. If the match succeeds, an analysis result is output indicating: the driving position environment in the vehicle.
  • when the detection result indicates "the driving position environment in the vehicle" and driving mode is not currently set, the electronic device 10 may be set to driving mode according to the analysis result.
  • when the detection result indicates "the driving position environment in the vehicle" but the electronic device 10 detects that driving mode is already set, the process ends.
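The environment-detection flow above can be sketched in the same template-matching style; the template features, the similarity threshold, and the `state` dictionary are illustrative assumptions.

```python
# Sketch of environment detection: match frame features against
# stored environment templates and, on a driving-seat match, switch
# the device into driving mode. All values below are assumed.

ENV_TEMPLATES = {
    "in_car_driving_seat": [0.8, 0.3, 0.5],
    "indoor": [0.2, 0.9, 0.1],
}
THRESHOLD = 0.99  # assumed


def _similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)


def detect_environment(features):
    scored = {name: _similarity(features, t) for name, t in ENV_TEMPLATES.items()}
    name = max(scored, key=scored.get)
    return name if scored[name] >= THRESHOLD else None


def update_mode(state, features):
    # Enter driving mode only when the driving-seat environment is
    # detected; otherwise leave the current state unchanged.
    if detect_environment(features) == "in_car_driving_seat":
        state["driving_mode"] = True
    return state
```

If driving mode is already set, `update_mode` simply leaves it set, matching the "it ends" branch in the text.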
  • different environment-detection scenarios place different requirements on the color depth of the image data output by the camera sensor and on the resolution of the captured image. Some scenarios require a relatively high color depth and resolution: for example, the electronic device 10 may need to identify a particular position on an indoor ceiling, so that when the electronic device 10 is placed on a desktop and its camera captures that ceiling position, the electronic device 10 prompts the weather conditions.
  • in that case, the color depth of the image data output by the sensor of the camera 100 and the resolution of the captured image may be higher: for example, a resolution of 200×200 and a color depth of 4 bits.
  • in other scenarios, the camera is only required to be capable of face detection, for example so that the display screen stays always on while the user is watching it.
  • in that case, the color depth of the image data output by the sensor of the camera 100 and the resolution of the captured image may be lower: for example, a resolution of 144×96 or 128×96 and a color depth of 3 bits.
  • the electronic device 10 can detect some obvious and typical scenes through the camera 100, such as indoor environment, outdoor environment, and the aforementioned driving environment in a vehicle.
  • the camera 100 can only detect obvious and typical scenes, and will not continuously detect and track the user's precise, detailed environment or facial details. In this way, some scenes can be identified without accurate user information being detected and tracked, which reduces leakage of user privacy information.
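The privacy argument can be made concrete by comparing raw bits per frame, using figures stated in this application: 144×96 at 3 bits for one variant of the camera 100, and 2560×1920 at 8 bits for the camera 200. The per-frame comparison itself is only an illustration.

```python
# Raw bits per frame for the always-on camera versus the main
# camera, using resolutions and color depths from this application.

def frame_bits(width, height, color_depth_bits):
    # One color sample per image-sensitive unit, as described above.
    return width * height * color_depth_bits

low = frame_bits(144, 96, 3)        # always-on camera (one variant)
high = frame_bits(2560, 1920, 8)    # main camera (camera 200)

print(low, high, high // low)       # the low-res frame is ~948x smaller
```

The roughly three-orders-of-magnitude gap in raw data per frame is one way to quantify "carries less information".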
  • the electronic device may also be implemented as a smart camera, a smart home appliance (refrigerator, desk lamp, air purifier, etc.) that includes a camera, and so on.
  • a smart camera is used for home monitoring.
  • FIG. 6 is a schematic structural diagram of a hardware module 600 and a software module 700 of a smart camera provided by an embodiment of the present application.
  • the camera can be used in home surveillance scenarios.
  • the smart camera includes a hardware module 600 and a software module 700.
  • the hardware module 600 includes a lens group 610, a sensor 620, and an image signal processing unit 630.
  • the smart camera in the embodiment of the present application has a low color depth and the captured image has a low resolution.
  • the resolution of the image taken by the smart camera is 144 ⁇ 96
  • the color depth is 4 bits
  • each pixel can output one of 16 levels (2 to the 4th power) of white or of one of the three primary colors (red, green, and blue).
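The 16-levels figure follows directly from the 4-bit depth (16 = 2^4). A small sketch; the truncation mapping used to re-quantize an 8-bit value down to 4 bits is an assumption for illustration, not a step claimed by the embodiment.

```python
# Levels per channel for a given color depth, plus an assumed
# re-quantization from 8-bit values (0..255) to 4-bit (0..15).

def levels(color_depth_bits):
    return 2 ** color_depth_bits

def requantize_8_to_4(value_8bit):
    # Drop the low 4 bits, keeping only 16 coarse levels.
    return value_8bit >> 4

print(levels(4), levels(8))      # 16 256
print(requantize_8_to_4(255))    # 15
```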
  • the sensor 620 gives the smart camera its low color depth and the low resolution of its captured images.
  • for the lens group 610, the sensor 620, and the image signal processing unit 630, reference may be made to the description of the lens group 410, the sensor 420, and the image signal processing unit 430 in the example shown in FIG. 3, which will not be repeated here.
  • the software module 700 receives image data from the image signal processing unit 630 to perform human body detection, posture detection, and the like.
  • the human body detection is used to determine whether the image data collected by the hardware module 600 is human body image data.
  • the posture detection is used to identify the posture of the human body in the human body image data detected by the hardware module 600, for example, postures such as falling, standing up, and lying down.
  • the software module 700 outputs analysis results after performing human body detection and posture detection, and the smart camera performs operations according to the analysis results.
  • the following are examples of human body detection and posture detection.
  • the smart camera matches the features of the collected image data with pre-stored human body features. If the match succeeds, it outputs an analysis result indicating: a human body image. If the user has set the smart camera to send an alarm message to the user's mobile phone when a human body image is detected, then when the detection result indicates "human body image", the smart camera sends an alarm message to the user's mobile phone according to the analysis result. Specifically, the smart camera may store a first instruction, which indicates that an alarm message is to be sent to the terminal (the user's mobile phone) when a human body image is detected.
  • the smart camera matches the features of the collected image data with pre-stored human posture features. If the fall posture is matched successfully, it outputs an analysis result indicating: fall posture. If the user has set the smart camera to send an alarm message to the user's mobile phone and make an emergency call when a fall is detected, then when the detection result indicates "fall posture", the smart camera sends an alarm message to the user's mobile phone or makes an emergency call according to the analysis result. Specifically, the smart camera may store a second instruction, which indicates that an alarm message is to be sent to the terminal (the user's mobile phone) or an emergency call is to be made when the fall posture is detected.
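The two detection-to-action rules above can be sketched as a small dispatch table. The detection labels, instruction names, and action strings are hypothetical placeholders standing in for the stored first and second instructions.

```python
# Sketch of the smart camera's detection -> action dispatch.
# 'instructions' is the set of user-configured rules; 'actions'
# collects what the camera would do, purely for illustration.

def dispatch(detection, instructions, actions):
    if detection == "human_body" and "alarm_on_human" in instructions:
        actions.append("send_alarm_to_phone")     # first instruction
    if detection == "fall_posture" and "alarm_on_fall" in instructions:
        actions.append("send_alarm_to_phone")     # second instruction
        actions.append("make_emergency_call")
    return actions
```

With no matching stored instruction, a detection produces no action, mirroring the "if the user has set ..." condition in the text.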
  • the always-on smart camera has low resolution and low color depth, which reduces the harm to the security and privacy of user information caused by leakage of information-rich captured images, and improves the security of user information.
  • the smart camera is not limited to performing human body detection and posture detection, and can also be used for other detections.
  • the above examples of human body detection and posture detection on the smart camera are only used to explain the embodiments of the present application and should not constitute a limitation.
  • the smart camera can also perform other operations according to the detection result, which is not limited in the embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a smart camera provided by an embodiment of the present application.
  • the smart camera 20 includes a hardware module 600, one or more processors 800, a memory 900, and a communication interface 1000.
  • the hardware module 600, the processor 800, the memory 900, and the communication interface 1000 can be connected through a bus or in other ways.
  • the embodiment of the present application takes the connection through the bus 1100 as an example.
  • the hardware module 600 may be the hardware module 600 in the example described in FIG. 6. Among them:
  • the processor 800 may be composed of one or more general-purpose processors, such as a CPU.
  • the communication interface 1000 may be a wired interface (for example, an Ethernet interface) or a wireless interface (for example, a cellular network interface or a wireless local area network interface) for communicating with other nodes.
  • the communication interface 1000 may be specifically used to communicate with electronic devices such as mobile phones.
  • the memory 900 may include volatile memory, such as RAM; it may also include non-volatile memory, such as ROM, flash memory, HDD, or SSD; the memory 900 may also include a combination of the aforementioned types of memories.
  • the memory 900 may be used to store a set of program codes, so that the processor 800 can call the program codes stored in the memory 900 to implement the software module in the example shown in FIG. 6.
  • the smart camera shown in FIG. 7 is only an implementation manner of the embodiment of the present application. In practical applications, the smart camera may also include more or fewer components, which is not limited here.
  • the above-mentioned low-resolution, low-color-depth camera can also be applied to a speaker containing a camera, a vehicle-mounted camera, or a smart home device containing a camera (such as a smart refrigerator).
  • the structure of the above-mentioned speaker, vehicle-mounted camera and smart home equipment can refer to the example shown in FIG. 7.
  • in the case of a speaker, the device also includes loudspeakers.
  • in the case of a smart refrigerator, the device also includes a refrigeration system. It may also include other, more or fewer components, which are not limited here.
  • the electronic device may include the above-mentioned smart camera, speaker, vehicle-mounted camera, or smart home equipment including the camera.
  • FIG. 8 is a schematic flowchart of an image recognition method provided by an embodiment of the present application. As shown in Figure 8, the image recognition method includes S101 to S102.
  • the electronic device calls the first camera to collect first image data.
  • the first camera includes a first sensor; the first sensor includes a first image-sensitive unit array; the number of image-sensitive units in the first image-sensitive unit array is less than or equal to 40,000; the color depth of the image data output by the first sensor is a first color depth; and the first color depth is less than 8 bits.
  • the first camera may be the camera 100 shown in FIG. 1, and its structure may refer to the example shown in FIG. 3.
  • the first color depth is 4 bits
  • the maximum numbers of image-sensitive units in the horizontal and vertical directions of the image-sensitive unit array are 128 and 96, respectively, or 144 and 96, respectively.
  • the first camera is always turned on when the electronic device is working.
  • the electronic device performs image recognition on the first image data, and performs related operations according to the recognition result and the current state.
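Steps S101-S102 can be sketched as follows. The parameter check mirrors the claimed limits (at most 40,000 image-sensitive units, color depth below 8 bits), while the result/state table in `related_operation` is an illustrative assumption covering a few of the scenarios described in this application.

```python
# Sketch of the two-step method of FIG. 8 (S101-S102). The camera
# check mirrors the claimed limits; the result/state -> operation
# table is an illustrative assumption, not an exhaustive list.

def check_first_camera(num_image_sensitive_units, color_depth_bits):
    # Claimed constraints on the first camera (S101):
    # <= 40000 image-sensitive units and a color depth below 8 bits.
    return num_image_sensitive_units <= 40000 and color_depth_bits < 8

def related_operation(recognition_result, current_state):
    # S102: choose a related operation from the recognition result
    # and the current state of the electronic device.
    table = {
        ("face", "locked"): "call_second_camera_for_face_unlock",
        ("face", "screen_on"): "set_display_always_on",
        ("driving_seat", "working"): "enable_driving_mode",
    }
    return table.get((recognition_result, current_state), "no_op")

print(check_first_camera(128 * 96, 4))   # 12288 units, 4-bit depth
print(related_operation("face", "locked"))
```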
  • compared with the second camera, the first camera has a low color depth and its captured images have a low resolution, carrying less information. Therefore, the harm to the security and privacy of user information caused by leakage of information-rich images captured by the first camera is reduced, and the security of user information is improved.
  • the current state is that the electronic device is in a locked state; the electronic device performs related operations, including: the electronic device calls the second camera to perform face unlocking.
  • the first camera is always on to collect image data, and compared with the second camera, the first camera has a low color depth and the captured image has a low resolution, which carries less information. This reduces the damage to the security and privacy of the user information caused by the leakage of the rich information of the images taken by the always-on camera, and improves the security of the user information.
  • the second camera is used for face recognition to unlock the electronic device.
  • the second camera has high color depth and high-resolution images, which can improve the accuracy of face recognition while ensuring user information security.
  • the second camera includes a second sensor; the second sensor includes a second image-sensitive unit array; the number of image-sensitive units in the second image-sensitive unit array is greater than the number of image-sensitive units in the first image-sensitive unit array; the color depth of the image data output by the second sensor is a second color depth; and the second color depth is greater than the first color depth.
  • the second camera may be the camera 200 in the example shown in FIG. 1.
  • when the area of the first image-sensitive unit array is the same as the area of the second image-sensitive unit array, the density of image-sensitive units on the first image-sensitive unit array is smaller than the density of image-sensitive units on the second image-sensitive unit array.
  • when the density of image-sensitive units on the first image-sensitive unit array is the same as on the second image-sensitive unit array, the area of the first image-sensitive unit array is smaller than the area of the second image-sensitive unit array.
  • the first sensor includes a first analog-to-digital conversion circuit, and the detection accuracy of the analog-side circuit in the first analog-to-digital conversion circuit is related to the first color depth; the second camera includes a second sensor, the second sensor includes a second analog-to-digital conversion circuit, the color depth of the image data output by the second sensor is the second color depth, and the detection accuracy of the analog-side circuit in the second analog-to-digital conversion circuit is related to the second color depth; the detection accuracy of the analog-side circuit in the first analog-to-digital conversion circuit is less than the detection accuracy of the analog-side circuit in the second analog-to-digital conversion circuit, and the first color depth is less than the second color depth.
  • the recognition result is that a human face image is detected, and the current state is that the electronic device is in a bright screen state; the electronic device performs related operations, including: setting the display screen of the electronic device to a normally bright state.
  • the first camera that is always on can detect the face image in real time, and if the face image is detected, the display screen is set to be always on, thereby bringing convenience to the user.
  • the always-on first camera has a low color depth and the captured image has a low resolution, carrying less information. Therefore, the damage to the security and privacy of the user information caused by the leakage of the rich information of the image captured by the first camera is reduced, and the security of the user information is improved.
  • the recognition result is that the face image of the first user is detected, and the current state is that the electronic device is in a locked state; the electronic device performs related operations, including: the electronic device calls the second camera to perform face recognition, and if recognition succeeds, the user desktop corresponding to the first user is displayed.
  • the always-on first camera is used to sense the face image and recognize the user, which improves the electronic device's face-image recognition capability, ensures the isolation of information between users of the same electronic device, and improves the convenience of unlocking the electronic device.
  • the second camera, with its high color depth and the high resolution of its captured images, is called to perform face recognition again. The electronic device is unlocked only after this recognition succeeds, which can improve the reliability and security of face unlocking.
  • the current state is that the electronic device displays an application interface; the recognition result is that the angular deviation between the upward direction of the face image and the upward direction of the application interface is greater than or equal to a first threshold; the electronic device performs related operations, including: the electronic device adjusts the upward direction of the application interface so that the angular deviation between the upward direction of the application interface and the upward direction of the face image is less than or equal to the first threshold.
  • the above process please refer to the specific description of the aforementioned face recognition example, which will not be repeated here.
  • the always-on first camera can detect the angle between the upward direction of the face image and the upward direction of the application interface in real time, and switch between horizontal and vertical screens according to the angle, thereby bringing convenience to users.
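The angle check described here can be sketched as follows, using the 45-degree first threshold from the example in this application; the rule of rotating the interface to the nearest 90-degree step is an assumption about how the adjustment would be realized.

```python
# Sketch of the face-orientation check used for landscape/portrait
# switching. FIRST_THRESHOLD comes from the 45-degree example; the
# nearest-90-degree adjustment rule is an assumption.

FIRST_THRESHOLD = 45  # degrees

def angular_deviation(face_up_deg, ui_up_deg):
    # Smallest absolute angle between the two "up" directions (0..180).
    d = abs(face_up_deg - ui_up_deg) % 360
    return min(d, 360 - d)

def adjust_ui(face_up_deg, ui_up_deg):
    # If the deviation reaches the threshold, rotate the interface to
    # the 90-degree step closest to the face's up direction.
    if angular_deviation(face_up_deg, ui_up_deg) < FIRST_THRESHOLD:
        return ui_up_deg
    return (round(face_up_deg / 90.0) * 90) % 360

print(adjust_ui(90, 0))   # face rotated a quarter turn: rotate UI
print(adjust_ui(10, 0))   # small deviation: UI unchanged
```

After adjustment, the deviation is always at most 45 degrees, matching the "less than or equal to the first threshold" condition.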
  • the always-on first camera has a low color depth and the captured image has a low resolution, carrying less information. Therefore, the damage to the security and privacy of the user information caused by the leakage of the rich information of the image captured by the first camera is reduced, and the security of the user information is improved.
  • the recognition result is that the in-vehicle driving-seat environment is detected, and the current state is that the electronic device is in a working state; the electronic device performs related operations, including: the electronic device enables driving mode, in which the electronic device can broadcast incoming-call information and messages, and can perform related operations in response to the user's voice.
  • the foregoing process reference may be made to the specific description of the foregoing environment detection example, which is not repeated here.
  • the always-on first camera can detect the current environment in real time, and switch to the driving mode based on the environment being the driving position, thereby bringing convenience to the user.
  • the always-on first camera has a low color depth and the captured image has a low resolution, carrying less information. Therefore, the damage to the security and privacy of the user information caused by the leakage of the rich information of the image captured by the first camera is reduced, and the security of the user information is improved.
  • the recognition result is that a human body image is detected, and the current state is that the electronic device stores a first instruction, where the first instruction indicates that an alarm message is sent to the terminal when the human body image is detected; the electronic device performs related operations, including: the electronic device sends the alarm message to the terminal.
  • the recognition result is that the fall posture is detected, and the current state is that the electronic device stores a second instruction, where the second instruction indicates that an alarm message is sent to the terminal when the fall posture is detected; the electronic device performs related operations, including: the electronic device sends the alarm message to the terminal.
  • in a home security scenario, the always-on first camera can detect human body images in real time and raise an alarm per the settings when a human body image or a fall posture is detected, thereby bringing convenience to users.
  • the normally-on first camera has a low color depth and the captured image has a low resolution, carrying less information. Therefore, the damage to the security and privacy of the user information caused by the leakage of the rich information of the image captured by the first camera is reduced, and the security of the user information is improved.
  • An embodiment of the present application further provides an electronic device, including: one or more processors, a memory, and a first camera, wherein: the first camera includes a first sensor; the first sensor includes a first image-sensitive unit array; the number of image-sensitive units in the first image-sensitive unit array is less than or equal to 40,000; the color depth of the image data output by the first sensor is a first color depth; and the first color depth is less than 8 bits;
  • the memory is used to store computer program codes, the computer program codes including computer instructions, when the one or more processors execute the computer instructions, the electronic device is caused to execute the image recognition method described in FIG. 8.
  • the electronic device may be the electronic device shown in FIG. 1 or FIG. 2.
  • the embodiment of the present application also provides a computer-readable storage medium storing instructions which, when run on a computer or a processor, cause the computer or the processor to execute one or more steps of any of the above methods.
  • the embodiments of the present application also provide a computer program product containing instructions.
  • the computer program product runs on a computer or a processor, the computer or the processor is caused to execute one or more steps in any of the above methods.
  • all or part of the functions can be implemented by software, hardware, or a combination of software and hardware.
  • when implemented by software, the functions can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium.
  • the computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or a data center integrating one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
  • the process can be completed by a computer program instructing relevant hardware.
  • the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the foregoing method embodiments.
  • the aforementioned storage media include: ROM, random access memory (RAM), magnetic disks, optical discs, and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Vascular Medicine (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of this application provide an image recognition method and an electronic device. The method includes: an electronic device calls a first camera to collect first image data; the first camera includes a first sensor, the first sensor includes a first image-sensitive unit array, the number of image-sensitive units in the first image-sensitive unit array is less than or equal to 40,000, the color depth of the image data output by the first sensor is a first color depth, and the first color depth is less than 8 bits. Implementing the embodiments of this application can improve the security of user information.

Description

Image recognition method and electronic device
This application claims priority to the Chinese patent application No. 201910289400.1, entitled "Image recognition method and electronic device", filed with the China National Intellectual Property Administration on April 11, 2019, the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the field of electronic technology, and in particular to an image recognition method and an electronic device.
Background
With the continuous development of electronic devices such as mobile phones, tablets, and smart TVs, these devices are used ever more widely in daily life. At present, a conventional camera can be provided on an electronic device to collect image data for environment detection, face detection, face recognition, and the like.
However, keeping the camera on an electronic device open for a long time or too frequently captures rich private information such as the user's appearance, behavior, and living environment. If this information is leaked, it harms the security and privacy of user information.
Summary of the invention
This application discloses an image recognition method and an electronic device, which can improve the security of user information.
In a first aspect, an embodiment of this application provides an image recognition method. The method includes: an electronic device calls a first camera to collect first image data; the first camera includes a first sensor, the first sensor includes a first image-sensitive unit array, the number of image-sensitive units in the first image-sensitive unit array is less than or equal to 40,000, the color depth of the image data output by the first sensor is a first color depth, and the first color depth is less than 8 bits.
In the above image recognition method, compared with an existing camera, the first camera has a low color depth and its captured images have a low resolution, carrying less information. This reduces the harm to the security and privacy of user information caused by leakage of information-rich images captured by the first camera, and improves the security of user information.
The resolution of images captured by the camera is determined by the number of image-sensitive units in the first image-sensitive unit array of the first camera. Optionally, the image-sensitive unit array is an array formed by arranging multiple image-sensitive units, for example uniformly in a rectangle; if the numbers of image-sensitive units in the horizontal and vertical directions of the array are 128 and 96 respectively, that is, the array is a 128×96 array, then the resolution of images captured by the first camera is 128×96.
Optionally, the image-sensitive unit array may be arranged as a circular array, in which case the number of image-sensitive units along the diameter of the circular array is less than or equal to 200, for example 128.
Optionally, the color depth of the image data output by the first sensor is 4 bits, and each pixel of the first sensor can describe 16 levels of white or of one of the three primary colors red, green, and blue.
In some embodiments of this application, the first camera is always on while the electronic device is working. The always-on first camera has a low color depth and its captured images have a low resolution, carrying less information and improving the security of user information.
In some embodiments of this application, after the electronic device calls the first camera to collect the first image data, the method further includes: the electronic device performs image recognition on the first image data, and performs related operations according to the recognition result and the current state.
In some embodiments of this application, the first color depth is 4 bits, and the maximum numbers of image-sensitive units in the horizontal and vertical directions of the image-sensitive unit array are 128 and 96 respectively, or 144 and 96 respectively.
In some embodiments of this application, the recognition result is that a face image is detected, and the current state is that the electronic device is in a locked state; the electronic device performing related operations includes: the electronic device calls a second camera to perform face unlocking. The second camera includes a second sensor; the second sensor includes a second image-sensitive unit array; the number of image-sensitive units in the second image-sensitive unit array is greater than the number of image-sensitive units in the first image-sensitive unit array; the color depth of the image data output by the second sensor is a second color depth; and the second color depth is greater than the first color depth. The second camera may be the camera 200 in the example shown in FIG. 1.
In the above method, the first camera is always on to collect image data, and compared with the second camera, the first camera has a low color depth and its captured images have a low resolution, carrying less information. This reduces the harm to the security and privacy of user information caused by leakage of information-rich images captured by an always-on camera, and improves the security of user information. The second camera is used for face recognition to unlock the electronic device; the second camera has a high color depth and captures high-resolution images, which can improve the accuracy of face recognition while ensuring the security of user information.
In some embodiments of this application, the recognition result is that a face image is detected, and the current state is that the electronic device's screen is on; the electronic device performing related operations includes: setting the display of the electronic device to an always-on state. In this method, the always-on first camera can detect a face image in real time and, on detection, keep the display always on, bringing convenience to the user. Compared with the second camera, the always-on first camera has a low color depth and its captured images have a low resolution, carrying less information, which reduces the harm to the security and privacy of user information caused by leakage of information-rich images and improves the security of user information.
In some embodiments of this application, the recognition result is that the face image of a first user is detected, and the current state is that the electronic device is in a locked state; the electronic device performing related operations includes: the electronic device calls the second camera to perform face recognition, and if recognition succeeds, displays the user desktop corresponding to the first user. In this scheme, in which the first and second cameras cooperate to unlock the electronic device, the always-on first camera senses the face image and identifies the user, which improves the electronic device's face-recognition capability, ensures isolation of information between users of the same electronic device, and improves the convenience of unlocking. In addition, after face recognition with the first camera succeeds, the second camera, with its high color depth and high captured-image resolution, is called to perform face recognition again; the electronic device is unlocked only after that recognition succeeds, which improves the reliability and security of face unlocking.
Compared with the first camera, the image data output by the sensor of the second camera has a high color depth and its captured images have a high resolution; the second camera is turned on and collects image data only when called. When the electronic device is in the locked state, the user's authority to operate the electronic device is restricted; only after unlocking can the user operate the device.
Optionally, the recognition result is that a gesture image is detected, and the current state is that the electronic device is in a call; the electronic device performing related operations includes: the electronic device turns the microphone off or on. The electronic device stores a correspondence between a gesture image and turning the microphone off, or a correspondence between a gesture image and turning the microphone on.
Exemplarily, the electronic device stores a correspondence between the palm-open-to-fist gesture image and turning off the microphone, and a correspondence between the fist-to-palm-open gesture image and turning on the microphone.
In some embodiments of this application, the current state is that the electronic device displays an application interface; the recognition result is that the angular deviation between the upward direction of the face image and the upward direction of the application interface is greater than or equal to a first threshold; the electronic device performing related operations includes: the electronic device adjusts the upward direction of the application interface so that the angular deviation between the upward direction of the application interface and the upward direction of the face image is less than or equal to the first threshold. In this method, the always-on first camera can detect this angle in real time and switch between landscape and portrait accordingly, bringing convenience to the user. Compared with the second camera, the always-on first camera has a low color depth and its captured images have a low resolution, carrying less information, which reduces the harm caused by leakage of information-rich images and improves the security of user information.
Exemplarily, the first threshold is 45 degrees.
In some embodiments of this application, the recognition result is that the in-vehicle driving-seat environment is detected, and the current state is that the electronic device is working; the electronic device performing related operations includes: the electronic device enables driving mode, in which the electronic device can broadcast incoming-call information and messages and can perform related operations in response to the user's voice. In this method, the always-on first camera can detect the current environment in real time and switch to driving mode when the environment is the driving seat, bringing convenience to the user. Compared with the second camera, the always-on first camera has a low color depth and low captured-image resolution, carrying less information, which reduces the harm caused by leakage and improves the security of user information.
In some embodiments of this application, when the area of the first image-sensitive unit array and the area of the second image-sensitive unit array are the same, the density of image-sensitive units on the first image-sensitive unit array is smaller than the density of image-sensitive units on the second image-sensitive unit array;
when the density of image-sensitive units on the first image-sensitive unit array and on the second image-sensitive unit array is the same, the area of the first image-sensitive unit array is smaller than the area of the second image-sensitive unit array.
In some embodiments of this application, the first sensor includes a first analog-to-digital conversion circuit, and the detection accuracy of the analog-side circuit in the first analog-to-digital conversion circuit is related to the first color depth; the second camera includes a second sensor, the second sensor includes a second analog-to-digital conversion circuit, the color depth of the image data output by the second sensor is a second color depth, and the detection accuracy of the analog-side circuit in the second analog-to-digital conversion circuit is related to the second color depth; the detection accuracy of the analog-side circuit in the first analog-to-digital conversion circuit is less than that in the second analog-to-digital conversion circuit, and the first color depth is less than the second color depth.
Exemplarily, the level detection accuracy of the analog-side circuit can be reduced from 1/256 to 1/16 to reduce the color depth of the image data output by the sensor of the camera 100.
In some embodiments of this application, the recognition result is that a human body image is detected, and the current state is that the electronic device stores a first instruction, the first instruction indicating that an alarm message is sent to a terminal when the human body image is detected; the electronic device performing related operations includes: the electronic device sends the alarm message to the terminal.
In some embodiments of this application, the recognition result is that a fall posture is detected, and the current state is that the electronic device stores a second instruction, the second instruction indicating that an alarm message is sent to the terminal when the fall posture is detected; the electronic device performing related operations includes: the electronic device sends the alarm message to the terminal.
In the above method, the always-on first camera in a home security scenario can detect human body images in real time and raise an alarm per the settings when a human body image or a fall posture is detected, bringing convenience to the user. Compared with existing cameras, the always-on first camera has a low color depth and its captured images have a low resolution, carrying less information. This reduces the harm to the security and privacy of user information caused by leakage of information-rich captured images, and improves the security of user information.
In a second aspect, this application provides an electronic device, including: one or more processors, a memory, and a first camera, where: the first camera includes a first sensor; the first sensor includes a first image-sensitive unit array; the number of image-sensitive units in the first image-sensitive unit array is less than or equal to 40,000; the color depth of the image data output by the first sensor is a first color depth; and the first color depth is less than 8 bits. The memory is used to store computer program code, the computer program code including computer instructions which, when executed by the one or more processors, cause the electronic device to execute the image recognition method described in the first aspect or any possible implementation of the first aspect.
In a third aspect, this application provides a computer storage medium including computer instructions which, when run on an electronic device, cause the electronic device to execute the method provided by the first aspect or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of this application provides a computer program product which, when run on a computer, causes the computer to execute the method provided by the first aspect or any possible implementation of the first aspect.
It can be understood that the electronic device of the second aspect, the computer storage medium of the third aspect, and the computer program product of the fourth aspect are all used to execute the method provided by the first aspect or any possible implementation of the first aspect. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding method, which are not repeated here.
Brief description of the drawings
The drawings used in the embodiments of this application are introduced below.
FIG. 1 is a schematic structural diagram of cameras on an electronic device according to an embodiment of this application;
FIG. 2 is a schematic structural diagram of an electronic device 10 according to an embodiment of this application;
FIG. 3 is a schematic structural diagram of a hardware module 400 and a software module 500 of a camera 100 according to an embodiment of this application;
FIG. 4 is a schematic diagram of landscape/portrait switching according to an embodiment of this application;
FIG. 5 is a schematic diagram of an image captured by a camera 100 according to an embodiment of this application;
FIG. 6 is a schematic structural diagram of a hardware module 600 and a software module 700 of another smart camera according to an embodiment of this application;
FIG. 7 is a schematic structural diagram of a smart camera according to an embodiment of this application;
FIG. 8 is a schematic flowchart of an image recognition method according to an embodiment of this application.
Detailed description of the embodiments
The embodiments of this application are described below with reference to the drawings. The terms used in the implementation section are only for explaining specific embodiments of this application and are not intended to limit this application.
The application scenarios involved in the embodiments of this application are introduced below. An electronic device is equipped with components such as a camera, a microphone, a global positioning system (GPS) chip, and various sensors (for example, a magnetic field sensor, a gravity sensor, a gyroscope sensor, etc.) to sense the external environment, the user's actions, and so on. Based on the sensed external environment and user actions, the electronic device provides the user with personalized, context-aware service experiences. Among these components, the camera can acquire rich and accurate information that allows the electronic device to sense the external environment and the user's actions.
To sense the external environment and the user's actions, the camera on the electronic device may be always on while the device is powered on. Here, "always on" means that, while the electronic device is powered on and working, the camera is continuously in a working state without needing to be called, and can collect image data on which the electronic device can perform image recognition. In the embodiments of this application, the display of a powered-on electronic device may be in either a screen-off or a screen-on state. Specifically, after power-on and in the screen-off state, the electronic device can still receive messages (such as instant messaging application messages) and perform functions such as positioning and step counting; at this time the always-on camera is also working and collecting image data, on which the electronic device can perform image recognition so as to sense the external environment, the user's actions, and so on. After power-on and in the screen-on state, the always-on camera similarly works and collects image data.
Examples of using the always-on camera to provide personalized, context-aware service experiences are given below.
Example 1: the front camera of the electronic device is always on. When a human face is detected, an analysis result is output so that the display of the electronic device stays always on, keeping the display on while the user is watching it.
Example 2: the front camera is always on. When a human face is detected, the electronic device determines, from the shooting direction of the detected face image, whether the user is using the electronic device in landscape or portrait orientation, and displays in landscape if the former and in portrait if the latter.
Example 3: the always-on camera collects image data for detection. If the electronic device determines from the detected image data that the current environment is the in-vehicle driving-seat environment, it switches to driving mode. In driving mode, the electronic device can display a navigation page to perform navigation, and can respond to the user's voice commands to dial a number or play audio. Exemplarily, in driving mode the electronic device can automatically announce incoming calls or text messages without manual operation. There is also a voice function: the user taps a button on the electronic device and speaks, and the device executes the command, for example "play music" or "call so-and-so".
Example 4: with the display in the screen-off state, the front camera is always on. When the front camera detects face image data, the electronic device performs face recognition on the collected face image data and unlocks the device after recognition succeeds. After unlocking, the electronic device can display the desktop, which contains application icons. An application icon can respond to a user operation to call an application, for example "Camera", "Music", or "Video".
It can be understood that the above examples of application scenarios of an always-on camera are only used to explain the embodiments of this application and should not constitute a limitation; an always-on camera can also be used in other scenarios, which is not limited in the embodiments of this application.
本申请实施例提供一种摄像头,该摄像头可应用在电子设备上。电子设备在开机状态下,该摄像头常开。电子设备可以实现为以下任意一种包含摄像头的设备:手机、平板电脑(pad)、便携式游戏机、掌上电脑(personal digital assistant,PDA)、笔记本电脑、超级移动个人计算机(ultra mobile personal computer,UMPC)、手持计算机、上网本、车载媒体播放设备、可穿戴电子设备、虚拟现实(virtual reality,VR)终端设备、增强现实(augmented reality,AR)终端设备等数码产品。
请参阅图1,图1是本申请实施例提供的一种电子设备上摄像头的结构示意图。如图1所示,电子设备10上可包含摄像头100、摄像头200和显示屏300。其中,摄像头100具有低色彩深度且拍摄的图像具有低分辨率。在电子设备10开机状态下该摄像头100常开。摄像头100用于感知外部的环境、用户的动作。摄像头200可用于采集人脸图像数据进行人脸识别,进而实现电子设备10解锁、身份验签、应用解锁等。与摄像头100相比,摄像头200传感器输出的图像数据具有更高的色彩深度且拍摄的图像具有更高的分辨率。
示例性的,摄像头100拍摄图像的分辨率为200×200,即拍摄的图像每边像素值为200。摄像头200拍摄图像的分辨率为2560×1920,即拍摄的图像宽度方向的像素值为2560,高度方向的像素值为1920。摄像头100传感器输出的图像数据的色彩深度为4位,每个像素可输出白或三基色(红、绿、蓝)中某一种的16个等级(2的4次方)的图像数据。摄像头200传感器输出的图像数据的色彩深度为8位,每个像素可输出白或三基色(红、绿、蓝)中某一种的256个等级(2的8次方)的图像数据。
上述电子设备10中,利用常开的摄像头100感知外部的环境、用户的动作,从而可提升电子设备感知环境的能力,提高用户使用电子设备的便利性。与摄像头200相比,常开的摄像头100具有低色彩深度且拍摄的图像具有低分辨率,可减少拍摄的图像上丰富的信息被泄露对用户信息的安全性和私密性造成的危害,提高用户信息的安全性。
下面介绍本申请实施例涉及的概念。
(1)分辨率
对于同样焦距摄像头来说,像素越多的摄像头输出的图像包含的细节就越多,图像越清晰。图像分辨率可表示为每一个方向上的像素数量。例如分辨率640×480,表示摄像头拍摄的图像宽度方向的像素数量为640,高度方向的像素数量为480,可由307200个像素(约为30万像素)的摄像头拍摄得到。再例如,一张分辨率为1600×1200的图像,可由像素数量为1920000的摄像头拍摄得到。
本申请实施例中,摄像头拍摄图像的分辨率由摄像头中像敏单元阵列中像敏单元的数量确定。关于像敏单元阵列的描述可参考图3所描述示例介绍。示例性的,如果摄像头的像敏单元阵列为128×96的阵列,则摄像头拍摄图像的分辨率为128×96。
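上述分辨率与像素数量的换算关系,可用如下示意性的Python代码帮助理解(仅为说明性计算,函数名为本文示例的假设,并非本申请实施例的实现):

```python
def pixel_count(width: int, height: int) -> int:
    """由图像宽、高方向的像素数量计算总像素数量。"""
    return width * height

# 分辨率 640×480 对应 307200 个像素(约 30 万像素)
assert pixel_count(640, 480) == 307200
# 分辨率 1600×1200 对应 1920000 个像素
assert pixel_count(1600, 1200) == 1920000
```

同理,200×200 的像敏单元阵列对应 40000 个像敏单元,即前述“第一像敏单元阵列中像敏单元的数量小于或等于40000”。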
(2)色彩深度
色彩深度又称色彩位数,以二进制的位(bit)为单位,表示记录色调的数量。色彩深度越大的图像,具有越大的色彩范围,图像越能更精细地还原真实景象每种颜色亮部及暗部的细节。例如,色彩深度为24位的图像,理论上可表示16777216(2的24次方)种颜色。
具有某一色彩深度的图像数据是使用摄像头的传感器输出的另一色彩深度的图像数据通过去马赛克算法算得,摄像头传感器输出的图像数据的色彩深度决定了图像的色彩深度。一般摄像头传感器输出的图像数据至少为8位的色彩深度,即把白色或三基色(红、绿、蓝)中的某一种颜色分为(2的8次方)256个不同的等级。因此,摄像头传感器输出的图像数据的色彩深度越大,拍摄的图像的色彩深度就越大,图像就越能真实地还原色彩,图像携带越多被拍对象的信息。
图2是本申请实施例提供的一种电子设备10的结构示意图。
电子设备10可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备10的具体限定。在本申请另一些实施例中,电子设备10可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器 (application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备10的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备10的结构限定。在本申请另一些实施例中,电子设备10也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。
电子设备10的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。
移动通信模块150可以提供应用在电子设备10上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。
在一些实施例中,电子设备10的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备10可以通过无线通信技术与网络以及其他设备通信。
电子设备10通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。在一些实施例中,电子设备10可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备10可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。该感光元件即为图3所描述的传感器420中的像敏单元阵列或者图6所描述的传感器620中的像敏单元阵列。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备10可以包括1个或N个摄像头193,N为大于1的正整数。N个摄像头193可包含图1所示出的摄像头100和摄像头200。
其中,图3所描述的摄像头100中的传感器420和图6所描述的传感器620,不包含在传感器模块180中,图3所描述的传感器420和图6所描述的传感器620是摄像头的组成部分。图3所描述的传感器420中的像敏单元阵列和图6所描述的传感器620中的像敏单元阵列即前述摄像头193中的感光元件。传感器420中的像敏单元阵列和传感器620中的像敏单元阵列例如是电荷耦合器件CCD图像传感器,或者CMOS图像传感器。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备10在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备10可以支持一种或多种视频编解码器。这样,电子设备10可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备10的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。例如实现本申请实施例中人脸检测示例、人脸识别示例、手势检测示例和环境检测示例。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备10的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备10的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器。
电子设备10可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备10可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备10接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。
耳机接口170D用于连接有线耳机。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。
陀螺仪传感器180B可以用于确定电子设备10的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备10围绕三个轴(即,x,y和z轴)的角速度。
气压传感器180C用于测量气压。在一些实施例中,电子设备10通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。电子设备10可以利用磁传感器180D检测翻盖皮套的开合。
加速度传感器180E可检测电子设备10在各个方向上(一般为三轴)加速度的大小。当电子设备10静止时可检测出重力的大小及方向。
距离传感器180F,用于测量距离。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。
环境光传感器180L用于感知环境光亮度。
指纹传感器180H用于采集指纹。电子设备10可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,电子设备10利用温度传感器180J检测的温度,执行温度处理策略。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备10可以接收按键输入,产生与电子设备10的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。
下面介绍本申请实施例提供的一种摄像头100的硬件模块和软件模块的结构示例。摄像头100即图1所描述示例中的常开摄像头100,可应用在手机等电子设备10上。请参阅图3,图3是本申请实施例提供的一种摄像头100的硬件模块400和软件模块500的结构示意图。如图3所示,摄像头100的硬件模块400包含镜头组410、传感器420和图像信号处理单元430。其中:
镜头组410可包含一个或多个透镜,用于对进入的光线折射,以在传感器420上成像。
传感器420用于实现光信号到电信号的转换,将来自镜头组的光信号转换为电信号传输给图像信号处理单元430。
具体的,传感器420可包括像敏单元阵列和模数转换(analog to digital,A/D)电路。其中,像敏单元阵列包含多个像敏单元,像敏单元阵列用于接收来自镜头组光线所成的图像,每个像敏单元将其上所成的图像转换成电信号。该电信号是模拟形式的信号。像敏单元的数量确定摄像头100的像素,即确定摄像头100拍摄图像的分辨率。模数转换电路用于将各个像敏单元输出的模拟形式的电信号转换为数字信号。模数转换电路的设计确定摄像头传感器输出的图像数据的色彩深度。示例性的,像敏单元阵列是多个像敏单元排列形成的阵列,例如按照矩形均匀排列,像敏单元阵列横向上和纵向上的像敏单元数量分别为128和96,则摄像头100拍摄图像的分辨率为128×96。再例如,像敏单元阵列可排列成圆形阵列,则该圆形阵列的直径上像敏单元的数量小于或等于200,例如直径上像敏单元的数量为128。
其中,本申请实施例中的摄像头100与摄像头200相比具有低色彩深度且拍摄的图像具有低分辨率。示例性的,摄像头100拍摄图像的分辨率为128×96,摄像头传感器输出的图像数据的色彩深度为4位,每个像素可描述白或者三基色(红、绿、蓝)中某种颜色的16个等级。本申请实施例中,与摄像头200相比,传感器420使得输出的图像数据具有低色彩深度且拍摄的图像具有低分辨率。另外,摄像头100拍摄图像的分辨率还可以是x×y,x和y均为小于或等于200的正整数。色彩深度小于8位。
本申请实施例中,摄像头100具有的色彩深度是指,摄像头100中的传感器输出的图像数据的色彩深度。示例性的,摄像头100具有的色彩深度为4位,即摄像头100中的传感器输出的图像数据的色彩深度为4位。
本申请实施例中,摄像头100为第一摄像头,图3中的传感器420为第一传感器,传感器420包含第一像敏单元阵列,第一像敏单元阵列横向上和纵向上像敏单元的最大数量小于或等于200,第一传感器输出的图像数据的色彩深度为第一色彩深度,第一色彩深度小于8位。另外,摄像头200为第二摄像头,第二摄像头包含第二传感器,第二传感器包含第二像敏单元阵列,第二像敏单元阵列中像敏单元的数量大于第一像敏单元阵列中像敏单元的数量,第二传感器输出的图像数据的色彩深度为第二色彩深度,第二色彩深度大于第一色彩深度。
下面介绍两种获得低分辨率的摄像头100的方式。实现方式一:在现有的高分辨率的摄像头的传感器(例如图1所示示例中摄像头200包含的传感器)的基础上缩小像敏单元阵列的面积,并保持现有的像敏单元的密度不变,从而减小像敏单元的数量以减小摄像头的像素,以实现低分辨率的摄像头100。实现方式二:在现有的高分辨率的摄像头的传感器(例如图1所示示例中摄像头200包含的传感器)的基础上保持像敏单元阵列的面积不变,而增大像敏单元之间的间距,从而减小像敏单元的数量以实现分辨率低的摄像头100。
通过简化A/D电路可降低摄像头100传感器输出的图像数据的色彩深度。具体的,可降低A/D电路中模拟侧电路的电平检测精度来得到色彩深度较低的摄像头100。示例性的,可将模拟侧电路的电平检测精度从1/256降低到1/16,以降低摄像头100传感器输出的图像数据的色彩深度。本申请实施例中,通过上述简化A/D电路的方式可得到色彩深度为4位的摄像头100。
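上述“降低模拟侧电路的电平检测精度(从1/256降低到1/16)”的效果,可用如下示意性的量化函数帮助理解(仅为草图:假设模拟电平已归一化到0.0~1.0,并非实际A/D电路的实现):

```python
def quantize(level: float, bit_depth: int) -> int:
    """将归一化的模拟电平(0.0~1.0)量化为 bit_depth 位的数字值。

    减小 bit_depth 即降低电平检测精度(如从 1/256 到 1/16),
    相当于简化 A/D 电路、降低传感器输出图像数据的色彩深度。
    """
    steps = 2 ** bit_depth
    return min(int(level * steps), steps - 1)

# 同一电平在 8 位与 4 位精度下的量化结果
assert quantize(0.5, 8) == 128   # 1/256 精度:256 个等级中的第 128 级
assert quantize(0.5, 4) == 8     # 1/16 精度:16 个等级中的第 8 级
assert quantize(1.0, 4) == 15    # 最高电平落在最高等级
```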
图像信号处理单元430用于从传感器420接收数字形式的图像数据,实现图像编码。具体的,图像信号处理单元430对接收到的图像数据进行去马赛克处理、自动曝光控制(automatic exposure control,AEC)、自动增益控制(automatic gain control,AGC)、自动白平衡(automatic white balance,AWB)、色彩校正等处理。本申请实施例中,去马赛克处理将每个像敏单元输出的4位的图像数据结合周边若干个其他像敏单元输出的图像数据,计算得出8位的RGB图像数据。即8位为对应图像的色彩深度。图像信号处理单元430可输出图像数据给软件模块500。图像信号处理单元430可由图2所示电子设备中的ISP实现。
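上述“4位图像数据结合周边像敏单元输出计算得出8位数据”的思路,可用如下简化示意说明(真实的去马赛克算法依赖Bayer排列与插值;此处的“邻域平均后线性映射到0~255”仅为帮助理解位深扩展思路的假设性草图):

```python
def expand_depth(value_4bit: int, neighbors: list) -> int:
    """示意:结合周边像敏单元的 4 位输出,近似得到一个 8 位分量。

    对 4 位样本(取值 0~15)做邻域平均,再线性映射到 0~255。
    """
    samples = [value_4bit] + neighbors
    avg = sum(samples) / len(samples)   # 0~15 范围内的平均值
    return round(avg * 255 / 15)        # 线性映射到 8 位范围 0~255

assert expand_depth(15, [15, 15, 15]) == 255  # 满量程 4 位 → 满量程 8 位
assert expand_depth(0, [0, 0, 0]) == 0
```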
如图3所示,软件模块500,接收来自图像处理单元430的图像数据,来实现人脸检测、人脸识别、手势检测和环境检测等。其中,人脸检测用来确定硬件模块400检测的图像数据是否为人脸图像数据。人脸识别用于识别硬件模块400检测到的人脸图像数据所属的用户。手势检测用于确定硬件模块400检测的图像数据是否为手势图像数据。环境检测用于根据硬件模块400检测的图像数据确定当前所处的环境。软件模块500在进行人脸检测、人脸识别、手势检测和环境检测后输出分析结果,电子设备10根据分析结果执行相关操作。下面分别对人脸检测、人脸识别、手势检测和环境检测输出分析结果后执行的操作举例说明。
①人脸检测示例
示例性的,电子设备10通过摄像头100采集图像数据,并将采集的图像数据的特征与预存的人脸特征进行匹配,匹配成功则输出分析结果指示有人脸。当电子设备10通过摄像头100检测到有人脸时,电子设备10保持显示屏常亮状态。如果当前电子设备10处于亮屏状态,电子设备可检测当前显示屏是否被设置为常亮状态。如果检测到当前显示屏未被设置为常亮状态,则电子设备10可根据该分析结果设置显示屏常亮。显示屏被设置为常亮状态后,即使未接收到操作(例如触摸操作),显示屏仍然持续被点亮的状态,直到电子设备的显示屏被关闭常亮状态。电子设备10可以以一定频率对摄像头100采集的图像数据进行人脸检测,当分析结果指示“不是人脸图像”或者指示“是天花板图像”或者指示其他图像时,电子设备10可关闭显示屏的常亮状态。显示屏被关闭常亮状态后,持续一段时间(如1分钟)未接收到操作(例如触摸操作)后,显示屏息屏直到被重新唤醒,例如检测到开机键的单击操作,则显示屏被重新唤醒点亮以进行显示。当分析结果指示“人脸图像”时,如果电子设备10检测到当前显示屏已被设置为常亮状态,则结束。可选的,电子设备10可以在亮屏达到预设时间时,检测是否有人脸,若检测到有人脸,则不息屏,即不执行息屏操作;若未检测到人脸,则息屏。
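上述根据人脸检测结果控制显示屏常亮状态的逻辑,可概括为如下示意代码(函数名与检测结果的取值均为本文示例的假设,并非本申请实施例的实现):

```python
def update_screen_state(detection: str, always_on: bool) -> bool:
    """示意:根据人脸检测的分析结果更新显示屏常亮状态。

    detection 为 "face" 表示检测到人脸图像,其余结果(如
    "ceiling"、"none")表示未检测到人脸;返回新的常亮状态。
    """
    if detection == "face":
        return True    # 检测到人脸:设置(或保持)显示屏常亮
    return False       # 其他结果:关闭显示屏的常亮状态

assert update_screen_state("face", False) is True
assert update_screen_state("ceiling", True) is False
```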
上述方法中,与摄像头200相比,常开的摄像头100可实时检测人脸图像,检测到人脸图像则设置显示屏常亮,从而为用户带来便利性。而与摄像头200相比,该常开的摄像头100具有低色彩深度且拍摄图像具有低分辨率,携带更少的信息。从而减少了摄像头100拍摄的图像丰富的信息被泄露对用户信息的安全性和私密性造成的危害,提高用户信息的安全性。
另外,当电子设备10进行人脸检测得到分析结果指示有人脸时,如果当前电子设备10处于锁定状态,电子设备10还可调用具有高色彩深度且拍摄图像具有高分辨率的摄像头200采集人脸图像数据进行人脸识别。根据摄像头200采集的人脸图像数据进行人脸识别,识别成功则解锁电子设备10,显示用户桌面。用户桌面上可包含应用图标。
摄像头200具有高色彩深度是指,与摄像头100相比,摄像头200中的传感器输出的图像数据的色彩深度更高。示例性的,摄像头200具有的色彩深度为8位,即摄像头200中的传感器输出的图像数据的色彩深度为8位。摄像头200拍摄图像具有高分辨率,是指与摄像头100相比,摄像头200的传感器中,像敏单元阵列的数量多,示例性的,像素数量为1920000。
上述使用常开摄像头100和摄像头200配合实现电子设备10解锁的方案中,将摄像头100常开采集图像数据,而与摄像头200相比,摄像头100具有低色彩深度且拍摄的图像具有低分辨率,携带更少的信息,从而减少了常开摄像头拍摄的图像丰富的信息被泄露对用户信息的安全性和私密性造成的危害,提高用户信息的安全性。另外,使用摄像头200进行人脸识别以解锁电子设备,摄像头200具有高色彩深度且拍摄的图像具有高分辨率,可在保证用户信息安全性的同时提升人脸识别的准确性。
在本申请的一些实施例中,在调用摄像头100检测到人脸时,摄像头100可检测人脸图像的上方向。电子设备可根据该人脸图像的上方向判断用户观看电子设备的方向。如果电子设备显示应用界面,例如即时通信应用界面或者电子书阅读应用界面。当检测到应用界面的上方向与用户人脸图像的上方向偏差超过一定阈值(例如左右各45度),电子设备可根据用户人脸图像的上方向调整应用界面显示方向,以确保用户人脸图像的上方向与应用界面显示上方向保持在阈值之内。
示例性的,请参阅图4,图4是本申请实施例提供的一种横竖屏切换的示意图。如图4中的(a)所示,电子设备10的显示区域300显示应用界面1400,且应用界面1400的上方向平行于电子设备10的长边。电子设备10利用摄像头100采集人脸图像1500,并检测该人脸图像1500的上方向与应用界面1400的上方向之间的夹角是否超过第一阈值,例如45度。如图4中的(a)所示,当该人脸图像1500的上方向与应用界面1400的上方向之间的夹角不超过该第一阈值时,例如夹角为0度,电子设备10仍然竖屏显示应用界面1400。
如图4中的(b)所示,当用户改变电子设备的握持观看方向时,电子设备10利用摄像头100采集人脸图像1500,检测人脸图像1500的上方向,平行于电子设备10的短边。电子设备10检测到该人脸图像1500的上方向与应用界面1400的上方向之间的夹角为90度,超过该第一阈值45度。则如图4中的(c)所示,电子设备10可根据用户人脸图像1500的上方向调整应用界面1400显示方向,以确保用户人脸图像1500的上方向与应用界面1400的上方向保持在该阈值之内。可以理解的,上述对第一阈值是45度的举例仅用于解释本申请实施例,不应构成限定,该第一阈值还可以是更大或更小的角度。
上述的自动进行横竖屏切换的方法中,常开的摄像头100可实时检测人脸图像1500的上方向和应用界面1400的上方向之间的夹角,根据该夹角进行横竖屏切换,从而为用户带来便利性。而与摄像头200相比,该常开的摄像头100具有低色彩深度且拍摄图像具有低分辨率,携带更少的信息。从而减少了摄像头100拍摄的图像丰富的信息被泄露对用户信息的安全性和私密性造成的危害,提高用户信息的安全性。
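上述根据夹角进行横竖屏切换的判断,可用如下示意代码表示(角度折算方式与函数名均为帮助理解的假设):

```python
def should_rotate(face_up_deg: float, ui_up_deg: float,
                  threshold: float = 45.0) -> bool:
    """示意:判断人脸图像上方向与应用界面上方向的夹角是否超过第一阈值。

    角度以度为单位;两方向的差值折算到 0~180 度后与阈值比较。
    """
    diff = abs(face_up_deg - ui_up_deg) % 360
    angle = min(diff, 360 - diff)   # 折算为 0~180 度的夹角
    return angle > threshold        # 超过阈值则需要调整显示方向

# 图4(a):夹角 0 度,不切换;图4(b):夹角 90 度,超过 45 度则切换
assert should_rotate(0, 0) is False
assert should_rotate(90, 0) is True
```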
②人脸识别示例
电子设备10可预存用户A和B,以及A对应的人脸特征和B对应的人脸特征。对于每个用户来说,电子设备10(如手机)对应一个用户桌面。如果电子设备10经过人脸识别输出分析结果指示“A的人脸图像”,电子设备10处于锁定状态,电子设备10在解锁后显示A对应的用户桌面。在本申请的一些实施例中,电子设备10执行以下步骤来解锁并显示用户桌面。
步骤101、电子设备10对摄像头100采集的图像数据进行人脸识别,输出分析结果指示“A的人脸图像”。
具体的,电子设备10将摄像头100采集的图像数据的特征分别与A的人脸特征和B的人脸特征进行匹配,摄像头100采集的图像数据的特征与A的人脸特征匹配成功,则输出分析结果指示“A的人脸图像”。摄像头100采集的图像数据的特征与B的人脸特征匹配成功, 则输出分析结果指示“B的人脸图像”。
其中,用户A可以是第一用户,用户B可以是第二用户。
步骤102、当电子设备10进行人脸识别结果指示“A的人脸图像”时,电子设备10调用高色彩深度且拍摄图像具有高分辨率的摄像头200采集人脸图像数据。
其中,摄像头100是常开的,摄像头200传感器输出的图像数据具有高色彩深度且拍摄图像具有高分辨率,在被调用时才开启并采集图像数据。当电子设备10进行人脸识别结果指示“A的人脸图像”时,电子设备10如果在锁定状态,再调用高色彩深度且拍摄图像具有高分辨率的摄像头200执行人脸解锁。其中,电子设备10处于锁定状态时,对用户操作电子设备的权限进行限定。在解锁电子设备10之后,用户才能够对电子设备10进行操作。解除锁定状态后,电子设备可以响应用户的操作来调用各类应用,例如“相机”、“音乐”和“视频”等APP。
步骤103、电子设备10根据摄像头200采集的人脸图像数据进行人脸识别。
步骤104、如果根据摄像头200采集的人脸图像数据进行人脸识别成功,则解锁电子设备10,显示A对应的用户桌面。
本申请实施例中,根据摄像头200采集的人脸图像数据进行人脸识别的过程可类比摄像头100,这里不再赘述。即当电子设备10通过摄像头200采集的人脸图像进行人脸识别得到结果指示“A的人脸图像”,表明根据摄像头200采集的人脸图像数据进行人脸识别成功。
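步骤101~步骤104的两级人脸识别解锁流程,可概括为如下示意代码(用户标识、回调函数与返回值均为帮助理解的假设,并非本申请实施例的实现):

```python
def unlock_flow(low_res_match, high_res_verify) -> str:
    """示意:常开摄像头100初识别 + 摄像头200复核的两级解锁流程。

    low_res_match 为摄像头100识别出的用户标识("A"/"B"/None);
    high_res_verify 为调用高色彩深度、高分辨率摄像头200进行
    二次人脸识别的函数,返回识别是否成功。
    """
    if low_res_match is None:
        return "locked"                     # 未识别到已注册用户,保持锁定
    if high_res_verify(low_res_match):      # 调用摄像头200复核
        return f"desktop_{low_res_match}"   # 解锁并显示对应用户桌面
    return "locked"                         # 复核失败,保持锁定

assert unlock_flow("A", lambda u: True) == "desktop_A"
assert unlock_flow("A", lambda u: False) == "locked"
```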
上述使用常开摄像头100和摄像头200配合实现电子设备10解锁的方案中,利用常开的摄像头100感知人脸图像并识别用户,从而可提升电子设备人脸图像识别的能力,也可保证使用同一电子设备10的用户A和B之间信息的隔离性,提高用户解锁电子设备的便利性。
另外,利用摄像头100执行人脸识别成功后,再调用具有高色彩深度且拍摄图像具有高分辨率的摄像头200再次执行人脸识别,识别成功后才解锁电子设备,可提高人脸解锁的可靠性和安全性。
将摄像头100常开采集图像数据,而与摄像头200相比,摄像头100具有低色彩深度且拍摄图像具有低分辨率,携带更少的信息。从而减少了常开摄像头拍摄的图像丰富的信息被泄露对用户信息的安全性和私密性造成的危害,提高用户信息的安全性。另外,使用摄像头200进行人脸识别以解锁电子设备,摄像头200具有高色彩深度且拍摄图像具有高分辨率,可在保证用户信息安全性的同时提升人脸识别的准确性。
本申请实施例中,具有低色彩深度且拍摄的图像具有低分辨率的摄像头100可以对人脸进行识别,该人脸识别用于用户对安全要求不高的场景,例如非支付场景。具体的如步骤102~步骤104示出的根据人脸识别结果显示对应用户桌面的场景。再例如后文中,根据人脸识别结果播放老人或小孩对应的播放列表的场景。摄像头100能识别一些差异度比较明显的人脸。例如图5示出示例中区分出圆脸、瓜子脸或者国字脸等示例。
示例性的,请参阅图5,图5是本申请实施例提供的一种摄像头100拍摄的图像示意图。如图5所示,由于摄像头100具有低色彩深度且拍摄图像具有低分辨率,摄像头100采集的人脸图像通过图像识别后仅能够识别出人脸1200,并区分出圆脸、瓜子脸或者国字脸。可选的,还可以区分出区别较大的五官特征,例如可区分出浓眉和淡眉,大眼睛和小眼睛等。但是电子设备10通过图像识别后无法区分细节特征,例如是否双眼皮。
可选的,电子设备10通过图像识别后也无法识别所处环境1300,或者仅能确定所处环境1300为室内或者室外。
与摄像头200相比,上述摄像头100具有低色彩深度,且其像敏单元阵列使得拍摄的图像具有低分辨率,携带更少的信息。与现有的摄像头拍摄的图像具有丰富的信息相比,使用摄像头100拍摄图像进行人脸识别、环境检测等,减少了常开摄像头拍摄的图像被泄露对用户信息的安全性和私密性造成的危害,提高用户信息的安全性。
不限于上述人脸识别的示例,电子设备10为智能音箱时,智能音箱可针对用户A和用户B分别设置播放列表。例如A为老人,B为小孩,由于摄像头100具有低色彩深度且拍摄图像具有低分辨率,电子设备10通过摄像头100采集的人脸图像进行图像识别后仅能够区分出人脸图像1200是老人还是小孩。电子设备10通过对摄像头100采集的图像数据进行识别,当识别结果指示“老人的人脸图像”时,音箱可播放老人对应的播放列表,例如戏曲列表。电子设备10通过对摄像头100采集的图像进行识别,当识别结果指示“小孩的人脸图像”时,音箱可播放小孩对应的播放列表,例如儿歌列表。
③手势检测示例
电子设备10可储存手势对应的特征,具体的,可存储手掌展开到握紧这一手势对应的特征并存储手掌握紧到展开这一手势对应的特征。在电子设备10进行语音通话过程中,将摄像头100采集的图像数据的特征与手掌展开到握紧这一手势对应的特征进行匹配。匹配成功则输出分析结果指示:手掌展开到握紧手势。如果检测到当前电子设备10未被设置为静音,即当前电子设备的麦克风未被关闭,则可根据该分析结果设置电子设备10为静音。类似的,电子设备10将摄像头100采集的图像数据的特征与手掌握紧到展开这一手势对应的特征进行匹配。匹配成功则输出分析结果指示:手掌握紧到展开手势。如果检测到当前电子设备10被设置为静音,即当前电子设备10的麦克风被关闭,则可根据该分析结果关闭电子设备10为静音,开启麦克风。
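上述通话中通过手势切换静音的逻辑,可概括为如下示意代码(手势标识为本文示例的假设):

```python
def apply_gesture(gesture: str, muted: bool) -> bool:
    """示意:语音通话中根据手势检测结果切换静音状态。

    "open_to_fist" 表示手掌展开到握紧,"fist_to_open" 表示
    手掌握紧到展开;仅在状态需要改变时生效,返回新的静音状态。
    """
    if gesture == "open_to_fist" and not muted:
        return True    # 设置静音,关闭麦克风
    if gesture == "fist_to_open" and muted:
        return False   # 取消静音,开启麦克风
    return muted       # 其他情况保持当前状态不变

assert apply_gesture("open_to_fist", False) is True
assert apply_gesture("fist_to_open", True) is False
```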
④环境检测示例
电子设备10根据摄像头100采集的图像数据进行环境检测。示例性的,电子设备10可储存环境模板对应的特征。环境模板可包括车内驾驶位环境。电子设备10将摄像头100采集的图像数据的特征与车内驾驶位环境的特征进行匹配。匹配成功则输出分析结果指示:车内驾驶位环境。当检测结果指示“车内驾驶位环境”时,如果检测到当前电子设备10未被设置为驾驶模式,则电子设备10可根据该分析结果设置为驾驶模式。当检测结果指示“车内驾驶位环境”时,如果电子设备10检测到当前已被设置为驾驶模式时,则结束。
本申请实施例中,不同的环境检测场景,对摄像头传感器输出的图像数据的色彩深度和拍摄图像的分辨率要求不同。如果对摄像头色彩深度和拍摄图像的分辨率要求高,例如需要使用电子设备10识别出室内天花板某个位置,从而实现在电子设备10被放置桌面上摄像头拍摄到天花板该位置上时,电子设备10提示天气情况。摄像头100传感器输出的图像数据的色彩深度和拍摄图像的分辨率可较高,分辨率为200×200,色彩深度为4位。如果对摄像头传感器输出的图像数据的色彩深度和拍摄图像的分辨率要求较低,例如仅要求摄像头能够进行人脸检测,以实现用户观看电子设备显示屏时保持显示屏常亮。摄像头100传感器输出的图像数据的色彩深度和拍摄图像的分辨率可较低,分辨率为144×96或者128×96,色彩深度为3位。
在一种可实现方式中,一方面,电子设备10可通过摄像头100检测一些明显、典型的场景,例如室内环境、户外环境以及前述车内驾驶环境等。另一方面,与图1所示摄像头200相比,该摄像头100可仅检测明显、典型的场景,不会一直对用户的准确、精细的环境和用户的人脸细节信息进行检测和跟踪。从而实现既可以识别一些场景,又不会检测和跟踪用户的准确信息,减少了用户隐私信息的泄露。
可以理解的,上述对人脸检测、人脸识别、手势检测和环境检测的举例仅用于解释本申请实施例,不应构成限定。上述人脸检测、人脸识别、手势检测和环境检测均为图像识别。电子设备10还可根据识别结果执行其他的操作,本申请实施例对此不作限定。
在本申请的一些实施例中,电子设备还可实施为智能摄像头、包含摄像头的智能家用电器(冰箱、台灯、空气净化器等)等。下面以智能摄像头用于家庭监控的场景进行举例。
请参阅图6,图6是本申请实施例提供的一种智能摄像头的硬件模块600和软件模块700的结构示意图。该摄像头可应用在家庭监控的场景。如图6所示,智能摄像头包含硬件模块600和软件模块700。硬件模块600包含镜头组610、传感器620和图像信号处理单元630。
其中,本申请实施例中的智能摄像头具有低色彩深度且拍摄的图像具有低分辨率。示例性的,智能摄像头拍摄图像的分辨率为144×96,色彩深度为4位,每个像素可输出白或三基色(红、绿、蓝)中某一种的16个等级(2的4次方)。本申请实施例中,传感器620使得智能摄像头具有低色彩深度且拍摄图像具有低分辨率。
本申请实施例中,关于镜头组610、传感器620和图像信号处理单元630可参考图3所示示例中镜头组410、传感器420和图像信号处理单元430的描述,这里不再赘述。
如图6所示,软件模块700,接收来自图像处理单元630的图像数据,来进行人体检测、姿态检测等。其中,人体检测用来确定硬件模块600采集的图像数据是否为人体图像数据。姿态检测用于识别硬件模块600检测到的人体图像数据中人体的姿态,例如摔倒、起立、躺下等姿态。软件模块700在进行人体检测、姿态检测后输出分析结果,智能摄像头根据分析结果执行操作。下面分别对人体检测、姿态检测举例说明。
a.人体检测示例
智能摄像头将采集的图像数据的特征与预存的人体特征进行匹配,匹配成功则输出分析结果指示:人体图像。如果用户设置了智能摄像头检测到人体图像则向用户手机发送报警消息,则当检测结果指示“人体图像”时,智能摄像头可根据该分析结果向用户手机发送报警消息。其中,智能摄像头可存储有第一指令,第一指令指示检测到人体图像则向终端(用户手机)发送报警消息。
b.姿态检测示例
智能摄像头将采集的图像数据的特征与预存的人体姿态的特征进行匹配,如果与摔倒姿态匹配成功则输出分析结果指示:摔倒姿态。如果用户设置了智能摄像头检测到人体摔倒姿态则向用户手机发送报警消息并拨打紧急电话,则当检测结果指示“摔倒姿态”时,智能摄像头可根据该分析结果向用户手机发送报警消息或者拨打紧急电话。其中,智能摄像头可存储有第二指令,第二指令指示检测到摔倒姿态则向终端(用户手机)发送报警消息,或者拨打紧急电话。
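上述人体检测与姿态检测示例中,智能摄像头根据检测结果和已存储的第一指令/第二指令决定执行的报警动作,可用如下示意代码概括(检测结果与动作的标识符均为帮助理解的假设):

```python
def handle_detection(result: str, instructions: set) -> list:
    """示意:智能摄像头根据检测结果与存储的指令决定执行的动作。

    instructions 中 "first" 表示存储了第一指令(检测到人体图像
    则向终端发送报警消息),"second" 表示存储了第二指令(检测到
    摔倒姿态则发送报警消息并拨打紧急电话)。
    """
    actions = []
    if result == "human" and "first" in instructions:
        actions.append("send_alarm")               # 向用户手机发送报警消息
    if result == "fall" and "second" in instructions:
        actions.extend(["send_alarm", "dial_emergency"])  # 报警并拨打紧急电话
    return actions

assert handle_detection("human", {"first"}) == ["send_alarm"]
assert handle_detection("fall", {"second"}) == ["send_alarm", "dial_emergency"]
```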
上述使用智能摄像头执行家庭监控的示例中,与现有的摄像头相比,常开的智能摄像头具有低分辨率和低色彩深度,可减少拍摄的图像上丰富的信息被泄露对用户信息的安全性和私密性造成的危害,提高用户信息的安全性。
可以理解的,智能摄像头不限于执行人体检测和姿势检测,还可以用于其他检测。上述对智能摄像头进行人体检测和姿势检测的举例仅用于解释本申请实施例,不应构成限定。智能摄像头还可根据检测结果执行其他的操作,本申请实施例对此不作限定。
请参阅图7,图7是本申请实施例提供的一种智能摄像头的结构示意图。如图7所示,该智能摄像头20包括硬件模块600、一个或多个处理器800、存储器900、通信接口1000,硬件模块600、处理器800、存储器900、通信接口1000可通过总线或者其它方式连接,本申请实施例以通过总线1100连接为例。硬件模块600可以是图6所描述示例中的硬件模块600。其中:
处理器800可以由一个或者多个通用处理器构成,例如CPU。
通信接口1000可以为有线接口(例如以太网接口)或无线接口(例如蜂窝网络接口或使用无线局域网接口),用于与其他节点进行通信。本申请实施例中,通信接口1000具体可用于与手机等电子设备进行通信。
存储器900可以包括易失性存储器(volatile memory),例如RAM;存储器也可以包括非易失性存储器(non-volatile memory),例如ROM、快闪存储器(flash memory)、HDD或固态硬盘SSD;存储器900还可以包括上述种类的存储器的组合。存储器900可用于存储一组程序代码,以便于处理器800调用存储器900中存储的程序代码以实现图6所示示例中的软件模块700。
需要说明的,图7所示的智能摄像头仅仅是本申请实施例的一种实现方式,实际应用中,智能摄像头还可以包括更多或更少的部件,这里不作限制。
可选的,不限于上述智能摄像头,上述低分辨率、低色彩深度的摄像头还可以应用在包含摄像头的音箱、车载摄像头或者包含摄像头的智能家居设备(如智能冰箱),本申请实施例对此不作限制。则上述音箱、车载摄像头和智能家居设备的结构可参考图7所示示例。且对于音箱来说,还包括扬声器。对于智能冰箱来说,还包括制冷系统。还可以包括其他更多或更少的部件,这里不作限制。
本申请实施例中,电子设备可包括上述智能摄像头、音箱、车载摄像头或者包含摄像头的智能家居设备。
请参阅图8,图8是本申请实施例提供的一种图像识别方法的流程示意图。如图8所示,该图像识别方法包括S101~S102。
S101、电子设备调用第一摄像头采集第一图像数据。
其中,第一摄像头包含第一传感器,第一传感器包含第一像敏单元阵列,第一像敏单元阵列中像敏单元的数量小于或等于40000,第一传感器输出的图像数据的色彩深度为第一色彩深度,第一色彩深度小于8位。其中,第一摄像头可以是图1所示的摄像头100,其结构可参阅图3所示示例。
其中,第一色彩深度为4位,所述像敏单元阵列横向上和纵向上所述像敏单元的最大数量分别为128和96,或者所述像敏单元阵列横向上和纵向上所述像敏单元的最大数量分别为144和96。
在本申请的一些实施例中,第一摄像头在电子设备工作时,一直开启。
S102、电子设备对第一图像数据进行图像识别,并根据识别结果和当前的状态,执行相关操作。
上述的图像识别方法中,与第二摄像头相比,第一摄像头具有低色彩深度且拍摄图像具有低分辨率,携带更少的信息。从而减少了第一摄像头拍摄的图像丰富的信息被泄露对用户信息的安全性和私密性造成的危害,提高用户信息的安全性。
在本申请的一些实施例中,当前的状态为电子设备处于锁定状态;电子设备执行相关操作,包括:电子设备调用第二摄像头进行人脸解锁。关于上述过程,可参考前述人脸检测的示例的具体描述,这里不再赘述。将第一摄像头常开采集图像数据,而与第二摄像头相比,第一摄像头具有低色彩深度且拍摄图像具有低分辨率,携带更少的信息。从而减少了常开摄像头拍摄的图像丰富的信息被泄露对用户信息的安全性和私密性造成的危害,提高用户信息的安全性。使用第二摄像头进行人脸识别以解锁电子设备,第二摄像头具有高色彩深度且拍摄图像具有高分辨率,可在保证用户信息安全性的同时提升人脸识别的准确性。
其中:第二摄像头包含第二传感器,第二传感器包含第二像敏单元阵列,第二像敏单元阵列中像敏单元的数量大于第一像敏单元阵列中像敏单元的数量,第二传感器输出的图像数据的色彩深度为第二色彩深度,第二色彩深度大于第一色彩深度。第二摄像头可以是图1所示示例中的摄像头200。
在本申请的一些实施例中,第一像敏单元阵列的面积和第二像敏单元阵列的面积相同的情况下,第一像敏单元阵列上像敏单元的密度小于第二像敏单元阵列上像敏单元的密度;第一像敏单元阵列上像敏单元的密度和第二像敏单元阵列上像敏单元的密度相同的情况下,第一像敏单元阵列的面积小于第二像敏单元阵列的面积。
在本申请的另一些实施例中,第一传感器包含第一模数转换电路,第一模数转换电路中模拟侧电路的检测精度与第一色彩深度相关;第二摄像头包含第二传感器,第二传感器包含第二模数转换电路,第二传感器输出的图像数据的色彩深度为第二色彩深度,第二模数转换电路中模拟侧电路的检测精度与第二色彩深度相关;第一模数转换电路中模拟侧电路的检测精度小于第二模数转换电路中模拟侧电路的检测精度,第一色彩深度小于第二色彩深度。关于第一传感器的描述可以参考图3所示传感器420的描述,这里不再赘述。
在本申请的一些实施例中,识别结果为检测到人脸图像,当前的状态为电子设备处于亮屏状态;电子设备执行相关操作,包括:将电子设备的显示屏设置为常亮状态。关于上述过程,可参考前述人脸检测的示例的具体描述,这里不再赘述。上述方法中,常开的第一摄像头可实时检测人脸图像,检测到人脸图像则设置显示屏常亮,从而为用户带来便利性。而与第二摄像头相比,该常开的第一摄像头具有低色彩深度且拍摄图像具有低分辨率,携带更少的信息。从而减少了第一摄像头拍摄的图像丰富的信息被泄露对用户信息的安全性和私密性造成的危害,提高用户信息的安全性。
在本申请的一些实施例中,识别结果为检测到第一用户的人脸图像,当前的状态为电子设备处于锁定状态;电子设备执行相关操作,包括:电子设备调用第二摄像头进行人脸识别,识别成功则显示第一用户对应的用户桌面。关于上述过程,可参考前述人脸识别示例的具体描述,这里不再赘述。
上述使用第一摄像头和第二摄像头配合实现电子设备解锁的方案中,利用常开的第一摄像头感知人脸图像并识别用户,从而可提升电子设备人脸图像识别的能力,也可保证使用同一电子设备的用户之间信息的隔离性,提高用户解锁电子设备的便利性。另外,利用第一摄像头执行人脸识别成功后,再调用具有高色彩深度且拍摄图像具有高分辨率的第二摄像头再次执行人脸识别,识别成功后才解锁电子设备,可提高人脸解锁的可靠性和安全性。
在本申请的一些实施例中,当前状态为电子设备显示应用界面;识别结果为检测到人脸图像的上方向与应用界面的上方向之间角度偏差大于或等于第一阈值;电子设备执行相关操作,包括:电子设备调整应用界面的上方向,将应用界面的上方向与人脸图像的上方向之间角度偏差调整到小于或等于第一阈值。关于上述过程,可参考前述人脸识别示例的具体描述,这里不再赘述。
上述方法中,常开的第一摄像头可实时检测人脸图像的上方向和应用界面的上方向之间的夹角,根据该夹角进行横竖屏切换,从而为用户带来便利性。而与第二摄像头相比,该常开的第一摄像头具有低色彩深度且拍摄图像具有低分辨率,携带更少的信息。从而减少了第一摄像头拍摄的图像丰富的信息被泄露对用户信息的安全性和私密性造成的危害,提高用户信息的安全性。
在本申请的一些实施例中,识别结果为检测到车内驾驶位环境,当前的状态为电子设备处于工作状态;电子设备执行相关操作,包括:电子设备开启驾驶模式,驾驶模式下电子设备能够播报来电信息和消息,且能够响应用户语音执行相关操作。关于上述过程,可参考前述环境检测示例的具体描述,这里不再赘述。
上述方法中,常开的第一摄像头可实时检测当前所处的环境,根据该环境为驾驶位环境切换为驾驶模式,从而为用户带来便利性。而与第二摄像头相比,该常开的第一摄像头具有低色彩深度且拍摄图像具有低分辨率,携带更少的信息。从而减少了第一摄像头拍摄的图像丰富的信息被泄露对用户信息的安全性和私密性造成的危害,提高用户信息的安全性。
在本申请的一些实施例中,识别结果为检测到人体图像,当前的状态为电子设备存储有第一指令,第一指令指示检测到人体图像则向终端发送报警消息;电子设备执行相关操作,包括:电子设备向终端发送报警消息。关于上述过程,可参考前述人体检测示例的具体描述,这里不再赘述。
在本申请的一些实施例中,识别结果为检测到摔倒姿态,当前的状态为电子设备存储有第二指令,第二指令指示检测到摔倒姿态则向终端发送报警消息;电子设备执行相关操作,包括:电子设备向终端发送报警消息。关于上述过程,可参考前述姿态检测示例的具体描述,这里不再赘述。
上述方法中,在家庭安防场景下常开的第一摄像头可实时检测人体图像,检测到人体图像则根据设置报警,或者检测到摔倒姿态则根据设置报警,从而为用户带来便利性。与现有的摄像头相比,该常开的第一摄像头具有低色彩深度且拍摄图像具有低分辨率,携带更少的信息。从而减少了第一摄像头拍摄的图像丰富的信息被泄露对用户信息的安全性和私密性造成的危害,提高用户信息的安全性。
本申请实施例还提供一种电子设备,包括:一个或多个处理器、存储器和第一摄像头,其中:所述第一摄像头包含第一传感器,所述第一传感器包含第一像敏单元阵列,所述第一像敏单元阵列中像敏单元的数量小于或等于40000,所述第一传感器输出的图像数据的色彩深度为第一色彩深度,所述第一色彩深度小于8位;所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,当所述一个或多个处理器执行所述计算机指令时,使得所述电子设备执行图8所述的图像识别方法。
其中,该电子设备可以是图1或者图2所示出的电子设备。
本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当其在计算机或处理器上运行时,使得计算机或处理器执行上述任一个方法中的一个或多个步骤。
本申请实施例还提供了一种包含指令的计算机程序产品。当该计算机程序产品在计算机或处理器上运行时,使得计算机或处理器执行上述任一个方法中的一个或多个步骤。
在上述实施例中,全部或部分功能可以通过软件、硬件、或者软件加硬件的组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如,固态硬盘(solid state disk,SSD))等。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,该流程可以由计算机程序来指令相关的硬件完成,该程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。而前述的存储介质包括:ROM或随机存储记忆体RAM、磁碟或者光盘等各种可存储程序代码的介质。

Claims (14)

  1. 一种图像识别方法,其特征在于,所述方法包括:
    电子设备调用第一摄像头采集第一图像数据;
    所述第一摄像头包含第一传感器,所述第一传感器包含第一像敏单元阵列,所述第一像敏单元阵列中像敏单元的数量小于或等于40000,所述第一传感器输出的图像数据的色彩深度为第一色彩深度,所述第一色彩深度小于8位。
  2. 根据权利要求1所述的方法,其特征在于,所述第一摄像头在所述电子设备工作时,一直开启。
  3. 根据权利要求1或2所述的方法,其特征在于,所述电子设备调用第一摄像头采集第一图像数据之后,所述方法还包括:
    所述电子设备对所述第一图像数据进行图像识别,并根据识别结果和当前的状态,执行相关操作。
  4. 根据权利要求1至3任一项所述的方法,其特征在于,所述第一色彩深度为4位,所述像敏单元阵列横向上和纵向上所述像敏单元的最大数量分别为128和96,或者所述像敏单元阵列横向上和纵向上所述像敏单元的最大数量分别为144和96。
  5. 根据权利要求3所述的方法,其特征在于,所述识别结果为检测到人脸图像,所述当前的状态为所述电子设备处于锁定状态;所述电子设备执行相关操作,包括:
    所述电子设备调用第二摄像头进行人脸解锁;
    其中:所述第二摄像头包含第二传感器,所述第二传感器包含第二像敏单元阵列,所述第二像敏单元阵列中所述像敏单元的数量大于所述第一像敏单元阵列中所述像敏单元的数量,所述第二传感器输出的图像数据的色彩深度为第二色彩深度,所述第二色彩深度大于所述第一色彩深度。
  6. 根据权利要求3所述的方法,其特征在于,所述识别结果为检测到人脸图像,所述当前的状态为所述电子设备处于亮屏状态;所述电子设备执行相关操作,包括:
    将所述电子设备的显示屏设置为常亮状态。
  7. 根据权利要求3所述的方法,其特征在于,所述识别结果为检测到第一用户的人脸图像,所述当前的状态为所述电子设备处于锁定状态;所述电子设备执行相关操作,包括:
    所述电子设备调用第二摄像头进行人脸识别,识别成功则显示所述第一用户对应的用户桌面;
    其中:所述第二摄像头包含第二传感器,所述第二传感器包含第二像敏单元阵列,所述第二像敏单元阵列中所述像敏单元的数量大于所述第一像敏单元阵列中所述像敏单元的数量,所述第二传感器输出的图像数据的色彩深度为第二色彩深度,所述第二色彩深度大于所述第一色彩深度。
  8. 根据权利要求3所述的方法,其特征在于,所述当前状态为所述电子设备显示应用界面;所述识别结果为检测到人脸图像的上方向与所述应用界面的上方向之间角度偏差大于或等于第一阈值;所述电子设备执行相关操作,包括:
    所述电子设备调整所述应用界面的上方向,将所述应用界面的上方向与所述人脸图像的上方向之间角度偏差调整到小于或等于所述第一阈值。
  9. 根据权利要求3所述的方法,其特征在于,所述识别结果为检测到车内驾驶位环境,所述当前的状态为所述电子设备处于工作状态;所述电子设备执行相关操作,包括:
    所述电子设备开启驾驶模式,所述驾驶模式下所述电子设备能够播报来电信息和消息,且能够响应用户语音执行相关操作。
  10. 根据权利要求5或7所述的方法,其特征在于,所述第一像敏单元阵列的面积和所述第二像敏单元阵列的面积相同的情况下,所述第一像敏单元阵列上像敏单元的密度小于所述第二像敏单元阵列上像敏单元的密度;
    所述第一像敏单元阵列上所述像敏单元的密度和所述第二像敏单元阵列上所述像敏单元的密度相同的情况下,所述第一像敏单元阵列的面积小于所述第二像敏单元阵列的面积。
  11. 根据权利要求1至10任一项所述的方法,其特征在于,所述第一传感器包含第一模数转换电路,所述第一模数转换电路中模拟侧电路的检测精度与所述第一色彩深度相关;
    所述电子设备包括第二摄像头,所述第二摄像头包含第二传感器,所述第二传感器包含第二模数转换电路,所述第二传感器输出的图像数据的色彩深度为第二色彩深度,所述第二模数转换电路中模拟侧电路的检测精度与所述第二色彩深度相关;
    所述第一模数转换电路中模拟侧电路的检测精度小于所述第二模数转换电路中模拟侧电路的检测精度,所述第一色彩深度小于所述第二色彩深度。
  12. 根据权利要求3所述的方法,其特征在于,所述识别结果为检测到人体图像,所述当前的状态为所述电子设备存储有第一指令,所述第一指令指示检测到所述人体图像则向终端发送报警消息;所述电子设备执行相关操作,包括:
    所述电子设备向所述终端发送所述报警消息。
  13. 根据权利要求3所述的方法,其特征在于,所述识别结果为检测到摔倒姿态,所述当前的状态为所述电子设备存储有第二指令,所述第二指令指示检测到所述摔倒姿态则向终端发送报警消息;所述电子设备执行相关操作,包括:
    所述电子设备向所述终端发送所述报警消息。
  14. 一种电子设备,其特征在于,所述电子设备包括:一个或多个处理器、存储器和第一摄像头,其中:
    所述第一摄像头包含第一传感器,所述第一传感器包含第一像敏单元阵列,所述第一像敏单元阵列中像敏单元的数量小于或等于40000,所述第一传感器输出的图像数据的色彩深度为第一色彩深度,所述第一色彩深度小于8位;
    所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,当所述一个或多个处理器执行所述计算机指令时,使得所述电子设备执行如权利要求1至13中任一项所述的图像识别方法。
PCT/CN2020/083033 2019-04-11 2020-04-02 图像识别方法和电子设备 WO2020207328A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910289400.1A CN110119684A (zh) 2019-04-11 2019-04-11 图像识别方法和电子设备
CN201910289400.1 2019-04-11

Publications (1)

Publication Number Publication Date
WO2020207328A1 true WO2020207328A1 (zh) 2020-10-15

Family

ID=67520989

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/083033 WO2020207328A1 (zh) 2019-04-11 2020-04-02 图像识别方法和电子设备

Country Status (2)

Country Link
CN (1) CN110119684A (zh)
WO (1) WO2020207328A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597800A (zh) * 2020-11-24 2021-04-02 安徽天虹数码科技股份有限公司 一种录播系统中学生起坐动作的检测方法及系统

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110119684A (zh) * 2019-04-11 2019-08-13 华为技术有限公司 图像识别方法和电子设备
CN110758241B (zh) 2019-08-30 2022-03-11 华为技术有限公司 乘员保护方法及装置
CN110658906A (zh) * 2019-08-30 2020-01-07 华为技术有限公司 显示的方法及电子设备
CN111866393B (zh) * 2020-07-31 2022-01-14 Oppo广东移动通信有限公司 显示控制方法、装置及存储介质
CN114697474B (zh) * 2020-12-25 2023-07-21 Oppo广东移动通信有限公司 电子设备的控制方法、电子设备及计算机可读存储介质
CN112702527A (zh) * 2020-12-28 2021-04-23 维沃移动通信(杭州)有限公司 图像拍摄方法、装置及电子设备
CN112992796A (zh) * 2021-02-09 2021-06-18 深圳市众芯诺科技有限公司 一种智能视觉音箱芯片
CN113571538A (zh) * 2021-06-24 2021-10-29 维沃移动通信有限公司 像素结构、图像传感器、控制方法及装置、电子设备
CN116301362B (zh) * 2023-02-27 2024-04-05 荣耀终端有限公司 图像处理方法、电子设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080056539A1 (en) * 2006-09-01 2008-03-06 Handshot, Llc Method and system for capturing fingerprints, palm prints and hand geometry
CN102164231A (zh) * 2011-01-24 2011-08-24 南京壹进制信息技术有限公司 基于敏感信息的分辨率可动态调节的监控装置及方法
US20150172539A1 (en) * 2013-12-17 2015-06-18 Amazon Technologies, Inc. Distributing processing for imaging processing
CN107734129A (zh) * 2017-09-27 2018-02-23 广东欧珀移动通信有限公司 解锁控制方法及相关产品
CN108711054A (zh) * 2018-04-28 2018-10-26 Oppo广东移动通信有限公司 图像处理方法、装置、计算机可读存储介质和电子设备
CN110119684A (zh) * 2019-04-11 2019-08-13 华为技术有限公司 图像识别方法和电子设备

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109271916B (zh) * 2018-09-10 2020-09-18 Oppo广东移动通信有限公司 电子装置及其控制方法、控制装置和计算机可读存储介质

Also Published As

Publication number Publication date
CN110119684A (zh) 2019-08-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20788413

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20788413

Country of ref document: EP

Kind code of ref document: A1