WO2021237493A1 - Image processing method and device, camera assembly, electronic device, and storage medium - Google Patents

Image processing method and device, camera assembly, electronic device, and storage medium

Info

Publication number
WO2021237493A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
sub
infrared
camera module
light
Prior art date
Application number
PCT/CN2020/092507
Other languages
English (en)
French (fr)
Inventor
徐晶
刘霖
朱丹
Original Assignee
北京小米移动软件有限公司南京分公司
北京小米移动软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京小米移动软件有限公司南京分公司, 北京小米移动软件有限公司
Priority to US17/274,044 (US20230076534A1)
Priority to JP2020562134 (JP7321187B2)
Priority to EP20824074.7 (EP3941042A4)
Priority to CN202080001848.XA (CN114073063B)
Priority to KR1020217006592 (KR102458470B1)
Priority to PCT/CN2020/092507 (WO2021237493A1)
Publication of WO2021237493A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/128 Adjusting depth or disparity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/254 Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 Cameras or camera modules for generating image signals from different wavelengths
    • H04N23/11 Cameras or camera modules for generating image signals from visible and infrared light wavelengths
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56 Cameras or camera modules provided with illuminating means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/30 Transforming light or analogous information into electric information
    • H04N5/33 Transforming infrared radiation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0081 Depth or disparity estimation from stereoscopic image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45 Cameras or camera modules for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 Circuitry of solid-state image sensors for transforming different wavelengths into image signals
    • H04N25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 Arrangement of colour filter arrays characterised by the spectral characteristics of the filter elements
    • H04N25/131 Arrangement of colour filter arrays including elements passing infrared wavelengths
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 Circuitry of solid-state image sensors for transforming different wavelengths into image signals
    • H04N25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 Arrangement of colour filter arrays characterised by the spectral characteristics of the filter elements
    • H04N25/134 Arrangement of colour filter arrays based on three different wavelength filter elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 Circuitry of solid-state image sensors for transforming different wavelengths into image signals
    • H04N25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 Arrangement of colour filter arrays characterised by the spectral characteristics of the filter elements
    • H04N25/135 Arrangement of colour filter arrays based on four or more different wavelength filter elements

Definitions

  • the present disclosure relates to the field of image processing technology, and in particular to an image processing method and device, camera assembly, electronic equipment, and storage medium.
  • the depth cameras can include array camera modules, structured light modules, and TOF (Time-of-Flight) modules.
  • depth information can be obtained according to the working principle of each module.
  • the aforementioned camera array requires a separately provided depth camera, which occupies valuable space in the electronic device and is not conducive to the miniaturization and cost reduction of the electronic device.
  • the present disclosure provides an image processing method and device, camera assembly, electronic equipment, and storage medium.
  • a camera assembly including: a first camera module that senses light of a first waveband, a second camera module that senses light of the first waveband and light of a second waveband, an infrared light source that emits light of the second waveband, and a processor.
  • the first camera module is configured to generate a first image under the control of the processor
  • the infrared light source is used to emit light of the second waveband under the control of the processor;
  • the second camera module is configured to generate a second image under the control of the processor; the second image includes a Bayer sub-image generated by sensing light of the first waveband and an infrared sub-image generated by sensing light of the second waveband;
  • the processor is further configured to perform image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image.
  • the infrared light source includes at least one of the following: an infrared flood light source, a structured light source, or a TOF light source.
  • the field of view angles of the lenses in the first camera module and the second camera module are different.
  • an image processing method including:
  • acquiring a first image generated by the first camera module and a second image generated by the second camera module; the second image includes a Bayer sub-image generated by the second camera module sensing light of the first waveband and an infrared sub-image generated by sensing light of the second waveband;
  • performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image.
  • optionally, performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image includes: fusing the infrared sub-image and the first image to enhance the first image.
  • optionally, performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image includes: acquiring a visible light depth image according to the Bayer sub-image and the first image.
  • optionally, when the infrared light source includes a structured light source or a TOF light source, performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image includes: fusing depth data of the visible light depth image and the infrared sub-image to obtain a depth fusion image.
  • optionally, the method further includes:
  • in response to a zoom operation of a user, performing image zooming based on the first image and the Bayer sub-image.
  • an image processing device including:
  • the image acquisition module is used to acquire the first image generated by the first camera module and the second image generated by the second camera module; the second image includes a Bayer sub-image generated by the second camera module sensing light of the first waveband and an infrared sub-image generated by sensing light of the second waveband;
  • the image processing module is configured to perform image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image.
  • the image processing module includes:
  • the image enhancement unit is used to fuse the infrared sub-image and the first image to enhance the first image.
  • the image processing module includes:
  • the depth image acquisition unit is configured to acquire a visible light depth image according to the Bayer sub-image and the first image.
  • the image processing module includes:
  • the depth integration unit is used to fuse the depth data of the visible light depth image and the infrared sub-image to obtain a deep fusion image.
  • the device further includes:
  • the zoom module is configured to perform image zoom based on the first image and the Bayer sub-image in response to a user's zoom operation.
  • an electronic device including: the above camera assembly; a processor; and
  • a memory for storing a computer program executable by the processor;
  • the processor is configured to execute the computer program in the memory to implement the steps of any one of the foregoing methods.
  • a readable storage medium on which an executable computer program is stored, wherein the steps of any one of the above-mentioned methods are implemented when the computer program is executed.
  • the first camera module in the camera assembly in the embodiments of the present disclosure can acquire a first image and the second camera module can acquire a second image, and a Bayer sub-image and an infrared sub-image can be acquired from the second image, so that image processing can be performed on the first image and at least one of the Bayer sub-image and the infrared sub-image, such as obtaining a depth image; that is, the depth image can be obtained without setting a depth camera in the camera module array, which can reduce the volume of the camera assembly and the space it occupies in the electronic device, and is conducive to the miniaturization and cost reduction of the electronic device.
  • Fig. 1 is a block diagram showing a camera assembly according to an exemplary embodiment.
  • Fig. 2 is a diagram showing an application scenario according to an exemplary embodiment.
  • Fig. 3 is a schematic diagram showing obtaining a visible light depth image according to an exemplary embodiment.
  • Fig. 4 is a flow chart showing a method for acquiring depth data according to an exemplary embodiment.
  • Fig. 5 is a block diagram showing a device for acquiring depth data according to an exemplary embodiment.
  • Fig. 6 is a block diagram showing an electronic device according to an exemplary embodiment.
  • the depth cameras can include array camera modules, structured light modules and TOF (Time-of-Flight) modules.
  • depth information can be obtained according to the working principle of each module.
  • the aforementioned camera array requires a separately provided depth camera, which occupies valuable space in the electronic device and is not conducive to the miniaturization and cost reduction of the electronic device.
  • the embodiments of the present disclosure provide an image processing method and device, a camera assembly, an electronic device, and a storage medium.
  • Fig. 1 is a block diagram showing a camera assembly according to an exemplary embodiment.
  • a camera assembly may include: a first camera module 10, a second camera module 20, an infrared light source 30, and a processor 40;
  • the first camera module 10 can sense light of the first waveband, and the second camera module 20 can sense light of the first waveband and light of the second waveband.
  • the processor 40 is respectively connected to the first camera module 10, the second camera module 20 and the infrared light source 30.
  • the connection means that the processor 40 can send control instructions and obtain images from the camera modules (10, 20).
  • the specific implementation can be a communication bus, a cache, or wireless, which is not limited here.
  • the first camera module 10 is used to generate a first image under the control of the processor 40; the first image may be an RGB image.
  • the infrared light source 30 is used to emit light of the second waveband under the control of the processor 40.
  • the second camera module 20 is used to generate a second image under the control of the processor 40; the second image may include the Bayer sub-image generated by sensing the first waveband light and the infrared sub-image generated by sensing the second waveband light .
  • the processor 40 is configured to acquire the Bayer sub-image and the infrared sub-image according to the second image, and perform image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image.
  • the light of the first waveband may be light in the visible waveband, and the light of the second waveband may be light in the infrared waveband.
  • the first camera module 10 may include an image sensor that responds to light of the first waveband (such as visible light), a lens, an infrared filter, and other devices, and may further include devices such as a voice coil motor and a circuit substrate.
  • the image sensor can adopt a charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) sensor to respond to light of the first waveband; to facilitate the division of work between the first camera module 10 and the second camera module 20, the filter in the image sensor of the first camera module 10 may be a color filter array that only responds to the visible waveband, such as a Bayer template, a CYYM template, or a CYGM template.
  • the installation positions and working principles of the components of the first camera module 10 can be referred to related technologies, which will not be repeated here.
  • the second camera module 20 may include an image sensor that responds to both light of the first waveband (such as light in the visible waveband) and light of the second waveband (such as light in the infrared waveband), a lens, a visible light-near infrared band-pass filter, and other devices, and may further include devices such as a voice coil motor and a circuit substrate.
  • the image sensor that responds to both the light of the first waveband and the light of the second waveband can be realized by a CCD or CMOS sensor, and its filter can be a color filter array that responds to both visible light and infrared light, such as an RGBIR template or an RGBW template.
  • the installation positions and working principles of the components of the second camera module 20 can be referred to related technologies, which will not be repeated here.
  • the infrared light source 30 may include at least one of the following: an infrared flood light source, a structured light source, or a TOF (Time of Flight) light source.
  • the infrared flood light source works by increasing the infrared illumination brightness of objects within the viewing range;
  • the structured light source works by projecting specific light information onto the surface of objects and onto the background, and calculating information such as the position and depth of an object according to the change in the light signal caused by the object;
  • the TOF light source works by projecting infrared pulses into the viewing range and calculating the distance of an object based on the round-trip time of the infrared pulses.
  • the processor 40 may be implemented by a separate microprocessor, or may be implemented by a processor of an electronic device provided with the aforementioned camera component.
  • the processor 40 has the following two functions:
  • first, the processor 40 can receive operating signals from one or a combination of keys, microphones, and image sensors to control the first camera module 10, the second camera module 20, and the infrared light source 30.
  • for example, when the electronic device is in the normal photographing mode, the processor 40 can adjust parameters of the first camera module 10 such as focal length and brightness, and can control the first camera module 10 to capture an image when it detects that the user presses the shutter.
  • when the electronic device is in a mode such as panorama, HDR, or all-in-focus, or in a low-light scene, the processor 40 can turn on the infrared light source 30, adjust parameters such as focal length and brightness of the first camera module 10 and the second camera module 20, and, upon detecting that the user presses the shutter, control the first camera module 10 to capture a first image and control the second camera module 20 to capture a second image.
  • second, when a depth image is needed, the electronic device can process the first image and the second image to obtain a visible light depth image or a depth fusion image.
  • the processor may extract the Bayer sub-image corresponding to the first waveband light from the second image, and calculate the visible light depth image based on the first image and the Bayer sub-image.
  • the calculation process is as follows:
  • P is a point on the object to be measured (i.e., the shooting object) within the viewing range,
  • CR and CL are the optical centers of the first camera and the second camera, respectively,
  • the imaging points of point P on the photosensors of the two cameras are PR and PL, respectively (the imaging plane of each camera is drawn in front of the lens after rotation),
  • f is the focal length of the cameras,
  • B is the center distance between the two cameras, and
  • Z is the depth to be detected. Let the distance from point PR to point PL be D; then D = B - (XR - XL). According to the principle of similar triangles, [B - (XR - XL)] / B = (Z - f) / Z, which gives Z = fB / (XR - XL).
  • since the focal length f, the camera center distance B, the coordinate XR of point P in the right image plane, and the coordinate XL of point P in the left image plane can be obtained through calibration, the depth can be obtained once (XR - XL) is known.
  • f, B, XR, and XL can be determined through calibration, rectification, and matching; for the calibration, rectification, and matching work, reference may be made to the related art, which will not be repeated here.
  • the processor 40 may repeat the above steps to obtain the depth of all pixels in the first image, and obtain a visible light depth image.
  • the visible light depth image can be used in business scenarios such as large aperture, face/iris unlocking, face/iris payment, 3D beauty, studio light effects, and Animoji.
  • the angles of view of the lenses in the first camera module 10 and the second camera module 20 may be different, and the size relationship between the two is not limited.
  • the processor 40 may, in combination with the angles of view of the two cameras, crop images of corresponding sizes from the first image and the second image: for example, a larger frame is cropped from the Bayer sub-image extracted from the second image, and a smaller frame is cropped from the first image, that is, the image cropped from the Bayer sub-image of the second image is larger than the image cropped from the first image;
  • the images are then displayed one after another, so that a zoom effect can be achieved; that is, a shooting effect similar to optical zoom can be achieved in this embodiment, which helps improve the shooting experience.
  • the processor 40 may extract from the second image the infrared sub-image generated by sensing light of the second waveband. Since the high-frequency information in the frequency domain of the infrared sub-image is richer than the information in the frequency domain of the first image, the infrared sub-image and the first image can be fused; for example, the high-frequency information of the infrared sub-image is extracted and added to the frequency domain of the first image, so as to enhance the first image, which can make the fused first image richer in detail, higher in definition, and more accurate in color.
  • infrared sub-images can also be used for biometric functions in electronic devices, such as fingerprint unlocking and face recognition scenarios.
  • the processor 40 may also obtain infrared light depth data based on the infrared sub-image.
  • the processor 40 can control the infrared light source 30 to project a beam in a specific direction onto a shooting object such as an object or the background, and obtain parameters such as the intensity of the echo signal of the beam or the size of the light spot;
  • based on a preset correspondence between the parameters and distance, the processor 40 can obtain the infrared light depth data from the shooting object to the camera; compared with the visible light depth image, the infrared light depth data may include texture information of the shooting object such as an object or the background.
  • the processor 40 can choose to use the visible light depth image or the infrared light depth data according to the specific scene: for example, the visible light depth image can be used in high-light scenes (that is, scenes where the ambient brightness value is greater than a preset brightness value, such as daytime scenes), scenes where the shooting object is translucent, or scenes where the shooting object absorbs infrared light, while the infrared light depth data can be used in low-light scenes (that is, scenes where the ambient brightness value is less than the preset brightness value, such as night scenes), scenes where the shooting object is a textureless object, or scenes where the shooting object is a periodically repeating object;
  • the visible light depth image and the infrared light depth data can also be fused to obtain a depth fusion image, which can compensate for the respective defects of the visible light depth image and the infrared light depth data, is applicable to almost all scenes, and is especially suitable for scenes with poor lighting conditions, textureless objects, or periodically repeating objects, which helps improve the confidence of the depth data.
  • the processor 40 may also obtain infrared light depth data based on the infrared sub-image; compared with the visible light depth image, this data may include texture information of the shooting object such as an object or the background. For example, the processor 40 can control the TOF light source to project a beam in a specific direction onto an object or the background, and calculate the infrared light depth data from the object to the camera based on the time difference between the emission time of the beam and the return time of its echo signal.
  • in this case, the processor 40 can likewise choose to use the visible light depth image or the infrared light depth data according to the specific scene as described above, or fuse the visible light depth image and the infrared light depth data to obtain a depth fusion image, which can compensate for the defects of both, is applicable to almost all scenes, and is especially suitable for scenes with poor lighting conditions, textureless objects, or periodically repeating objects, which helps improve the confidence of the depth data.
  • the first camera module in the camera assembly in the embodiments of the present disclosure can acquire a first image and the second camera module can acquire a second image, and a Bayer sub-image and an infrared sub-image can be acquired from the second image.
  • image processing can thus be performed on the first image and at least one of the Bayer sub-image and the infrared sub-image, such as acquiring a depth image; that is, the depth image can be obtained without setting a depth camera in the camera module array, which can reduce the volume of the camera assembly and the space it occupies in the electronic device, and is conducive to the miniaturization and cost reduction of the electronic device.
  • FIG. 4 is a flowchart showing an image processing method according to an exemplary embodiment.
  • an image processing method, applied to the camera assembly provided in the foregoing embodiments, may include:
  • Step 41: acquiring a first image generated by the first camera module and a second image generated by the second camera module; the second image includes a Bayer sub-image generated by the second camera module sensing light of the first waveband and an infrared sub-image generated by sensing light of the second waveband;
  • Step 42: performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image.
  • in an embodiment, step 42, performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image, may include: fusing the infrared sub-image and the first image to enhance the first image.
  • in an embodiment, step 42, performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image, may include: acquiring a visible light depth image according to the Bayer sub-image and the first image.
  • in an embodiment, when the infrared light source includes a structured light source or a TOF light source, step 42, performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image, may include: fusing depth data of the visible light depth image and the infrared sub-image to obtain a depth fusion image.
  • in an embodiment, the method may further include: in response to a zoom operation of a user, performing image zooming based on the first image and the Bayer sub-image.
  • the embodiment of the present disclosure also provides an image processing device, referring to FIG. 5, which may include:
  • the image acquisition module 51 is configured to acquire a first image generated by a first camera module and a second image generated by a second camera module; the second image includes a Bayer sub-image generated by the second camera module sensing light of the first waveband and an infrared sub-image generated by sensing light of the second waveband;
  • the image processing module 52 is configured to perform image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image.
  • the image processing module 52 may include:
  • the image enhancement unit is used to fuse the infrared sub-image and the first image to enhance the first image.
  • the image processing module 52 may include:
  • the depth image acquisition unit is configured to acquire a visible light depth image according to the Bayer sub-image and the first image.
  • when the infrared light source includes a structured light source or a TOF light source, the image processing module includes:
  • the depth integration unit is used to fuse the depth data of the visible light depth image and the infrared sub-image to obtain a deep fusion image.
  • the device may further include:
  • the zoom module is configured to perform image zoom based on the first image and the Bayer sub-image in response to a user's zoom operation.
  • the device provided in the embodiments of the present disclosure corresponds to the above method embodiments; for specific content, reference may be made to the content of each method embodiment, which will not be repeated here.
  • Fig. 6 is a block diagram showing an electronic device according to an exemplary embodiment.
  • the electronic device 600 may be a smart phone, a computer, a digital broadcasting terminal, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
  • the electronic device 600 may include one or more of the following components: a processing component 602, a memory 604, a power supply component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, a communication component 616, and an image acquisition component 618.
  • the processing component 602 generally controls the overall operation of the electronic device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 602 may include one or more processors 620 to execute computer programs.
  • the processing component 602 may include one or more modules to facilitate the interaction between the processing component 602 and other components.
  • the processing component 602 may include a multimedia module to facilitate the interaction between the multimedia component 608 and the processing component 602.
  • the memory 604 is configured to store various types of data to support operations in the electronic device 600. Examples of such data include computer programs for any application or method operating on the electronic device 600, contact data, phone book data, messages, pictures, videos, etc.
  • the memory 604 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 606 provides power for various components of the electronic device 600.
  • the power supply component 606 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 600.
  • the power supply component 606 may include a power supply chip, and the controller may communicate with the power supply chip to control the power supply chip to turn the switching device on or off, so that the battery supplies or does not supply power to the motherboard circuit.
  • the multimedia component 608 includes a screen that provides an output interface between the electronic device 600 and the target object.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the target object.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor can not only sense the boundary of the touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the audio component 610 is configured to output and/or input audio signals.
  • the audio component 610 includes a microphone (MIC), and when the electronic device 600 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive an external audio signal.
  • the received audio signal can be further stored in the memory 604 or sent via the communication component 616.
  • the audio component 610 further includes a speaker for outputting audio signals.
  • the I/O interface 612 provides an interface between the processing component 602 and a peripheral interface module.
  • the above-mentioned peripheral interface module may be a keyboard, a click wheel, a button, and the like.
  • the sensor component 614 includes one or more sensors for providing the electronic device 600 with various aspects of state evaluation.
  • the sensor component 614 can detect the on/off state of the electronic device 600 and the relative positioning of components (for example, the display and keypad of the electronic device 600); the sensor component 614 can also detect a change in the position of the electronic device 600 or one of its components, the presence or absence of contact between the target object and the electronic device 600, the orientation or acceleration/deceleration of the electronic device 600, and temperature changes of the electronic device 600.
  • the communication component 616 is configured to facilitate wired or wireless communication between the electronic device 600 and other devices.
  • the electronic device 600 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
  • the communication component 616 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 616 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the image capture component 618 is configured to capture images.
  • the image acquisition component 618 can be implemented by using the camera component provided in the foregoing embodiment.
  • the electronic device 600 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components.
  • a non-transitory readable storage medium including an executable computer program, such as the memory 604 including instructions; the foregoing executable computer program can be executed by a processor.
  • the readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.

Abstract

The present disclosure relates to an image processing method and device, a camera assembly, an electronic device, and a storage medium. The assembly includes: a first camera module that senses light of a first waveband and generates a first image, a second camera module that senses light of the first waveband and light of a second waveband and generates a second image, an infrared light source that emits light of the second waveband, and a processor. The second image includes a Bayer sub-image and an infrared sub-image; the processor is connected to the first camera module, the second camera module, and the infrared light source respectively; and the processor is configured to perform image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image. In this embodiment, a depth image can be obtained without setting a depth camera in the camera module array, which can reduce the volume of the camera assembly and the space it occupies in the electronic device, and is conducive to the miniaturization and cost reduction of the electronic device.

Description

Image processing method and device, camera assembly, electronic device, and storage medium
Technical Field
The present disclosure relates to the field of image processing technology, and in particular to an image processing method and device, a camera assembly, an electronic device, and a storage medium.
Background
Conventional cameras can be used to record video or take photos, capturing the brightness and color information of a scene, but they cannot capture depth information. As application demands increase, depth cameras are currently added to the cameras of some electronic devices to form camera arrays, where the depth cameras can include array camera modules, structured light modules, and TOF (Time-of-Flight) modules, and depth information can be obtained according to the working principle of each module. However, the above camera arrays require a separately provided depth camera, which occupies valuable space in the electronic device and is not conducive to the miniaturization and cost reduction of the electronic device.
Summary
In view of this, the present disclosure provides an image processing method and device, a camera assembly, an electronic device, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a camera assembly, including: a first camera module that senses light of a first waveband, a second camera module that senses light of the first waveband and light of a second waveband, an infrared light source that emits light of the second waveband, and a processor; the processor is connected to the first camera module, the second camera module, and the infrared light source respectively;
the first camera module is configured to generate a first image under the control of the processor;
the infrared light source is configured to emit light of the second waveband under the control of the processor;
the second camera module is configured to generate a second image under the control of the processor; the second image includes a Bayer sub-image generated by sensing light of the first waveband and an infrared sub-image generated by sensing light of the second waveband;
the processor is further configured to perform image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image.
Optionally, the infrared light source includes at least one of the following: an infrared flood light source, a structured light source, or a TOF light source.
Optionally, the angles of view of the lenses in the first camera module and the second camera module are different.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing method, including:
acquiring a first image generated by a first camera module and a second image generated by a second camera module; the second image includes a Bayer sub-image generated by the second camera module sensing light of a first waveband and an infrared sub-image generated by sensing light of a second waveband;
performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image.
Optionally, performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image includes:
fusing the infrared sub-image and the first image to enhance the first image.
Optionally, performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image includes:
acquiring a visible light depth image according to the Bayer sub-image and the first image.
Optionally, when the infrared light source includes a structured light source or a TOF light source, performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image includes:
fusing depth data of the visible light depth image and the infrared sub-image to obtain a depth fusion image.
Optionally, the method further includes:
in response to a zoom operation of a user, performing image zooming based on the first image and the Bayer sub-image.
According to a third aspect of the embodiments of the present disclosure, there is provided an image processing device, including:
an image acquisition module, configured to acquire a first image generated by a first camera module and a second image generated by a second camera module; the second image includes a Bayer sub-image generated by the second camera module sensing light of a first waveband and an infrared sub-image generated by sensing light of a second waveband;
an image processing module, configured to perform image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image.
Optionally, the image processing module includes:
an image enhancement unit, configured to fuse the infrared sub-image and the first image to enhance the first image.
Optionally, the image processing module includes:
a depth image acquisition unit, configured to acquire a visible light depth image according to the Bayer sub-image and the first image.
Optionally, when the infrared light source includes a structured light source or a TOF light source, the image processing module includes:
a depth integration unit, configured to fuse depth data of the visible light depth image and the infrared sub-image to obtain a depth fusion image.
Optionally, the device further includes:
a zoom module, configured to, in response to a zoom operation of a user, perform image zooming based on the first image and the Bayer sub-image.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic device, including:
the above camera assembly;
a processor;
a memory for storing a computer program executable by the processor;
wherein the processor is configured to execute the computer program in the memory to implement the steps of any one of the above methods.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a readable storage medium on which an executable computer program is stored, wherein when the computer program is executed, the steps of any one of the above methods are implemented.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
It can be seen from the above embodiments that, in the embodiments of the present disclosure, the first camera module in the camera assembly can capture a first image and the second camera module can acquire a second image, and a Bayer sub-image and an infrared sub-image can be acquired from the second image; in this way, image processing can be performed on the first image and at least one of the Bayer sub-image and the infrared sub-image, such as acquiring a depth image. That is, a depth image can be obtained without setting a depth camera in the camera module array, which can reduce the volume of the camera assembly and the space it occupies in the electronic device, and is conducive to the miniaturization and cost reduction of the electronic device.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of the specification, show embodiments consistent with the present disclosure, and together with the specification serve to explain the principles of the present disclosure.
Fig. 1 is a block diagram showing a camera assembly according to an exemplary embodiment.
Fig. 2 is a diagram showing an application scenario according to an exemplary embodiment.
Fig. 3 is a schematic diagram showing the acquisition of a visible light depth image according to an exemplary embodiment.
Fig. 4 is a flowchart showing a method for acquiring depth data according to an exemplary embodiment.
Fig. 5 is a block diagram showing a device for acquiring depth data according to an exemplary embodiment.
Fig. 6 is a block diagram showing an electronic device according to an exemplary embodiment.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary description do not represent all embodiments consistent with the present disclosure; rather, they are merely examples of devices consistent with some aspects of the present disclosure as detailed in the appended claims.
Conventional cameras can be used to record video or take photos, capturing the brightness and color information of a scene, but they cannot capture depth information. As application demands increase, depth cameras are currently added to the cameras of some electronic devices to form camera arrays, where the depth cameras can include array camera modules, structured light modules, and TOF (Time-of-Flight) modules, and depth information can be obtained according to the working principle of each module. However, the above camera arrays require a separately provided depth camera, which occupies valuable space in the electronic device and is not conducive to the miniaturization and cost reduction of the electronic device.
To solve the above technical problem, the embodiments of the present disclosure provide an image processing method and device, a camera assembly, an electronic device, and a storage medium. The inventive concept is to provide, in the camera module array, a first camera module for sensing light of a first waveband and acquiring a first image, and a second camera module for sensing light of the first waveband and light of a second waveband and acquiring a second image; the processor can then perform image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image in the second image, for example, to acquire a depth image.
Fig. 1 is a block diagram showing a camera assembly according to an exemplary embodiment. Referring to Fig. 1, a camera assembly may include: a first camera module 10, a second camera module 20, an infrared light source 30, and a processor 40. The first camera module 10 can sense light of a first waveband, and the second camera module 20 can sense light of the first waveband and light of a second waveband. The processor 40 is connected to the first camera module 10, the second camera module 20, and the infrared light source 30 respectively. Here, being connected means that the processor 40 can send control instructions and acquire images from the camera modules (10, 20); the specific implementation may be a communication bus, a cache, or a wireless link, which is not limited here.
The first camera module 10 is configured to generate a first image under the control of the processor 40; the first image may be an RGB image.
The infrared light source 30 is configured to emit light of the second waveband under the control of the processor 40.
The second camera module 20 is configured to generate a second image under the control of the processor 40; the second image may include a Bayer sub-image generated by sensing light of the first waveband and an infrared sub-image generated by sensing light of the second waveband.
The processor 40 is configured to acquire the Bayer sub-image and the infrared sub-image from the second image, and perform image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image.
For example, in this embodiment, the light of the first waveband may be light in the visible waveband, and the light of the second waveband may be light in the infrared waveband.
In this embodiment, the first camera module 10 may include an image sensor that responds to light of the first waveband (such as the visible waveband), a lens, an infrared filter, and other devices, and may further include devices such as a voice coil motor and a circuit substrate. The image sensor may be implemented by a charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) sensor to respond to light of the first waveband. To facilitate the division of work between the first camera module 10 and the second camera module 20, in the image sensor of the first camera module 10, the filter may be a color filter array that only responds to the visible waveband, such as a Bayer template, a CYYM template, or a CYGM template. For the installation positions and working principles of the components of the first camera module 10, reference may be made to the related art, which will not be repeated here.
In this embodiment, the second camera module 20 may include an image sensor that responds to both light of the first waveband (such as light in the visible waveband) and light of the second waveband (such as light in the infrared waveband), a lens, a visible light-near infrared band-pass filter, and other devices, and may further include devices such as a voice coil motor and a circuit substrate. The image sensor that responds to both the light of the first waveband and the light of the second waveband may be implemented by a CCD or CMOS sensor, and its filter may be a color filter array that responds to both the visible waveband and the infrared waveband, such as an RGBIR template or an RGBW template. For the installation positions and working principles of the components of the second camera module 20, reference may be made to the related art, which will not be repeated here.
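To make the split of the second image concrete, the following is a minimal sketch in Python/NumPy. It assumes a hypothetical 2x2 R-G / G-IR mosaic tile; the actual CFA layout of an RGBIR or RGBW sensor is vendor-specific and is not specified by this disclosure, and the helper name split_rgbir is likewise illustrative.

```python
import numpy as np

def split_rgbir(raw: np.ndarray):
    """Split a single-channel RGBIR mosaic into a Bayer sub-image and an
    infrared sub-image. Assumes a hypothetical 2x2 tile:  R  G
                                                          G  IR
    so IR samples sit at the odd-row / odd-column sites."""
    ir_sub = raw[1::2, 1::2]  # quarter-resolution infrared sub-image
    bayer = raw.astype(np.uint32)
    # Rebuild the missing green at IR sites from the two adjacent greens so
    # the result can feed a standard Bayer demosaicing pipeline.
    bayer[1::2, 1::2] = (raw[1::2, 0::2].astype(np.uint32) +
                         raw[0::2, 1::2].astype(np.uint32)) // 2
    return bayer.astype(raw.dtype), ir_sub

raw = np.random.randint(0, 1024, (480, 640), dtype=np.uint16)  # mock 10-bit frame
bayer_sub, ir_sub = split_rgbir(raw)
print(bayer_sub.shape, ir_sub.shape)  # (480, 640) (240, 320)
```

In this sketch the Bayer sub-image keeps the full resolution (with interpolated green at the IR sites), while the infrared sub-image is a quarter-resolution plane, which matches the idea that both sub-images come from one exposure of one sensor.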
In this embodiment, the infrared light source 30 may include at least one of the following: an infrared flood light source, a structured light source, or a TOF (Time of Flight) light source. The infrared flood light source works by increasing the infrared illumination brightness of objects within the viewing range; the structured light source works by projecting specific light information onto the surface of objects and onto the background, and calculating information such as the position and depth of an object according to the change in the light signal caused by the object; the TOF light source works by projecting infrared pulses into the viewing range and calculating the distance of an object based on the round-trip time of the infrared pulses.
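For the TOF principle just described, the distance follows directly from the pulse round-trip time. A minimal sketch (the function name and the sample time are illustrative only):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance of an object from the round-trip time of an infrared pulse:
    the pulse covers the path twice, so d = c * t / 2."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

print(tof_distance(6.67e-9))  # ~1.0 m for a round trip of about 6.67 ns
```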
In this embodiment, the processor 40 may be implemented by a separately provided microprocessor, or by the processor of the electronic device equipped with the above camera assembly. The processor 40 has the following two functions:
First, it can receive operating signals from one or a combination of keys, microphones, and image sensors to control the first camera module 10, the second camera module 20, and the infrared light source 30. For example, when the electronic device is in the normal photographing mode, the processor 40 can adjust parameters of the first camera module 10 such as focal length and brightness, and can control the first camera module 10 to capture an image when it detects that the user presses the shutter. When the electronic device is in a mode such as panorama, HDR (High-Dynamic Range), or all-in-focus, or in a low-light scene, the processor 40 can turn on the infrared light source 30, adjust parameters such as focal length and brightness of the first camera module 10 and the second camera module 20, and, upon detecting that the user presses the shutter, control the first camera module 10 to capture a first image and control the second camera module 20 to capture a second image.
Second, when a depth image is needed, as shown in Fig. 2, the electronic device can process the first image and the second image to obtain a visible light depth image or a depth fusion image.
In an example, the processor may extract the Bayer sub-image corresponding to the light of the first waveband from the second image, and calculate a visible light depth image based on the first image and the Bayer sub-image. The calculation process is as follows:
Referring to Fig. 3, P is a point on the object to be measured (i.e., the shooting object) within the viewing range, CR and CL are the optical centers of the first camera and the second camera respectively, the imaging points of point P on the photosensors of the two cameras are PR and PL respectively (the imaging plane of each camera is drawn in front of the lens after rotation), f is the focal length of the cameras, B is the center distance between the two cameras, and Z is the depth to be detected. Let the distance from point PR to point PL be D; then:
D = B - (XR - XL);
According to the principle of similar triangles:
[B - (XR - XL)] / B = (Z - f) / Z;
which gives:
Z = fB / (XR - XL);
Since the focal length f, the camera center distance B, the coordinate XR of point P in the right image plane, and the coordinate XL of point P in the left image plane can be obtained through calibration, the depth can be obtained once (XR - XL) is known. Among them, f, B, XR, and XL can be determined through calibration, rectification, and matching; for details of the calibration, rectification, and matching work, reference may be made to the related art, which will not be repeated here.
The processor 40 may repeat the above steps to obtain the depth of every pixel in the first image, thereby obtaining a visible light depth image. The visible light depth image can be used in business scenarios such as large aperture, face/iris unlocking, face/iris payment, 3D beautification, studio lighting effects, and Animoji.
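The triangulation above maps directly to code. Below is a minimal sketch of the per-pixel computation Z = fB / (XR - XL) for coordinates that are already calibrated, rectified, and matched; the function name and the sample values are illustrative only.

```python
import numpy as np

def depth_from_disparity(x_r: np.ndarray, x_l: np.ndarray,
                         f: float, b: float) -> np.ndarray:
    """Depth from the similar-triangles relation Z = f * B / (XR - XL).
    f is the focal length in pixels, b the camera center distance in meters;
    x_r and x_l are matched pixel coordinates in the right/left image planes."""
    disparity = x_r - x_l
    z = np.full_like(disparity, np.inf, dtype=np.float64)
    valid = disparity > 0            # zero disparity corresponds to infinity
    z[valid] = f * b / disparity[valid]
    return z

# Illustrative numbers: f = 800 px, B = 25 mm, disparity = 40 px -> Z = 0.5 m.
print(depth_from_disparity(np.array([140.0]), np.array([100.0]), 800.0, 0.025))
```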
In an example, the angles of view of the lenses in the first camera module 10 and the second camera module 20 may be different, and the size relationship between the two is not limited. In this case, the processor 40 may, in combination with the angles of view of the two cameras, crop images of corresponding sizes from the first image and the second image. For example, a larger frame is cropped from the Bayer sub-image extracted from the second image, and a smaller frame is cropped from the first image; that is, the image cropped from the Bayer sub-image of the second image is larger than the image cropped from the first image. The two are then displayed one after another, which achieves a zoom effect; in other words, this embodiment can achieve a shooting effect similar to optical zoom, which helps improve the shooting experience.
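A minimal sketch of the cropping step described above. The crop sizes are illustrative assumptions, chosen so that the wide camera's crop and the narrow camera's crop frame the same scene region; switching between the two as the user zooms approximates optical zoom.

```python
import numpy as np

def center_crop(img: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Center-crop an HxW(xC) image to crop_h x crop_w."""
    h, w = img.shape[:2]
    top, left = (h - crop_h) // 2, (w - crop_w) // 2
    return img[top:top + crop_h, left:left + crop_w]

bayer_rgb = np.zeros((3000, 4000, 3), dtype=np.uint8)  # demosaiced Bayer sub-image
first_rgb = np.zeros((3000, 4000, 3), dtype=np.uint8)  # first image
wide = center_crop(bayer_rgb, 2400, 3200)   # larger crop from the wider camera
tele = center_crop(first_rgb, 1800, 2400)   # smaller crop from the narrower camera
print(wide.shape, tele.shape)
```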
In another example, considering that the second image also includes infrared information, the processor 40 may extract from the second image the infrared sub-image generated by sensing light of the second waveband. Since the high-frequency information in the frequency domain of the infrared sub-image is richer than that in the frequency domain of the first image, the infrared sub-image and the first image can be fused; for example, the high-frequency information of the infrared sub-image is extracted and added to the frequency domain of the first image, so as to enhance the first image, which can make the fused first image richer in detail, higher in definition, and more accurate in color. In addition, the infrared sub-image can also be used for biometric functions in the electronic device, such as fingerprint unlocking and face recognition.
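The frequency-domain fusion can be sketched as an FFT high-pass on the infrared sub-image whose output is injected into the first image. This is a minimal sketch; the cutoff radius, the gain, and the assumption that the IR sub-image has already been registered and upsampled to the RGB resolution are all illustrative choices, not specified by this disclosure.

```python
import numpy as np

def fuse_ir_detail(rgb: np.ndarray, ir: np.ndarray,
                   radius: int = 20, gain: float = 0.5) -> np.ndarray:
    """Add high-frequency detail of the IR image to the RGB image.
    rgb: HxWx3 float array in [0, 1]; ir: HxW float array in [0, 1],
    already registered and resampled to the RGB resolution."""
    # High-pass the IR image: zero out a low-frequency disk around DC.
    spec = np.fft.fftshift(np.fft.fft2(ir))
    h, w = ir.shape
    yy, xx = np.ogrid[:h, :w]
    spec[(yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2] = 0.0
    detail = np.real(np.fft.ifft2(np.fft.ifftshift(spec)))
    # Inject the detail into every channel (a luminance-only variant is common).
    return np.clip(rgb + gain * detail[..., None], 0.0, 1.0)
```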
In yet another example, considering that the infrared light source may be a structured light source, and still referring to Fig. 2, when the infrared light source includes a structured light source, the processor 40 may also obtain infrared light depth data based on the infrared sub-image. For example, the processor 40 can control the infrared light source 30 to project a beam in a specific direction onto a shooting object such as an object or the background, and obtain parameters such as the intensity of the echo signal of the beam or the size of the light spot; based on a preset correspondence between the parameters and distance, the processor 40 can obtain the infrared light depth data from the shooting object to the camera. Compared with the visible light depth image, the infrared light depth data may include texture information of the shooting object such as an object or the background. In this case, the processor 40 can choose to use the visible light depth image or the infrared light depth data according to the specific scene. For example, the visible light depth image can be used in high-light scenes (i.e., scenes where the ambient brightness value is greater than a preset brightness value, such as daytime scenes), scenes where the shooting object is translucent, or scenes where the shooting object absorbs infrared light, while the infrared light depth data can be used in low-light scenes (i.e., scenes where the ambient brightness value is less than the preset brightness value, such as night scenes), scenes where the shooting object is a textureless object, or scenes where the shooting object is a periodically repeating object. The visible light depth image and the infrared light depth data can also be fused to obtain a depth fusion image; the depth fusion image can compensate for the respective defects of the visible light depth image and the infrared light depth data, is applicable to almost all scenes, and is especially suitable for scenes with poor lighting conditions, textureless objects, or periodically repeating objects, which helps improve the confidence of the depth data.
In yet another example, considering that the infrared light source may be a TOF light source, and still referring to Fig. 2, when the infrared light source includes a TOF light source, the processor 40 may also obtain infrared light depth data based on the infrared sub-image; compared with the visible light depth image, this data may include texture information of the shooting object such as an object or the background. For example, the processor 40 can control the TOF light source to project a beam in a specific direction onto an object or the background, and calculate the infrared light depth data from the object to the camera based on the time difference between the emission time of the beam and the return time of its echo signal. In this case, the processor 40 can likewise choose to use the visible light depth image or the infrared light depth data according to the specific scene as described above, or fuse the two to obtain a depth fusion image with the same benefits.
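A minimal sketch of the scene-dependent choice between the two depth sources and their fusion. The brightness threshold and the per-pixel confidence map are assumed inputs used for illustration; the disclosure does not fix how they are obtained.

```python
import numpy as np

def select_or_fuse_depth(visible_depth: np.ndarray, ir_depth: np.ndarray,
                         ambient_brightness: float, threshold: float,
                         ir_confidence: np.ndarray) -> np.ndarray:
    """Prefer stereo (visible light) depth in bright scenes, infrared depth in
    dim scenes, and otherwise blend per pixel by the IR confidence map."""
    if ambient_brightness > threshold:          # high-light scene, e.g. daytime
        return visible_depth
    if ambient_brightness < 0.25 * threshold:   # clearly low-light, e.g. night
        return ir_depth
    w = np.clip(ir_confidence, 0.0, 1.0)        # depth fusion in between
    return w * ir_depth + (1.0 - w) * visible_depth
```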
It should be noted that, since this embodiment selects a structured light source or a TOF light source, it does not involve modifying or adding camera modules, so the design difficulty is greatly reduced.
So far, in the embodiments of the present disclosure, the first camera module in the camera assembly can capture a first image and the second camera module can acquire a second image, and a Bayer sub-image and an infrared sub-image can be acquired from the second image; in this way, image processing can be performed on the first image and at least one of the Bayer sub-image and the infrared sub-image, for example, to acquire a depth image. That is, a depth image can be obtained without setting a depth camera in the camera module array, which can reduce the volume of the camera assembly and the space it occupies in the electronic device, and is conducive to the miniaturization and cost reduction of the electronic device.
The embodiments of the present disclosure also provide an image processing method. Fig. 4 is a flowchart showing an image processing method according to an exemplary embodiment. Referring to Fig. 4, an image processing method, applied to the camera assembly provided in the above embodiments, may include:
Step 41: acquiring a first image generated by the first camera module and a second image generated by the second camera module; the second image includes a Bayer sub-image generated by the second camera module sensing light of the first waveband and an infrared sub-image generated by sensing light of the second waveband;
Step 42: performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image.
In an embodiment, step 42, performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image, may include:
fusing the infrared sub-image and the first image to enhance the first image.
In an embodiment, step 42, performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image, may include: acquiring a visible light depth image according to the Bayer sub-image and the first image.
In an embodiment, when the infrared light source includes a structured light source or a TOF light source, step 42, performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image, may include:
fusing depth data of the visible light depth image and the infrared sub-image to obtain a depth fusion image.
In an embodiment, the method may further include: in response to a zoom operation of a user, performing image zooming based on the first image and the Bayer sub-image.
It can be understood that the method provided in the embodiments of the present disclosure matches the working process of the above camera assembly; for specific content, reference may be made to the embodiments of the camera assembly, which will not be repeated here.
The embodiments of the present disclosure also provide an image processing device. Referring to Fig. 5, the device may include:
an image acquisition module 51, configured to acquire a first image generated by the first camera module and a second image generated by the second camera module; the second image includes a Bayer sub-image generated by the second camera module sensing light of the first waveband and an infrared sub-image generated by sensing light of the second waveband;
an image processing module 52, configured to perform image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image.
In an embodiment, the image processing module 52 may include:
an image enhancement unit, configured to fuse the infrared sub-image and the first image to enhance the first image.
In an embodiment, the image processing module 52 may include:
a depth image acquisition unit, configured to acquire a visible light depth image according to the Bayer sub-image and the first image.
In an embodiment, when the infrared light source includes a structured light source or a TOF light source, the image processing module includes:
a depth integration unit, configured to fuse depth data of the visible light depth image and the infrared sub-image to obtain a depth fusion image.
In an embodiment, the device may further include:
a zoom module, configured to, in response to a zoom operation of a user, perform image zooming based on the first image and the Bayer sub-image.
It can be understood that the device provided in the embodiments of the present disclosure corresponds to the above method embodiments; for specific content, reference may be made to each method embodiment, which will not be repeated here.
Fig. 6 is a block diagram showing an electronic device according to an exemplary embodiment. For example, the electronic device 600 may be a smartphone, a computer, a digital broadcast terminal, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 6, the electronic device 600 may include one or more of the following components: a processing component 602, a memory 604, a power supply component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, a communication component 616, and an image acquisition component 618.
The processing component 602 generally controls the overall operation of the electronic device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute computer programs. In addition, the processing component 602 may include one or more modules to facilitate interaction between the processing component 602 and other components. For example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation of the electronic device 600. Examples of such data include computer programs for any application or method operating on the electronic device 600, contact data, phone book data, messages, pictures, videos, and so on. The memory 604 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power supply component 606 provides power for the various components of the electronic device 600. The power supply component 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600. The power supply component 606 may include a power supply chip, and the controller may communicate with the power supply chip to control the power supply chip to turn the switching device on or off, so that the battery supplies or does not supply power to the motherboard circuit.
The multimedia component 608 includes a screen that provides an output interface between the electronic device 600 and the target object. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the target object. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC); when the electronic device 600 is in an operation mode such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 604 or sent via the communication component 616. In some embodiments, the audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like.
The sensor component 614 includes one or more sensors for providing various aspects of state evaluation for the electronic device 600. For example, the sensor component 614 can detect the on/off state of the electronic device 600 and the relative positioning of components (for example, the display and keypad of the electronic device 600); the sensor component 614 can also detect a change in the position of the electronic device 600 or one of its components, the presence or absence of contact between the target object and the electronic device 600, the orientation or acceleration/deceleration of the electronic device 600, and temperature changes of the electronic device 600.
The communication component 616 is configured to facilitate wired or wireless communication between the electronic device 600 and other devices. The electronic device 600 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The image acquisition component 618 is configured to capture images. For example, the image acquisition component 618 can be implemented with the camera assembly provided in the above embodiments.
In an exemplary embodiment, the electronic device 600 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components.
In an exemplary embodiment, there is also provided a non-transitory readable storage medium including an executable computer program, such as the memory 604 including instructions; the above executable computer program can be executed by a processor. The readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those skilled in the art will easily conceive of other embodiments of the present disclosure after considering the specification and practicing the disclosure herein. The present disclosure is intended to cover any variations, uses, or adaptations that follow the general principles of the present disclosure and include common knowledge or customary technical means in the technical field not disclosed by the present disclosure. The specification and embodiments are to be regarded as exemplary only, and the true scope and spirit of the present disclosure are indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and various modifications and changes can be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (15)

  1. A camera assembly, characterized by comprising: a first camera module that senses light of a first waveband, a second camera module that senses light of the first waveband and light of a second waveband, an infrared light source that emits light of the second waveband, and a processor; the processor being connected to the first camera module, the second camera module, and the infrared light source respectively;
    the first camera module being configured to generate a first image under the control of the processor;
    the infrared light source being configured to emit light of the second waveband under the control of the processor;
    the second camera module being configured to generate a second image under the control of the processor, the second image comprising a Bayer sub-image generated by sensing light of the first waveband and an infrared sub-image generated by sensing light of the second waveband;
    the processor being further configured to perform image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image.
  2. The camera assembly according to claim 1, wherein the infrared light source comprises at least one of the following: an infrared flood light source, a structured light source, or a TOF light source.
  3. The camera assembly according to claim 1, wherein the angles of view of the lenses in the first camera module and the second camera module are different.
  4. An image processing method, characterized by comprising:
    acquiring a first image generated by a first camera module and a second image generated by a second camera module; the second image comprising a Bayer sub-image generated by the second camera module sensing light of a first waveband and an infrared sub-image generated by sensing light of a second waveband;
    performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image.
  5. The image processing method according to claim 4, wherein performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image comprises:
    fusing the infrared sub-image and the first image to enhance the first image.
  6. The image processing method according to claim 4, wherein performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image comprises:
    acquiring a visible light depth image according to the Bayer sub-image and the first image.
  7. The image processing method according to claim 4, wherein when the infrared light source comprises a structured light source or a TOF light source, performing image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image comprises:
    fusing depth data of the visible light depth image and the infrared sub-image to obtain a depth fusion image.
  8. The image processing method according to claim 4, further comprising:
    in response to a zoom operation of a user, performing image zooming based on the first image and the Bayer sub-image.
  9. An image processing device, characterized by comprising:
    an image acquisition module, configured to acquire a first image generated by a first camera module and a second image generated by a second camera module; the second image comprising a Bayer sub-image generated by the second camera module sensing light of a first waveband and an infrared sub-image generated by sensing light of a second waveband;
    an image processing module, configured to perform image processing on the first image and at least one of the Bayer sub-image and the infrared sub-image.
  10. The image processing device according to claim 9, wherein the image processing module comprises:
    an image enhancement unit, configured to fuse the infrared sub-image and the first image to enhance the first image.
  11. The image processing device according to claim 9, wherein the image processing module comprises:
    a depth image acquisition unit, configured to acquire a visible light depth image according to the Bayer sub-image and the first image.
  12. The image processing device according to claim 9, wherein when the infrared light source comprises a structured light source or a TOF light source, the image processing module comprises:
    a depth integration unit, configured to fuse depth data of the visible light depth image and the infrared sub-image to obtain a depth fusion image.
  13. The image processing device according to claim 9, further comprising:
    a zoom module, configured to, in response to a zoom operation of a user, perform image zooming based on the first image and the Bayer sub-image.
  14. An electronic device, characterized by comprising:
    a camera assembly;
    a processor;
    a memory for storing a computer program executable by the processor;
    wherein the processor is configured to execute the computer program in the memory to implement the steps of the method according to any one of claims 4 to 8.
  15. A readable storage medium on which an executable computer program is stored, characterized in that when the computer program is executed, the steps of the method according to any one of claims 4 to 8 are implemented.
PCT/CN2020/092507 2020-05-27 2020-05-27 图像处理方法及装置、相机组件、电子设备、存储介质 WO2021237493A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US17/274,044 US20230076534A1 (en) 2020-05-27 2020-05-27 Image processing method and device, camera component, electronic device and storage medium
JP2020562134A JP7321187B2 (ja) 2020-05-27 2020-05-27 画像処理方法及び装置、カメラアセンブリ、電子機器、記憶媒体
EP20824074.7A EP3941042A4 (en) 2020-05-27 2020-05-27 IMAGE PROCESSING METHOD AND APPARATUS, CAMERA COMPONENT, ELECTRONIC DEVICE AND STORAGE MEDIA
CN202080001848.XA CN114073063B (zh) 2020-05-27 2020-05-27 图像处理方法及装置、相机组件、电子设备、存储介质
KR1020217006592A KR102458470B1 (ko) 2020-05-27 2020-05-27 이미지 처리 방법 및 장치, 카메라 컴포넌트, 전자 기기, 저장 매체
PCT/CN2020/092507 WO2021237493A1 (zh) 2020-05-27 2020-05-27 图像处理方法及装置、相机组件、电子设备、存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/092507 WO2021237493A1 (zh) 2020-05-27 2020-05-27 图像处理方法及装置、相机组件、电子设备、存储介质

Publications (1)

Publication Number Publication Date
WO2021237493A1 true WO2021237493A1 (zh) 2021-12-02

Family

ID=78745233

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/092507 WO2021237493A1 (zh) 2020-05-27 2020-05-27 图像处理方法及装置、相机组件、电子设备、存储介质

Country Status (6)

Country Link
US (1) US20230076534A1 (zh)
EP (1) EP3941042A4 (zh)
JP (1) JP7321187B2 (zh)
KR (1) KR102458470B1 (zh)
CN (1) CN114073063B (zh)
WO (1) WO2021237493A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120327195A1 (en) * 2011-06-24 2012-12-27 Mstar Semiconductor, Inc. Auto Focusing Method and Apparatus
US20150009295A1 (en) * 2013-07-03 2015-01-08 Electronics And Telecommunications Research Institute Three-dimensional image acquisition apparatus and image processing method using the same
CN106534633A (zh) * 2016-10-27 2017-03-22 深圳奥比中光科技有限公司 一种组合摄像系统、移动终端及图像处理方法
CN107172407A (zh) * 2016-03-08 2017-09-15 聚晶半导体股份有限公司 适于产生深度图的电子装置与方法
CN107395974A (zh) * 2017-08-09 2017-11-24 广东欧珀移动通信有限公司 图像处理系统及方法
CN108234984A (zh) * 2018-03-15 2018-06-29 百度在线网络技术(北京)有限公司 双目深度相机系统和深度图像生成方法

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10831093B1 (en) * 2008-05-19 2020-11-10 Spatial Cam Llc Focus control for a plurality of cameras in a smartphone
US8194149B2 (en) * 2009-06-30 2012-06-05 Cisco Technology, Inc. Infrared-aided depth estimation
KR20140125984A (ko) * 2013-04-19 2014-10-30 삼성전자주식회사 영상 처리 방법 및 이를 지원하는 전자 장치와 시스템
JP2016082390A (ja) * 2014-10-16 2016-05-16 ソニー株式会社 信号処理装置
US9674504B1 (en) 2015-12-22 2017-06-06 Aquifi, Inc. Depth perceptive trinocular camera system
TWI590659B (zh) * 2016-05-25 2017-07-01 宏碁股份有限公司 影像處理方法及攝像裝置
DE112017005193T5 (de) * 2016-10-14 2019-07-04 Mitsubishi Electric Corporation Bildverarbeitungsvorrichtung, Bildverarbeitungsverfahren und Bildaufnahmevorrichtung
CN106780392B (zh) * 2016-12-27 2020-10-02 浙江大华技术股份有限公司 一种图像融合方法及装置
CN106982327B (zh) * 2017-03-31 2020-02-28 北京小米移动软件有限公司 图像处理方法和装置
CN108093240A (zh) 2017-12-22 2018-05-29 成都先锋材料有限公司 3d深度图获取方法及装置
CN108259722A (zh) * 2018-02-27 2018-07-06 厦门美图移动科技有限公司 成像方法、装置及电子设备
US10771766B2 (en) * 2018-03-30 2020-09-08 Mediatek Inc. Method and apparatus for active stereo vision
CN110349196B (zh) * 2018-04-03 2024-03-29 联发科技股份有限公司 深度融合的方法和装置
WO2020015821A1 (en) * 2018-07-17 2020-01-23 Vestel Elektronik Sanayi Ve Ticaret A.S. A device having exactly two cameras and a method of generating two images using the device
JP7191597B2 (ja) * 2018-08-30 2022-12-19 キヤノン株式会社 撮像装置及びそれを備える監視システム、制御方法並びにプログラム
CN109544618B (zh) * 2018-10-30 2022-10-25 荣耀终端有限公司 一种获取深度信息的方法及电子设备
CN110246108B (zh) * 2018-11-21 2023-06-20 浙江大华技术股份有限公司 一种图像处理方法、装置及计算机可读存储介质
CN110213501A (zh) * 2019-06-25 2019-09-06 浙江大华技术股份有限公司 一种抓拍方法、装置、电子设备及存储介质
CN111062378B (zh) * 2019-12-23 2021-01-26 重庆紫光华山智安科技有限公司 图像处理方法、模型训练方法、目标检测方法及相关装置
CN114125193A (zh) * 2020-08-31 2022-03-01 安霸国际有限合伙企业 使用具有结构光的rgb-ir传感器得到无污染视频流的计时机构

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120327195A1 (en) * 2011-06-24 2012-12-27 Mstar Semiconductor, Inc. Auto Focusing Method and Apparatus
US20150009295A1 (en) * 2013-07-03 2015-01-08 Electronics And Telecommunications Research Institute Three-dimensional image acquisition apparatus and image processing method using the same
CN107172407A (zh) * 2016-03-08 2017-09-15 聚晶半导体股份有限公司 适于产生深度图的电子装置与方法
CN106534633A (zh) * 2016-10-27 2017-03-22 深圳奥比中光科技有限公司 一种组合摄像系统、移动终端及图像处理方法
CN107395974A (zh) * 2017-08-09 2017-11-24 广东欧珀移动通信有限公司 图像处理系统及方法
CN108234984A (zh) * 2018-03-15 2018-06-29 百度在线网络技术(北京)有限公司 双目深度相机系统和深度图像生成方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3941042A4 *

Also Published As

Publication number Publication date
JP7321187B2 (ja) 2023-08-04
CN114073063A (zh) 2022-02-18
EP3941042A1 (en) 2022-01-19
CN114073063B (zh) 2024-02-13
US20230076534A1 (en) 2023-03-09
KR102458470B1 (ko) 2022-10-25
EP3941042A4 (en) 2022-01-19
KR20210149018A (ko) 2021-12-08
JP2022538947A (ja) 2022-09-07

Similar Documents

Publication Publication Date Title
CN110505411B (zh) 图像拍摄方法、装置、存储介质及电子设备
CN106878605B (zh) 一种基于电子设备的图像生成的方法和电子设备
CN114092364B (zh) 图像处理方法及其相关设备
US9300858B2 (en) Control device and storage medium for controlling capture of images
JPWO2009139154A1 (ja) 撮像装置及び撮像方法
CN110169042B (zh) 用于使相机闪光与传感器消隐同步的方法和设备
CN108040204B (zh) 一种基于多摄像头的图像拍摄方法、装置及存储介质
CN113810590A (zh) 图像处理方法、电子设备、介质和系统
CN106982327B (zh) 图像处理方法和装置
WO2021185374A1 (zh) 一种拍摄图像的方法及电子设备
WO2021237493A1 (zh) 图像处理方法及装置、相机组件、电子设备、存储介质
EP2658245B1 (en) System and method of adjusting camera image data
KR102512787B1 (ko) 촬영 프리뷰 이미지를 표시하는 방법, 장치 및 매체
US20190052815A1 (en) Dual-camera image pick-up apparatus and image capturing method thereof
US20230058472A1 (en) Sensor prioritization for composite image capture
CN114286072A (zh) 色彩还原装置及方法、图像处理器
CN109600547B (zh) 拍照方法、装置、电子设备和存储介质
CN114765654B (zh) 一种拍摄组件、终端设备、拍摄方法、拍摄装置
CA2776009C (en) System and method of adjusting camera image data
WO2023160220A1 (zh) 一种图像处理方法和电子设备
CN115526786B (zh) 图像处理方法及其相关设备
WO2023236225A1 (zh) 终端、终端控制方法及其装置、图像处理方法及其装置
WO2024067071A1 (zh) 一种拍摄方法、电子设备及介质
WO2023225825A1 (zh) 位置差异图生成方法及装置、电子设备、芯片及介质
WO2023236215A1 (zh) 图像处理方法、装置及存储介质

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2020562134

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020824074

Country of ref document: EP

Effective date: 20201222

NENP Non-entry into the national phase

Ref country code: DE