WO2021093635A1 - Image processing method and apparatus, electronic device, and computer readable storage medium - Google Patents

Image processing method and apparatus, electronic device, and computer readable storage medium

Info

Publication number
WO2021093635A1
Authority
WO
WIPO (PCT)
Prior art keywords
phase difference
image
target
sub-region
brightness map
Prior art date
Application number
PCT/CN2020/126122
Other languages
French (fr)
Chinese (zh)
Inventor
贾玉虎
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2021093635A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/698: Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N23/951: Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Definitions

  • This application relates to the field of image processing technology, and in particular to an image processing method and device, electronic equipment, and computer-readable storage media.
  • The current focusing method focuses within a rectangular frame.
  • However, the rectangular frame usually contains both foreground and background, and focus can only be achieved at a single position.
  • When focusing on the foreground, the background is out of focus; when focusing on the background, the foreground is out of focus.
  • Traditional image processing methods therefore suffer from low image clarity.
  • an image processing method, apparatus, electronic device, and computer-readable storage medium are provided.
  • An image processing method, applied to an electronic device, includes:
  • acquiring a preview image;
  • dividing the preview image into at least two sub-areas;
  • acquiring a phase difference corresponding to each of the at least two sub-areas;
  • determining at least two target phase differences from the phase differences corresponding to the sub-areas, the at least two target phase differences including a target foreground phase difference and a target background phase difference;
  • performing focusing according to each target phase difference to obtain an image corresponding to each target phase difference; and
  • synthesizing the images corresponding to the target phase differences to obtain a fully in-focus image.
  • An image processing apparatus, comprising:
  • a preview image acquisition module configured to acquire a preview image;
  • a dividing module configured to divide the preview image into at least two sub-areas;
  • a phase difference acquisition module configured to acquire the phase difference corresponding to each of the at least two sub-regions;
  • the phase difference acquisition module being further configured to determine at least two target phase differences from the phase differences corresponding to the sub-regions, the at least two target phase differences including a target foreground phase difference and a target background phase difference;
  • a focusing module configured to perform focusing according to each target phase difference to obtain an image corresponding to each target phase difference; and
  • a synthesis module configured to synthesize the images corresponding to the target phase differences to obtain a fully in-focus image.
  • An electronic device includes a memory and a processor, the memory storing a computer program.
  • When the computer program is executed by the processor, the processor performs the operations of the image processing method described above, including:
  • synthesizing the images corresponding to the target phase differences to obtain a fully in-focus image.
  • A computer-readable storage medium has a computer program stored thereon; when the computer program is executed by a processor, the operations of the image processing method described above are implemented, including:
  • synthesizing the images corresponding to the target phase differences to obtain a fully in-focus image.
  • According to the above image processing method and apparatus, electronic device, and computer-readable storage medium, a preview image is acquired and divided into at least two sub-areas, the phase difference corresponding to each of the at least two sub-areas is acquired, and at least two target phase differences,
  • including a target foreground phase difference and a target background phase difference, are determined from the phase differences corresponding to the sub-areas. Focusing is performed according to each target phase difference to obtain an image corresponding to each target phase difference.
  • In this way, at least two images with different focus positions are acquired, one being a background in-focus image and another being a foreground in-focus image, and the images corresponding to the target phase differences are synthesized to obtain a fully in-focus image,
  • which reduces the out-of-focus area and improves the sharpness of the image.
  • Fig. 1 is an application environment diagram of an image processing method in an embodiment.
  • Fig. 2 is a flowchart of an image processing method in an embodiment.
  • Fig. 3 is a schematic diagram of the principle of phase focusing in an embodiment.
  • FIG. 4 is a schematic diagram of phase detection pixel points arranged in pairs in the pixel points included in the image sensor in an embodiment.
  • Fig. 5 is a schematic diagram of a part of the structure of an electronic device in an embodiment.
  • FIG. 6 is a schematic structural diagram of a part of the image sensor 504 in an embodiment.
  • FIG. 7 is a schematic diagram of the structure of pixels in an embodiment.
  • FIG. 8 is a schematic diagram of the internal structure of an image sensor in an embodiment.
  • FIG. 9 is a schematic diagram of the pixel point group Z in an embodiment.
  • FIG. 10 is a schematic diagram of a process of obtaining the phase difference corresponding to each sub-region in an embodiment.
  • FIG. 11 is a schematic diagram of performing segmentation processing on the target brightness map in the first direction in an embodiment.
  • FIG. 12 is a schematic diagram of performing segmentation processing on the target brightness map in the second direction in an embodiment.
  • Fig. 13 is a schematic flow chart of synthesizing to obtain a full in-focus image in an embodiment.
  • Fig. 14 is a schematic flow chart of synthesizing to obtain a full in-focus image in another embodiment.
  • Fig. 15 is a schematic flow chart of synthesizing a full in-focus image in another embodiment.
  • Fig. 16 is a structural block diagram of an image processing apparatus according to an embodiment.
  • Fig. 17 is a schematic diagram of the internal structure of an electronic device in an embodiment.
  • The terms "first" and "second" used in this application can be used herein to describe various elements, but these elements are not limited by these terms; the terms are only used to distinguish one element from another.
  • For example, without departing from the scope of this application, the first phase difference mean may be referred to as the second phase difference mean, and similarly, the second phase difference mean may be referred to as the first phase difference mean.
  • The first phase difference mean and the second phase difference mean are both phase difference means, but they are not the same phase difference mean.
  • Likewise, the first image feature may be referred to as the second image feature, and the second image feature may be referred to as the first image feature; both are image features, but they are not the same image feature.
  • the embodiment of the present application provides an electronic device.
  • The electronic device can be any terminal device, including a mobile phone, tablet computer, PDA (Personal Digital Assistant), POS (Point of Sale) terminal, on-board computer, wearable device, and the like. The following takes a mobile phone as an example.
  • the above-mentioned electronic equipment includes an image processing circuit, which can be implemented by hardware and/or software components, and can include various processing units that define an ISP (Image Signal Processing, image signal processing) pipeline.
  • Fig. 1 is a schematic diagram of an image processing circuit in an embodiment. As shown in FIG. 1, for ease of description, only various aspects of the image processing technology related to the embodiments of the present application are shown.
  • the image processing circuit includes an ISP processor 140 and a control logic 150.
  • The image data captured by the imaging device 110 is first processed by the ISP processor 140, which analyzes the image data to capture image statistics that can be used to determine one or more control parameters of the imaging device 110.
  • the imaging device 110 may include a camera having one or more lenses 112 and an image sensor 114.
  • The image sensor 114 may include a color filter array (such as a Bayer filter). The image sensor 114 may obtain the light intensity and wavelength information captured by each imaging pixel of the image sensor 114, and provide a set of raw image data that can be processed by the ISP processor 140.
  • the attitude sensor 120 (such as a three-axis gyroscope, a Hall sensor, and an accelerometer) can provide the collected image processing parameters (such as anti-shake parameters) to the ISP processor 140 based on the interface type of the attitude sensor 120.
  • the interface of the attitude sensor 120 may use an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the foregoing interfaces.
  • The image sensor 114 may also send the raw image data to the attitude sensor 120.
  • The attitude sensor 120 can provide the raw image data to the ISP processor 140 based on the interface type of the attitude sensor 120, or store the raw image data in the image memory 130.
  • the ISP processor 140 processes the original image data pixel by pixel in a variety of formats.
  • each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 140 may perform one or more image processing operations on the original image data, and collect statistical information about the image data. Among them, the image processing operations can be performed with the same or different bit depth accuracy.
  • the ISP processor 140 may also receive image data from the image memory 130.
  • The attitude sensor 120 interface sends the raw image data to the image memory 130, and the raw image data in the image memory 130 is then provided to the ISP processor 140 for processing.
  • the image memory 130 may be a part of a memory device, a storage device, or an independent dedicated memory in an electronic device, and may include DMA (Direct Memory Access) features.
  • the ISP processor 140 may perform one or more image processing operations, such as temporal filtering.
  • the processed image data can be sent to the image memory 130 for additional processing before being displayed.
  • The ISP processor 140 receives the processed data from the image memory 130 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces.
  • the image data processed by the ISP processor 140 may be output to the display 160 for viewing by the user and/or further processed by a graphics engine or a GPU (Graphics Processing Unit, graphics processor).
  • the output of the ISP processor 140 can also be sent to the image memory 130, and the display 160 can read image data from the image memory 130.
  • the image memory 130 may be configured to implement one or more frame buffers.
  • the statistical data determined by the ISP processor 140 may be sent to the control logic 150 unit.
  • the statistical data may include image sensor 114 statistical information such as the vibration frequency of the gyroscope, automatic exposure, automatic white balance, automatic focus, flicker detection, black level compensation, and lens 112 shadow correction.
  • The control logic 150 may include a processor and/or a microcontroller that executes one or more routines (such as firmware). The one or more routines can determine, based on the received statistical data, the control parameters of the imaging device 110 and the control parameters of the ISP processor 140.
  • Control parameters of the imaging device 110 may include attitude sensor 120 control parameters (such as gain, integration time of exposure control, anti-shake parameters, etc.), camera flash control parameters, camera anti-shake displacement parameters, lens 112 control parameters (such as focus or zoom focal length), or a combination of these parameters.
  • the ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), and lens 112 shading correction parameters.
  • the image sensor 114 in the imaging device may include a plurality of pixel point groups arranged in an array, wherein each pixel point group includes a plurality of pixel points arranged in an array, and each pixel point includes Multiple sub-pixels arranged in an array.
  • the first image is acquired through the lens 112 and the image sensor 114 in the imaging device (camera) 110, and the first image is sent to the ISP processor 140.
  • The ISP processor 140 can perform subject detection on the first image to obtain the region of interest in the first image, or obtain the region selected by the user as the region of interest, or obtain the region of interest in other ways;
  • the manner of obtaining the region of interest is not limited thereto.
  • The ISP processor 140 is configured to obtain a preview image, divide the preview image into at least two sub-areas, obtain a phase difference corresponding to each of the at least two sub-areas, and determine at least two target phase differences from the phase differences corresponding to the sub-areas,
  • the at least two target phase differences including the target foreground phase difference and the target background phase difference; focusing is performed according to each target phase difference to obtain an image corresponding to each target phase difference, and the images corresponding to the target phase differences are synthesized to obtain a fully in-focus image.
  • the ISP processor 140 may send relevant information of the target sub-region, such as location information, contour information, etc., to the control logic 150.
  • control logic 150 controls the lens 112 in the imaging device (camera) to move, so as to focus on the position in the actual scene corresponding to the target area.
  • Fig. 2 is a flowchart of an image processing method in an embodiment. As shown in FIG. 2, an image processing method applied to an electronic device includes operation 202 to operation 212.
  • In operation 202, a preview image is obtained.
  • the number of cameras of the electronic device is not limited. For example, it may be one or two... and it is not limited to this.
  • the form of the camera installed in the electronic device is not limited. For example, it can be a camera built into the electronic device, or it can be an external camera of the electronic device. It can be a front camera or a rear camera.
  • the camera on the electronic device can be any type of camera.
  • the camera may be a color camera, a black-and-white camera, a depth camera, a telephoto camera, a wide-angle camera, etc., but is not limited thereto.
  • the preview image may be a visible light image.
  • the preview image refers to the image presented on the screen of the electronic device when the camera is not shooting.
  • the preview image can be the preview image of the current frame.
  • the electronic device obtains a preview image through a camera and displays it on the display screen.
  • In operation 204, the preview image is divided into at least two sub-areas.
  • the sub-region refers to an image region in the preview image.
  • the sub-region is a part of the image. That is, the sub-region includes a part of the pixels of the preview image.
  • The sizes and shapes of the sub-regions obtained by dividing the preview image may all be the same, may all be different, or may be partly the same and partly different.
  • the specific division method is not limited.
  • the electronic device divides the preview image into at least two sub-areas.
  • the electronic device may divide the preview image into M ⁇ N sub-areas. Both N and M are positive integers, and the values of N and M may be the same or different.
  • For example, if the preview image is 100×100 pixels and is divided into 4×4 sub-regions, each sub-region is 25×25 pixels.
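  • As an illustration of this division, the following is a minimal sketch assuming the preview image is a numpy array and the grid divides the image evenly; the function name split_into_subregions is hypothetical, not from this application.

```python
import numpy as np

def split_into_subregions(preview: np.ndarray, m: int, n: int):
    """Divide a preview image into an m x n grid of equally sized sub-regions.

    Assumes the image height is divisible by m and the width by n.
    Returns a list of (row_index, col_index, sub_image) tuples.
    """
    h, w = preview.shape[:2]
    sub_h, sub_w = h // m, w // n
    regions = []
    for i in range(m):
        for j in range(n):
            sub = preview[i * sub_h:(i + 1) * sub_h, j * sub_w:(j + 1) * sub_w]
            regions.append((i, j, sub))
    return regions

# Example: a 100 x 100 preview divided into a 4 x 4 grid of 25 x 25 sub-regions.
preview = np.zeros((100, 100), dtype=np.uint8)
subregions = split_into_subregions(preview, 4, 4)
assert subregions[0][2].shape == (25, 25)
```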
  • In operation 206, a phase difference corresponding to each of the at least two sub-regions is obtained.
  • The phase difference refers to the difference in the positions at which imaging light entering the lens from different directions forms images on the image sensor.
  • the electronic device includes an image sensor
  • the image sensor may include a plurality of pixel point groups arranged in an array, each pixel point group includes M*N pixel points arranged in an array; each pixel point corresponds to a photosensitive unit, Among them, both M and N are natural numbers greater than or equal to 2.
  • the phase difference corresponding to each sub-region may include the first phase difference and the second phase difference.
  • the first direction corresponding to the first phase difference and the second direction corresponding to the second phase difference form a preset angle.
  • The preset included angle may be any angle other than 0 degrees, 180 degrees, and 360 degrees; that is, each sub-region may have two corresponding phase differences.
  • In an embodiment, the electronic device acquires the confidence of the first phase difference and the confidence of the second phase difference, compares the two confidences,
  • and uses the phase difference with the higher confidence as the phase difference corresponding to the sub-region.
  • In order to perform phase detection auto focus, some phase detection pixels (which may also be called shielded pixels) are usually arranged in pairs among the pixels included in the image sensor; in each phase detection pixel pair, one phase detection pixel is shielded on the left side and the other is shielded on the right side.
  • In this way, the imaging beam directed at each phase detection pixel pair is separated into a left part and a right part.
  • The phase difference corresponding to each sub-region can be obtained by comparing the images formed by the two parts of the imaging beam.
  • In operation 208, at least two target phase differences are determined from the phase differences corresponding to the sub-regions, the at least two target phase differences including the target foreground phase difference and the target background phase difference.
  • the foreground refers to the part with smaller depth in the image.
  • the foreground contains the subject.
  • the foreground is generally the object that the user wants to focus on.
  • the at least two target phase differences include the foreground phase difference and the background phase difference, and may also include other phase differences. For example, the phase difference between the foreground and the background.
  • the electronic device determines at least two target phase differences from the phase differences corresponding to each area, and the at least two target phase differences include at least the target foreground phase difference and the target background phase difference.
  • In operation 210, focusing is performed according to each target phase difference, and an image corresponding to each target phase difference is obtained.
  • Focusing refers to the process of adjusting the lens through the focusing mechanism of the electronic device so that the image of the object becomes clear.
  • Focus can refer to auto focus.
  • Auto focus may refer to Phase Detection Auto Focus (PDAF) and other auto focus methods combined with phase focus.
  • Phase focusing is to obtain the phase difference through the sensor, calculate the defocus value according to the phase difference, and control the lens to move the corresponding distance according to the defocus value to achieve focus.
  • Phase focus can be combined with other focus methods, such as continuous auto focus, laser focus, etc.
  • the electronic device when receiving a photographing instruction, performs focusing according to each target phase difference of the at least two target phase differences, and obtains an image corresponding to each target phase difference.
  • The electronic device can calculate the defocus value corresponding to each target phase difference of the at least two target phase differences, and control the lens to move the corresponding distance according to each defocus value to obtain the image corresponding to each target phase difference.
  • The defocus value refers to the distance between the current position of the image sensor and the position where the image sensor should be in the in-focus state.
  • Each phase difference has a corresponding defocus value.
  • the defocus value corresponding to each phase difference can be the same or different.
  • the relationship between the phase difference and the defocus value can be obtained by pre-calibration.
  • the relationship between the phase difference and the defocus value may be a linear relationship or a nonlinear relationship.
  • the at least two target phase differences include target phase difference A, target phase difference B, and target phase difference C.
  • the target phase difference A is the target foreground phase difference
  • the target phase difference C is the target background phase difference
  • the target phase difference B is the phase difference between the target phase difference A and the target phase difference C.
  • the defocus value A is calculated according to the target phase difference A, and the lens is moved by the corresponding distance according to the defocus value A to obtain the image A corresponding to the target phase difference A.
  • the defocus value B is calculated according to the target phase difference B, and the lens is controlled to move the corresponding distance according to the defocus value B to obtain the image B corresponding to the target phase difference B.
  • the defocus value C is calculated according to the target phase difference C, and the lens is moved by the corresponding distance according to the defocus value C to obtain the image C corresponding to the target phase difference C. Then the electronic device gets image A, image B, and image C.
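  • The per-target focusing loop described above can be sketched as follows. This is a hedged illustration assuming a pre-calibrated linear phase-difference-to-defocus model; the slope value and the move_lens/capture_image interfaces are hypothetical placeholders for the actuator and capture APIs of the imaging device.

```python
def defocus_from_phase_difference(pd: float, slope: float = 10.0, offset: float = 0.0) -> float:
    """Convert a phase difference to a defocus value using a pre-calibrated
    linear model; real calibrations may be nonlinear (e.g. a lookup table)."""
    return slope * pd + offset

def capture_per_target(target_pds, move_lens, capture_image):
    """For each target phase difference, compute its defocus value, move the
    lens by the corresponding distance, and capture one image, yielding one
    image per target phase difference (e.g. images A, B, C above)."""
    images = []
    for pd in target_pds:
        move_lens(defocus_from_phase_difference(pd))
        images.append(capture_image())
    return images
```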
  • the processing order of the target phase difference is not limited.
  • In operation 212, the images corresponding to the target phase differences are synthesized to obtain a fully in-focus image.
  • the fully in-focus image refers to an image in which there is no out-of-focus area theoretically.
  • Image stitching refers to combining several images, which may be obtained by focusing at different positions or which correspond to different phase differences, to form a seamless panoramic image or a high-resolution image.
  • the electronic device may stitch and synthesize the clear parts of the image corresponding to each target phase difference to obtain a fully in-focus image.
  • In an embodiment, the electronic device may use the Laplacian pyramid method to synthesize the images corresponding to the target phase differences to obtain a fully in-focus image.
  • In an embodiment, the electronic device may input the images corresponding to the target phase differences into a convolutional neural network model for synthesis to obtain a fully in-focus image; the synthesis manner is not limited thereto.
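  • The following is a simplified synthesis sketch. Instead of a full Laplacian pyramid fusion, it picks each output pixel from the input image with the strongest local Laplacian response; it assumes the captured images are grayscale and already aligned, and the function name fuse_focus_stack is hypothetical.

```python
import cv2
import numpy as np

def fuse_focus_stack(images):
    """Fuse aligned grayscale images focused at different depths into one
    fully in-focus image by selecting, per pixel, the image with the highest
    local sharpness (absolute Laplacian, smoothed for stability)."""
    stack = np.stack([img.astype(np.float32) for img in images])
    sharpness = np.stack([
        cv2.GaussianBlur(np.abs(cv2.Laplacian(img, cv2.CV_32F)), (9, 9), 0)
        for img in stack
    ])
    best = np.argmax(sharpness, axis=0)  # index of the sharpest image per pixel
    fused = np.take_along_axis(stack, best[None, ...], axis=0)[0]
    return np.clip(fused, 0, 255).astype(np.uint8)

# e.g. fused = fuse_focus_stack([image_a, image_b, image_c])
```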
  • In the image processing method in this embodiment, a preview image is obtained and divided into at least two sub-regions, the phase difference corresponding to each of the at least two sub-regions is obtained, and at least two
  • target phase differences, including the target foreground phase difference and the target background phase difference, are determined from the phase differences corresponding to the sub-regions.
  • Focusing is performed according to each target phase difference to obtain an image corresponding to each target phase difference, so that at least two images with different focus positions can be acquired:
  • one is the background in-focus image, and another is the foreground in-focus image.
  • The images corresponding to the target phase differences are then synthesized to obtain a fully in-focus image, yielding an image with less out-of-focus area and improving the clarity of the image.
  • FIG. 3 is a schematic diagram of the principle of phase focusing in an embodiment.
  • M1 is the position of the image sensor when the electronic device is in the in-focus state.
  • the in-focus state refers to the state of successful focusing.
  • As shown in Figure 3, when the image sensor is at the M1 position, the imaging light g reflected by the object W toward the lens Lens in different directions converges on the image sensor; that is, the imaging light g in different directions reflected by the object W toward the lens Lens is imaged at the same position on the image sensor, and at this time the image on the image sensor is clear.
  • M2 and M3 are the possible positions of the image sensor when the electronic device is not in focus.
  • When the image sensor is at the M2 position or the M3 position, the imaging light g reflected by the object W toward the lens Lens in different directions is imaged at different positions.
  • When the image sensor is at the M2 position,
  • the imaging light g reflected by the object W in different directions toward the lens Lens is imaged at position A and position B respectively.
  • When the image sensor is at the M3 position, the imaging light g reflected by the object W in different directions toward the lens Lens is imaged at position C and position D respectively; at this time, the image on the image sensor is not clear.
  • In order to calculate the phase difference, the difference in the positions of the images formed on the image sensor by imaging light entering the lens from different directions can be obtained.
  • For example, the difference between position A and position B can be obtained, or the difference between position C and position D can be obtained. After this positional difference is obtained, the defocus value can be calculated from the difference and the
  • geometric relationship between the lens and the image sensor in the camera.
  • The defocus value refers to the distance between the current position of the image sensor and the position where the image sensor should be in the in-focus state; the electronic device can focus according to the obtained defocus value.
  • the "difference in the position of the image formed by the imaging light entering the lens from different directions on the image sensor" can generally be referred to as a phase difference.
  • obtaining the phase difference is a very critical technical link.
  • The phase difference can be applied to a variety of different scenes; focusing is only one possible scene.
  • For example, the phase difference can be applied to depth map acquisition, that is, the phase difference can be used to acquire a depth map; for another example, the phase difference can be used in three-dimensional image reconstruction, that is,
  • the phase difference can be used to realize the reconstruction of a three-dimensional image.
  • the embodiment of the present application aims to provide a method for obtaining the phase difference. As for the scene to which the phase difference is applied after the phase difference is obtained, the embodiment of the present application does not specifically limit it.
  • FIG. 4 is a schematic diagram of phase detection pixel points arranged in pairs in the pixel points included in the image sensor in an embodiment.
  • As shown in FIG. 4, a phase detection pixel point pair (hereinafter referred to as a pixel point pair) A, a pixel point pair B, and a pixel point pair C may be provided among the pixel points included in the image sensor.
  • In each pixel point pair, one phase detection pixel is shielded on the left side (Left Shield),
  • and the other phase detection pixel is shielded on the right side (Right Shield).
  • the imaging beam can be divided into left and right parts, and the phase difference can be obtained by comparing the images formed by the left and right parts of the imaging beam.
  • In the focusing method shown in FIG. 4, the phase difference is obtained through the sensor, the defocus value is calculated according to the phase difference, the lens is moved according to the defocus value, and the focus value (FV) peak is then searched for.
  • Operation (a1): the phase difference corresponding to each of the at least two sub-regions is divided into a foreground phase difference set and a background phase difference set.
  • the foreground phase difference set includes at least one foreground phase difference.
  • the background phase difference set includes at least one background phase difference.
  • the phase difference threshold may be stored in the electronic device.
  • the phase difference greater than the phase difference threshold is divided into a background phase difference set, and the phase difference less than or equal to the phase difference threshold is divided into a foreground phase difference set.
  • the electronic device calculates the median phase difference according to the phase difference corresponding to each sub-region.
  • the phase difference greater than the median of the phase difference is divided into the background phase difference set, and the phase difference less than or equal to the median of the phase difference is divided into the foreground phase difference set.
  • Operation (a2) is to obtain the first mean value of the phase difference corresponding to the foreground phase difference set.
  • the electronic device obtains an average value according to the phase difference in the foreground phase difference set to obtain the first average value of the phase difference.
  • Operation (a3) is to obtain the second mean value of phase difference corresponding to the background phase difference set.
  • the electronic device obtains the average value according to the phase difference in the background phase difference set to obtain the second average value of the phase difference.
  • Operation (a4): the first mean value of the phase difference is used as the target foreground phase difference.
  • the electronic device uses the first average value of phase difference as the target foreground phase difference.
  • In an embodiment, the corresponding first defocus value is calculated according to the first mean value of the phase difference, and the lens is controlled to move the corresponding distance according to the first defocus value to obtain the image corresponding to the first mean value of the phase difference.
  • the area corresponding to the first average value of phase difference is used as the focus area for focusing, and an image corresponding to the first average value of phase difference is obtained.
  • Operation (a5): the second mean value of the phase difference is used as the target background phase difference.
  • The electronic device uses the second mean value of the phase difference as the target background phase difference.
  • In an embodiment, the corresponding second defocus value is calculated according to the second mean value of the phase difference, and the lens is controlled to move the corresponding distance according to the second defocus value to obtain the image corresponding to the second mean value of the phase difference.
  • the area corresponding to the second average value of phase difference is used as the focus area for focusing to obtain an image corresponding to the second average value of phase difference.
  • In the image processing method in this embodiment, the phase difference corresponding to each of the at least two sub-regions is divided into a foreground phase difference set and a background phase difference set, the first phase difference mean corresponding to the foreground phase difference set and
  • the second phase difference mean corresponding to the background phase difference set are obtained, the first phase difference mean is used as the target foreground phase difference, and the second phase difference mean is used as the target background phase difference,
  • so that the foreground in-focus image and the background in-focus image can be obtained according to the mean values, improving the clarity of the image.
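  • A minimal sketch of operations (a1) to (a5), assuming the per-sub-region phase differences are collected in a flat list and using the median as the splitting rule (a fixed threshold works the same way); the function name is hypothetical.

```python
import numpy as np

def target_phase_differences(pds):
    """Split per-sub-region phase differences into a foreground set (values
    at or below the median: nearer scene) and a background set (values above
    the median: farther scene), then return the mean of each set as the
    target foreground and target background phase differences."""
    pds = np.asarray(pds, dtype=np.float32)
    median = float(np.median(pds))
    foreground = pds[pds <= median]
    background = pds[pds > median]
    target_foreground_pd = float(foreground.mean())
    # Guard against a degenerate split (all phase differences equal).
    target_background_pd = float(background.mean()) if background.size else target_foreground_pd
    return target_foreground_pd, target_background_pd
```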
  • In an embodiment, the image processing method further includes: excluding the maximum phase difference among the phase differences corresponding to the sub-regions to obtain a remaining phase difference set. Dividing the phase difference corresponding to each of the at least two sub-regions into the foreground phase difference
  • set and the background phase difference set then includes: dividing the remaining phase difference set into a foreground phase difference set and a background phase difference set.
  • the area corresponding to the largest phase difference among the phase differences corresponding to the sub-regions is the area corresponding to the farthest scene in the preview image. Excluding the maximum phase difference among the phase differences corresponding to the sub-regions is to exclude the phase difference corresponding to the farthest scene in the preview image.
  • the phase difference threshold can be stored in the electronic device.
  • the phase difference that is greater than the phase difference threshold in the remaining phase difference set is divided into a background phase difference set, and the phase difference that is less than or equal to the phase difference threshold is divided into a foreground phase difference set.
  • the electronic device calculates the median phase difference according to the phase difference in the remaining phase difference set.
  • the phase difference greater than the median of the phase difference is divided into the background phase difference set, and the phase difference less than or equal to the median of the phase difference is divided into the foreground phase difference set.
  • the electronic device calculates the average value of the phase difference according to the phase difference in the remaining phase difference set.
  • the phase difference greater than the average phase difference is divided into a background phase difference set, and the phase difference less than or equal to the average phase difference is divided into a foreground phase difference set.
  • In the image processing method in this embodiment, the maximum phase difference among the phase differences corresponding to the sub-regions is excluded to obtain the remaining phase difference set, which eliminates the farthest background; the remaining phase difference
  • set is then divided into a foreground phase difference set and a background phase difference set, and focusing based on the mean values can improve the clarity of the image.
  • determining at least two target phase differences from the phase differences corresponding to each sub-region, and the at least two target phase differences include the foreground phase difference and the background phase difference, including: obtaining the phase difference of the at least two sub-regions The maximum phase difference and the minimum phase difference; the minimum phase difference is regarded as the foreground phase difference; the maximum phase difference is regarded as the background phase difference.
  • the area corresponding to the largest phase difference is the area corresponding to the most distant object.
  • the area corresponding to the smallest phase difference is the area corresponding to the nearest scene. There is usually a target subject in the area corresponding to the smallest phase difference.
  • In the image processing method in this embodiment, the maximum phase difference and the minimum phase difference among the phase differences of the at least two sub-regions are obtained, the minimum phase difference is used as the foreground phase difference, and the maximum phase difference is used as the background phase difference, so that only two images need to be synthesized, improving image processing efficiency while improving image clarity.
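  • The min/max variant of this embodiment reduces to a couple of lines; sketched here under the same flat-list assumption:

```python
def min_max_targets(pds):
    """Use the minimum phase difference as the target foreground phase
    difference and the maximum as the target background phase difference,
    so only two images need to be captured and synthesized."""
    return min(pds), max(pds)
```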
  • the electronic device includes an image sensor, the image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M*N pixel points arranged in an array; each pixel point corresponds to a photosensitive unit , Where both M and N are natural numbers greater than or equal to 2.
  • FIG. 5 is a schematic diagram of a part of the structure of an electronic device in an embodiment.
  • The electronic device may include a lens 502 and an image sensor 504, where the lens 502 may be composed of a series of lenses, and the image sensor 504 may be a complementary metal oxide semiconductor (CMOS) image sensor, a charge-coupled device (CCD), a quantum thin-film sensor, an organic sensor, or the like.
  • As shown in FIG. 6, the image sensor 504 may include a plurality of pixel point groups Z arranged in an array, where each pixel point group Z includes a plurality of pixel points D arranged in an array, and each pixel point D includes a plurality of sub-pixel points d arranged in an array.
  • For example, each pixel point group Z may include 4 pixel points D arranged in two rows and two columns, and each pixel point may include 4 sub-pixel points d arranged in two rows and two columns.
  • Each pixel point included in the image sensor 504 refers to a photosensitive unit, which may be composed of a plurality of photosensitive elements (that is, sub-pixels) arranged in an array, where a photosensitive element is an element that converts optical signals into electrical signals.
  • The photosensitive unit may further include a microlens and a filter, where the microlens is disposed on the filter and the filter is disposed on each photosensitive element included in the photosensitive unit; the filter may be red, green, or blue, each transmitting only light of the corresponding wavelength.
  • FIG. 7 is a schematic diagram of the structure of pixels in an embodiment. As shown in FIG. 7, taking each pixel point including sub-pixel point 1, sub-pixel point 2, sub-pixel point 3, and sub-pixel point 4 as an example, sub-pixel point 1 and sub-pixel point 2 can be combined, and sub-pixel point 3 and sub-pixel point 4 can be combined, to form a PD pixel pair in the up-down direction, which obtains the phase difference in the vertical direction and can detect horizontal edges; sub-pixel point 1 and sub-pixel point 3 can be combined, and sub-pixel point 2 and sub-pixel point 4 can be combined, to form a PD pixel pair in the left-right direction,
  • which obtains the phase difference in the horizontal direction and can detect vertical edges.
  • FIG. 8 is a schematic diagram of the internal structure of an image sensor in an embodiment.
  • the imaging device includes a lens and an image sensor.
  • the image sensor includes a micro lens 80, a filter 82 and a photosensitive unit 84.
  • the micro lens 80, the filter 82 and the photosensitive unit 84 are sequentially located on the incident light path, that is, the micro lens 80 is disposed on the filter 82, and the filter 82 is disposed on the photosensitive unit 84.
  • the filter 82 may include three types of red, green, and blue, and can only transmit light of corresponding wavelengths of red, green, and blue, respectively.
  • One filter 82 is arranged on one pixel point.
  • The microlens 80 is used to receive incident light and transmit it to the filter 82. After the filter 82 filters the incident light, the filtered light is incident on the photosensitive unit 84 on a per-pixel basis.
  • The photosensitive unit 84 in the image sensor converts the light incident from the filter 82 into a charge signal through the photoelectric effect and generates a pixel signal corresponding to the charge signal.
  • The charge signal corresponds to the received light intensity.
  • the pixels included in the image sensor and the pixels included in the image are two different concepts.
  • the pixels included in the image refer to the smallest component unit of the image, which is generally represented by a sequence of numbers.
  • the sequence of numbers can be referred to as the pixel value of a pixel.
  • the embodiments of the present application involve both concepts of "pixels included in an image sensor" and "pixels included in an image”. To facilitate readers' understanding, a brief explanation is provided here.
  • FIG. 9 shows a schematic diagram of an exemplary pixel point group Z.
  • The pixel point group Z includes 4 pixel points D arranged in two rows and two columns, where the color channel of the pixel in the first row and first column is green, that is, its filter is a green filter; the color channel of the pixel in the first row and second column
  • is red, that is, its filter is a red filter; the color channel of the pixel in the second row and first column is blue, that is, its filter is a blue filter; and the color channel of the pixel in the second row and second column is green, that is,
  • its filter is a green filter.
  • the electronic device includes an image sensor, and the image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes a plurality of pixel points arranged in an array.
  • FIG. 10 is a schematic diagram of the process of obtaining the phase difference corresponding to each sub-region in an embodiment. Obtaining the phase difference corresponding to each of the at least two sub-regions includes:
  • a target brightness map is obtained according to the brightness values of the pixel points included in each pixel point group.
  • the brightness value of the pixel of the image sensor can be characterized by the brightness value of the sub-pixel included in the pixel.
  • the electronic device can obtain the target brightness map according to the brightness values of the sub-pixels in the pixel points included in each pixel point group.
  • the "brightness value of a sub-pixel” refers to the brightness value of the light signal received by the sub-pixel.
  • the sub-pixel included in the image sensor is a photosensitive element that can convert light signals into electrical signals. Therefore, the electronic device can obtain the intensity of the light signal received by the sub-pixel according to the electrical signal output by the sub-pixel, and obtain the brightness value of the sub-pixel according to the intensity of the light signal received by the sub-pixel.
  • the target brightness map in the embodiment of the present application is used to reflect the brightness value of the sub-pixels in the image sensor.
  • The target brightness map may include multiple pixels, where the pixel value of each pixel in the target brightness map is obtained according to the brightness values of the sub-pixels in the image sensor.
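  • A sketch of building the target brightness map, assuming the raw sub-pixel brightness values arrive as a 2-D array in which each pixel point occupies a 2 x 2 block of sub-pixels; averaging each block is one plausible mapping, not the only one the text allows (the map may also keep sub-pixel resolution).

```python
import numpy as np

def target_brightness_map(raw_subpixels: np.ndarray) -> np.ndarray:
    """Build a target brightness map from sub-pixel brightness values by
    averaging each 2 x 2 sub-pixel block into one value per pixel point.
    Assumes the array height and width are even."""
    h, w = raw_subpixels.shape
    blocks = raw_subpixels.astype(np.float32).reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))
```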
  • segmentation processing is performed on the target brightness map, and the first segmented brightness map and the second segmented brightness map are obtained according to the results of the segmentation processing.
  • the electronic device may perform segmentation processing on the target brightness map along the direction of the column (the y-axis direction in the image coordinate system).
  • the first segmented brightness map and the second segmented brightness map obtained after the target brightness map is segmented along the column direction can be called the left image and the right image, respectively.
  • the electronic device may perform segmentation processing on the target brightness map along the row direction (the x-axis direction in the image coordinate system).
  • the first segmented brightness map and the second segmented brightness map obtained after the target brightness map is segmented in the direction of the row can be referred to as the upper image and the lower image, respectively.
  • the phase difference of the pixels that match each other is determined according to the position difference of the pixels that match each other in the first split brightness map and the second split brightness map.
  • the obtained first brightness segmentation map and the second brightness segmentation map are the upper and lower images.
  • the electronic device obtains the vertical phase difference according to the position difference of the matching pixels in the first brightness segmentation map and the second brightness segmentation map.
  • the obtained first brightness segmentation map and the second brightness segmentation map are the left and right images. Then, the electronic device obtains the horizontal phase difference according to the position difference of the matching pixels in the first brightness segmentation map and the second brightness segmentation map.
  • "Pixels that match each other" means that the pixel matrices composed of the pixels themselves and their surrounding pixels are similar to each other.
  • the pixel a and its surrounding pixels in the first segmented brightness map form a pixel matrix with 3 rows and 3 columns, and the pixel value of the pixel matrix is:
  • the pixel b and the surrounding pixels in the second segmented brightness map also form a pixel matrix with 3 rows and 3 columns, and the pixel value of the pixel matrix is:
  • the two matrices are similar, and it can be considered that the pixel a and the pixel b match each other.
  • A common method is to calculate the absolute difference between each pair of corresponding pixel values in the two pixel matrices and add these absolute values; the result of the addition is used to judge whether the pixel matrices are similar.
  • That is, if the result of the addition is less than a preset threshold, the pixel matrices are considered similar; otherwise, they are considered not similar.
  • For example, the differences of corresponding pixel values, such as 1 and 2, 10 and 10,
  • 90 and 90, and so on, are calculated, and the absolute
  • values of the differences are added; if the result of the addition, 3 in this example, is less than the preset threshold, the two pixel matrices with 3 rows and 3 columns are considered similar.
  • Another common method for judging whether pixel matrices are similar is to extract edge features using a Sobel convolution kernel or a Laplacian operator, and judge whether the matrices are similar by comparing the edge features.
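  • The sum-of-absolute-differences rule above can be sketched directly; the 3 x 3 window, search range, and threshold below are illustrative values, and the function names are hypothetical.

```python
import numpy as np

def sad(block_a: np.ndarray, block_b: np.ndarray) -> float:
    """Sum of absolute differences between two pixel matrices."""
    return float(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def match_along_row(first_map: np.ndarray, second_map: np.ndarray,
                    y: int, x: int, search: int = 8, threshold: float = 16.0):
    """Find the pixel in the second segmented brightness map matching pixel
    (y, x) of the first map by sliding a 3 x 3 window along the same row.
    Assumes (y, x) is at least one pixel away from the map borders.
    Returns the horizontal position difference of the best match below the
    threshold (a phase-difference estimate), or None if no window is similar."""
    block = first_map[y - 1:y + 2, x - 1:x + 2]
    best_dx, best_sad = None, threshold
    for dx in range(-search, search + 1):
        cx = x + dx
        if cx - 1 < 0 or cx + 2 > second_map.shape[1]:
            continue
        cost = sad(block, second_map[y - 1:y + 2, cx - 1:cx + 2])
        if cost < best_sad:
            best_dx, best_sad = dx, cost
    return best_dx
```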
  • the positional difference of pixels that match each other refers to the difference between the positions of the pixels in the first split brightness map and the positions of the pixels in the second split brightness map among the matched pixels.
  • the position difference between the pixel a and the pixel b that are matched with each other refers to the difference between the position of the pixel a in the first split brightness map and the position of the pixel b in the second split brightness map.
  • the pixels that match each other correspond to different images in the image sensor formed by the imaging light entering the lens from different directions.
  • For example, if the pixel a in the first split brightness map and the pixel b in the second split brightness map match each other, the pixel a may correspond to the image formed at position A in FIG. 3, and the pixel b may correspond to the image formed at position B in FIG. 3.
  • The phase difference of the matched pixels can thus be determined according to the position difference of the matched pixels.
  • the phase difference corresponding to each of the at least two sub-regions is determined according to the phase difference of the pixels that are matched with each other.
  • The electronic device determines the phase difference corresponding to each of the at least two sub-regions according to the phase differences of the mutually matched pixels.
  • the electronic device can obtain two phase differences corresponding to each sub-region according to the phase differences of the pixels that are matched with each other, which are the vertical phase difference and the horizontal phase difference, respectively.
  • the electronic device can obtain the vertical phase difference confidence level and the horizontal phase difference confidence level corresponding to each sub-region, determine the phase difference with the highest confidence level, and use the phase difference as a phase difference corresponding to each sub-region.
  • In the image processing method in this embodiment, a target brightness map is obtained according to the brightness values of the pixel points included in each pixel point group in the image sensor. After the target brightness map is obtained, segmentation processing is performed on it, and the first segmented brightness map and the second segmented brightness map are obtained according to the result of the segmentation processing. Then, the phase differences of mutually matched pixels are determined based on the position differences of those pixels in the first segmented brightness map and the second segmented brightness map, and the phase difference corresponding to each of the at least two sub-regions is determined from these phase differences. In this way, the phase difference can be determined from the brightness values of the pixel points included in each pixel point group in the image sensor.
  • the phase difference of the matched pixels in the embodiment of the present application contains relatively rich phase difference information.
  • The accuracy of the acquired phase difference can thus be improved, so that when focusing, a high-precision phase difference corresponding to the focus area can be obtained without hunting for the focus value peak, improving the focusing efficiency and the overall in-focus image synthesis efficiency.
  • In an embodiment, performing segmentation processing on the target brightness map and obtaining the first segmented brightness map and the second segmented brightness map according to the results of the segmentation processing includes:
  • Operation (b1): perform segmentation processing on the target brightness map to obtain multiple brightness map regions, where each brightness map region includes a row of pixels in the target brightness map, or each brightness map region includes a column of pixels in the target brightness map.
  • each luminance map area includes a row of pixels in the target luminance map.
  • each luminance map area includes a column of pixels in the target luminance map.
  • the electronic device may segment the target brightness map column by column along the row direction to obtain multiple pixel columns of the target brightness map.
  • the electronic device may segment the target brightness map row by row along the column direction to obtain multiple pixel rows of the target brightness map.
  • Operation (b2): obtain a plurality of first brightness map regions and a plurality of second brightness map regions from the multiple brightness map regions, where the first brightness map region includes pixels in even rows of the target brightness map, or the first brightness map region includes pixels in even columns of the target brightness map, and the second brightness map region includes pixels in odd rows of the target brightness map, or the second brightness map region includes pixels in odd columns of the target brightness map.
  • the first luminance map area includes pixels in even rows of the target luminance map.
  • the first luminance map area includes pixels in even-numbered columns in the target luminance map.
  • the second luminance map area includes pixels in odd rows in the target luminance map. Or, the second luminance map area includes pixels in odd-numbered columns in the target luminance map.
  • the electronic device may determine the even-numbered columns as the first brightness map area, and the odd-numbered columns as the second brightness map area.
  • the electronic device may determine even-numbered rows as the first brightness map area, and odd-numbered rows as the second brightness map area.
  • Operation (b3): form a first segmented brightness map from the plurality of first brightness map regions, and form a second segmented brightness map from the plurality of second brightness map regions.
  • FIG. 11 is a schematic diagram of performing segmentation processing on the target brightness map in the first direction in an embodiment.
  • FIG. 12 is a schematic diagram of performing segmentation processing on the target brightness map in the second direction in an embodiment.
  • As shown in FIG. 11, assuming the target brightness map includes 6 rows and 6 columns of pixels, when the target brightness map is segmented column by column, the target brightness map is segmented in the first direction.
  • the electronic device can determine the pixels in the first column, the pixels in the third column, and the pixels in the fifth column of the target brightness map as the first brightness map area, and can set the pixels in the second column, the fourth column and the sixth column of the target brightness map. Determined as the second brightness map area.
  • the electronic device may splice the first brightness map area to obtain a first split brightness map T1, which includes the first column, the third column, and the fifth column of the target brightness map.
  • the electronic device may splice the second brightness map regions to obtain a second segmented brightness map T2, which includes the second column of pixels, the fourth column of pixels, and the sixth column of pixels of the target brightness map.
  • As shown in FIG. 12, the target brightness map likewise includes 6 rows and 6 columns of pixels.
  • When the target brightness map is segmented row by row, the target brightness map is segmented in the second direction.
  • The electronic device can determine the pixels in the first row, the third row, and the fifth row of the target brightness map as the first brightness map area, and the pixels in the second row, the fourth row, and the sixth row as the second brightness map area. Then, the electronic device can splice the first brightness map areas to obtain the first segmented brightness map T3,
  • which includes the pixels in the first row, the third row, and the fifth row of the target brightness map.
  • the electronic device may splice the second brightness map regions to obtain a second segmented brightness map T4, which includes the second row of pixels, the fourth row of pixels, and the sixth row of pixels of the target brightness map.
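  • The interleaved split described above amounts to slicing the luminance array with a stride of 2. Below is a minimal NumPy sketch under that reading; the function name and the convention that the first sub-map starts at index 0 are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def split_luminance_map(lum, direction="column"):
    """Split the target brightness map into two interleaved segmented maps.

    direction="column": first direction, alternating columns (T1/T2 in FIG. 11).
    direction="row":    second direction, alternating rows (T3/T4 in FIG. 12).
    """
    if direction == "column":
        return lum[:, 0::2], lum[:, 1::2]  # columns 1,3,5... and 2,4,6... (1-indexed)
    return lum[0::2, :], lum[1::2, :]      # rows 1,3,5... and 2,4,6... (1-indexed)

# usage: t1, t2 = split_luminance_map(np.arange(36).reshape(6, 6), "column")
```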
  • The image processing method of this embodiment does not need to shield pixels to obtain the phase difference; it obtains relatively rich phase difference information by means of brightness segmentation, which improves the accuracy of the obtained phase difference.
  • The electronic device includes an image sensor comprising a plurality of pixel point groups arranged in an array, each pixel point group including M*N pixel points arranged in an array; each pixel point corresponds to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2.
  • The phase difference corresponding to each sub-region includes a horizontal phase difference and a vertical phase difference. Obtaining the phase difference corresponding to each of the at least two sub-regions includes: when the sub-region is detected to contain horizontal lines, using the vertical phase difference as the phase difference corresponding to the sub-region; when the sub-region does not contain horizontal lines, using the horizontal phase difference as the phase difference corresponding to the sub-region.
  • Lines parallel to the phase-difference direction may cause problems such as smearing.
  • When it is detected that the sub-region contains horizontal lines, the vertical phase difference is used as the phase difference corresponding to the sub-region; when it is detected that the sub-region contains vertical lines, the horizontal phase difference is used. This improves the accuracy of phase difference acquisition and thereby improves image clarity.
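  • The patent does not specify how lines are detected. One plausible reading is an orientation check on image gradients; the sketch below uses a Sobel-based ratio test, where the function name and the ratio threshold are assumptions for illustration.

```python
import cv2
import numpy as np

def select_phase_difference(subregion, pd_horizontal, pd_vertical, ratio=1.5):
    """Pick the phase difference whose direction crosses the dominant lines."""
    gx = cv2.Sobel(subregion, cv2.CV_32F, 1, 0)  # responds to vertical edges
    gy = cv2.Sobel(subregion, cv2.CV_32F, 0, 1)  # responds to horizontal lines
    if np.abs(gy).mean() > ratio * np.abs(gx).mean():
        return pd_vertical   # horizontal lines dominate -> use vertical PD
    return pd_horizontal     # otherwise fall back to the horizontal PD
```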
  • Focusing according to each target phase difference to obtain the image corresponding to each target phase difference includes: using the sub-region corresponding to each target phase difference as the focus area to obtain the image corresponding to that target phase difference.
  • the electronic device uses a sub-area corresponding to each target phase difference of the at least two target phase differences as a focus area, and obtains an image corresponding to each target phase difference.
  • at least two target phase differences include target phase difference A, target phase difference B, and target phase difference C.
  • the sub-area corresponding to the target phase difference A is used as the focus area, and focus is performed to obtain an image corresponding to the target phase difference A.
  • the sub-area corresponding to the target phase difference B is used as the focus area, and focus is performed to obtain an image corresponding to the target phase difference B.
  • the sub-region corresponding to the target phase difference C is used as the focus area, and focus is performed to obtain an image corresponding to the target phase difference C. That is, a total of three images are obtained.
  • the image processing method in this embodiment uses the sub-region corresponding to each target phase difference as a focus area to obtain an image corresponding to each target phase difference, and can obtain images with different focal points for synthesis, thereby improving image clarity.
  • Synthesizing the images corresponding to each target phase difference to obtain a fully in-focus image includes: dividing the image corresponding to each target phase difference into the same number of sub-image areas; obtaining the definition (sharpness) corresponding to each sub-image area; determining, according to those definitions, the sub-image area with the highest definition among the mutually matching sub-image areas; and stitching and combining the highest-definition sub-image areas to obtain a fully in-focus image.
  • the sub-image areas that match each other refer to sub-image areas located at the same position in different images.
  • the electronic device divides the image corresponding to each target phase difference into the same number of sub-image areas.
  • the electronic device obtains the definition corresponding to each sub-image area in the image corresponding to each target phase difference.
  • the electronic device determines the sub-image area with the highest definition among the matched sub-image areas according to the definition corresponding to each sub-image area of each image.
  • the electronic device synthesizes all the sub-image areas with the highest definition to obtain a fully in-focus image.
  • the target phase difference A corresponds to image A, and image A is divided into sub-image area 1, sub-image area 2, sub-image area 3, and sub-image area 4.
  • the target phase difference B corresponds to image B, which is divided into sub-image area α, sub-image area β, sub-image area γ, and sub-image area δ.
  • The sub-image area 1 is located at the upper left corner of image A, and the sub-image area α is located at the upper left corner of image B, so sub-image area 1 matches sub-image area α, and so on. If sub-image area 1, sub-image area β, sub-image area γ, and sub-image area 4 are the ones with the highest definition, the electronic device splices and synthesizes them to obtain a fully in-focus image.
  • The image corresponding to each target phase difference is divided into the same number of sub-image areas; the definition corresponding to each sub-image area is obtained; the sub-image area with the highest definition among the mutually matching sub-image areas is determined according to those definitions; and the highest-definition sub-image areas are synthesized to obtain a fully in-focus image. This allows a fully in-focus image to be obtained quickly, improving image processing efficiency.
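  • A minimal sketch of this stitching strategy follows. The patent does not define the definition (sharpness) measure; the variance of the Laplacian is used here as a common stand-in, and the tiling scheme and function name are illustrative assumptions.

```python
import cv2
import numpy as np

def stitch_sharpest(images, n, m):
    """Tile each image into n*m sub-areas and keep the sharpest tile at each position."""
    def sharpness(tile):
        return cv2.Laplacian(tile, cv2.CV_32F).var()  # variance of Laplacian

    h, w = images[0].shape[:2]
    out = np.zeros_like(images[0])
    for i in range(n):
        for j in range(m):
            ys = slice(i * h // n, (i + 1) * h // n)
            xs = slice(j * w // m, (j + 1) * w // m)
            best = max(images, key=lambda im: sharpness(im[ys, xs]))
            out[ys, xs] = best[ys, xs]  # matching sub-areas share the same position
    return out
```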
  • FIG. 13 is a schematic flowchart of synthesizing a fully in-focus image in an embodiment. Synthesizing the images corresponding to each target phase difference to obtain a fully in-focus image includes:
  • Operation 1302: perform convolution and sampling processing on the image corresponding to each target phase difference; when a preset iteration condition is met, the Gaussian pyramid of the image corresponding to each target phase difference is obtained.
  • the Gaussian pyramid is a kind of image pyramid. Except for the bottom layer image, the other layer images in the pyramid are all obtained by convolving and sampling the previous layer image. Gaussian pyramids can be used to obtain low-frequency images.
  • the low-frequency image can refer to the contour image in the image.
  • the iteration condition may mean reaching a preset number of times or reaching a preset time, etc., and is not limited thereto.
  • the image corresponding to each target phase difference has a corresponding Gaussian pyramid. For example, the image A corresponding to the target phase difference corresponds to the Gaussian pyramid A, and the image B corresponding to the target phase difference corresponds to the Gaussian pyramid B.
  • The electronic device uses a Gaussian kernel to convolve the image corresponding to each target phase difference and samples the convolved result to obtain each layer. That is, the image corresponding to each target phase difference (set as G0) is convolved and sampled to obtain the next low-frequency image G1; G1 is then convolved and sampled to obtain G2, and so on, until the preset iteration condition is met, for example after 5 iterations, yielding image G5 and thus a Gaussian pyramid of multiple low-frequency images for each target phase difference.
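  • With OpenCV, the blur-and-downsample step is exactly what cv2.pyrDown performs; the sketch below assumes a fixed number of levels as the iteration condition, matching the G0..G5 example.

```python
import cv2
import numpy as np

def gaussian_pyramid(image, levels=5):
    """Build G0..G{levels}: each layer is the previous one convolved and downsampled."""
    pyramid = [image.astype(np.float32)]  # float avoids clipping in later arithmetic
    for _ in range(levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))  # Gaussian blur + 2x downsample
    return pyramid
```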
  • Processing is then performed on each layer of the Gaussian pyramid of the image corresponding to each target phase difference, to obtain the Laplacian pyramid of the image corresponding to each target phase difference.
  • The Laplacian pyramid (LP) is defined in terms of the Gaussian pyramid: each of its layers is the difference between a Gaussian layer and the upsampled version of the next, coarser Gaussian layer.
  • the image corresponding to each target phase difference has a corresponding Laplacian pyramid.
  • the image A corresponding to the target phase difference corresponds to the Laplacian Pyramid A
  • the image B corresponding to the target phase difference corresponds to the Laplacian Pyramid B.
  • Each layer of the Laplace Pyramid represents different scales and details. Among them, the details can be regarded as frequency.
  • the electronic device obtains the high-frequency image by subtracting the up-sampled low-frequency image from the original image.
  • This yields the layers L1, L2, L3, L4, and so on, from which the Laplacian pyramid of the image corresponding to each target phase difference is obtained.
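  • Continuing the sketch above, a Laplacian pyramid can be derived from a Gaussian pyramid as follows; keeping the coarsest Gaussian layer at the top is a standard convention assumed here so the pyramid can be collapsed later.

```python
import cv2

def laplacian_pyramid(gauss):
    """L_i = G_i - upsample(G_{i+1}); the top level keeps the coarsest Gaussian layer."""
    pyramid = []
    for fine, coarse in zip(gauss[:-1], gauss[1:]):
        up = cv2.pyrUp(coarse, dstsize=(fine.shape[1], fine.shape[0]))
        pyramid.append(fine - up)  # high-frequency detail at this scale
    pyramid.append(gauss[-1])      # low-frequency residual
    return pyramid
```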
  • the Laplacian pyramid of the image corresponding to the phase difference of each target is fused to obtain a fused Laplacian pyramid.
  • the electronic device obtains the weight of the image corresponding to each target phase difference, and performs fusion according to those weights and the Laplacian pyramid of each image, to obtain the fused Laplacian pyramid.
  • The fusion formula, shown here for the sixth layer counted from the bottom of two images, is as follows:

    L5_fused = Weight_1 × L5_1 + Weight_2 × L5_2

  where L5_fused is the sixth layer (from the bottom up) of the fused Laplacian pyramid, Weight_1 and Weight_2 are the weights of image 1 and image 2, and L5_1 and L5_2 are the sixth layers (from the bottom up) of the Laplacian pyramids of image 1 and image 2.
  • The weight of each image can be adjusted according to parameters such as depth of field and degree of blur. For example, regions with a low degree of blur receive a large weight and regions with a high degree of blur receive a small weight; regions with a small depth of field receive a large weight and regions with a large depth of field receive a small weight.
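  • A per-level weighted sum implementing the formula above might look like the following sketch; scalar weights are used for brevity, though per-pixel weight maps derived from blur or depth estimates would slot in the same way.

```python
def fuse_laplacian_pyramids(pyramids, weights):
    """Fuse same-shaped Laplacian pyramids level by level: sum_k Weight_k * L_k."""
    return [sum(w * layer for w, layer in zip(weights, level))
            for level in zip(*pyramids)]  # zip(*...) groups matching levels

# usage: fused = fuse_laplacian_pyramids([lap_a, lap_b], weights=[0.6, 0.4])
```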
  • reconstruction processing is performed according to the fused Laplacian pyramid to obtain a fully in-focus image.
  • The electronic device reconstructs from the top layer down. G5(fusion) is obtained by fusing the G5 layers of the Gaussian pyramids corresponding to each target phase difference, and L5(fusion) is the L5 layer of the fused Laplacian pyramid. Reconstruction then proceeds layer by layer: each reconstructed layer R(i)(fusion) is obtained by upsampling the reconstructed layer above it and adding the fused Laplacian layer L(i)(fusion), for example R4(fusion) from R5(fusion) and L4(fusion), until R0, the final synthesis result, that is, the fully in-focus image, is obtained.
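  • The top-down collapse can be sketched as follows, continuing the earlier pyramid helpers; starting from the fused coarsest level and adding one fused Laplacian layer per step is the reading assumed here.

```python
import cv2

def reconstruct(fused):
    """Collapse a fused Laplacian pyramid: R_top = top level, R_i = pyrUp(R_{i+1}) + L_i."""
    result = fused[-1]  # coarsest (lowest-frequency) fused level
    for layer in reversed(fused[:-1]):
        result = cv2.pyrUp(result, dstsize=(layer.shape[1], layer.shape[0])) + layer
    return result  # R0: the fully in-focus image
```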
  • In this way, convolution and sampling processing is performed on the image corresponding to each target phase difference until the preset iteration condition is met, yielding the Gaussian pyramid of each image; each layer of each Gaussian pyramid is processed to obtain the Laplacian pyramid of each image; the Laplacian pyramids are fused to obtain the fused Laplacian pyramid; and reconstruction is performed on the fused Laplacian pyramid to obtain a fully in-focus image. The image can thus be synthesized from low-frequency contours and high-frequency details, which makes the boundaries between regions more natural and improves the authenticity and clarity of the image.
  • FIG. 14 is a schematic flowchart of synthesizing a fully in-focus image in another embodiment. Synthesizing the images corresponding to each target phase difference to obtain a fully in-focus image includes:
  • FIG. 15 is a schematic flowchart of synthesizing to obtain a full in-focus image in another embodiment.
  • The electronic device uses a convolutional neural network to perform convolution processing and feature extraction on the image corresponding to each target phase difference. For example, in FIG. 15, image 1 undergoes convolution → feature extraction → convolution, and image 2 likewise undergoes convolution → feature extraction → convolution.
  • the features of the image corresponding to each target phase difference are fused to obtain the first image feature.
  • the electronic device fuses the features of the image corresponding to each target phase difference, and calculates the activation function to obtain the first image feature.
  • the image corresponding to each target phase difference is averaged to obtain an average image.
  • the electronic device performs averaging processing on the brightness value of the image corresponding to each target phase difference to obtain an average image.
  • Operation 1408: perform feature extraction according to the average image and the first image feature to obtain the second image feature.
  • the electronic device performs feature extraction according to the average image and the first image feature to obtain the second image feature.
  • Operation 1410: perform feature reconstruction according to the second image feature and the average image to obtain a fully in-focus image.
  • the electronic device performs feature reconstruction according to the second image feature and the average image to obtain a fully in-focus image.
  • The image processing method of this embodiment extracts the features of the image corresponding to each target phase difference, fuses those features to obtain the first image feature, averages the images corresponding to each target phase difference to obtain an average image, performs feature extraction based on the average image and the first image feature to obtain the second image feature, and performs feature reconstruction based on the second image feature and the average image to obtain a fully in-focus image. In this way, a neural network can be used to synthesize the images, improving the accuracy and clarity of the synthesis.
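  • The patent gives only the data flow, not the network. The PyTorch sketch below mirrors that flow (per-image feature extraction, fusion with an activation, averaging, joint extraction, reconstruction); every layer size, the use of summation for fusion, and the residual connection to the average image are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FullFocusFusionNet(nn.Module):
    """Minimal sketch of the FIG. 14/15 flow for grayscale inputs."""

    def __init__(self, ch=16):
        super().__init__()
        self.extract = nn.Sequential(            # convolution -> feature extraction -> convolution
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.second = nn.Sequential(             # operation 1408: extract from avg + first feature
            nn.Conv2d(ch + 1, ch, 3, padding=1), nn.ReLU())
        self.reconstruct = nn.Conv2d(ch, 1, 3, padding=1)  # operation 1410

    def forward(self, images):                   # images: list of (B, 1, H, W) tensors
        feats = [self.extract(im) for im in images]
        first = torch.relu(torch.stack(feats).sum(dim=0))  # fuse features + activation
        avg = torch.stack(images).mean(dim=0)              # operation 1406: average image
        second = self.second(torch.cat([first, avg], dim=1))
        return avg + self.reconstruct(second)              # reconstruct around the average
```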
  • acquiring the preview image includes: acquiring the region of interest in the preview image; and dividing the preview image into at least two subregions includes: dividing the region of interest in the preview image into at least two subregions.
  • In image processing, the region of interest refers to the area to be processed, outlined on the image in the form of a box, circle, ellipse, irregular polygon, or the like.
  • the area of interest can include background and objects.
  • the electronic device receives the trigger instruction on the first preview image, and obtains the region of interest selected by the user according to the trigger instruction.
  • the electronic device divides the region of interest into at least two sub-regions.
  • the electronic device may divide the region of interest selected by the user into N×N sub-regions.
  • Alternatively, the electronic device may divide the region of interest selected by the user into N×M sub-regions, and so on, without being limited thereto; N and M are both positive integers.
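  • Dividing the user-selected region into an N×M grid is a simple slicing exercise; in the sketch below, the (x, y, width, height) ROI format and the function name are assumptions for illustration, and the same tiling applies when dividing the whole preview image.

```python
import numpy as np

def divide_roi(image, roi, n, m):
    """Crop the region of interest and split it into n x m sub-regions."""
    x, y, w, h = roi                      # assumed ROI format: top-left corner + size
    patch = image[y:y + h, x:x + w]
    rows = np.array_split(patch, n, axis=0)
    return [np.array_split(row, m, axis=1) for row in rows]  # n lists of m tiles

# usage: tiles = divide_roi(preview, roi=(40, 40, 200, 200), n=3, m=3)
```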
  • The image processing method of this embodiment obtains the region of interest in the preview image and divides it into at least two sub-regions, so that focusing can be performed according to the region of interest. This ensures that the scene in the region of interest is clear and improves its clarity in the fully in-focus image.
  • determining the at least two target phase differences according to the phase difference corresponding to each sub-region includes: acquiring a scene mode; and determining the at least two target phase differences according to the scene mode.
  • each scene mode can correspond to different types of target phase differences.
  • the scene mode may be a night scene mode, a panoramic mode, etc., and is not limited thereto.
  • For example, the target phase differences corresponding to scene mode A are a foreground phase difference and a background phase difference, while the target phase differences corresponding to scene mode B are a foreground phase difference, a median phase difference, and a background phase difference, and so on, without being limited thereto.
  • The image processing method of this embodiment obtains the scene mode and determines at least two target phase differences according to it. The target phase differences can thus be determined quickly for different scene modes, achieving the effect appropriate to each scene and improving image processing efficiency and image clarity.
  • Fig. 16 is a structural block diagram of an image processing apparatus according to an embodiment.
  • an image processing device includes a preview image acquisition module 1602, a division module 1604, a phase difference acquisition module 1606, a focus module 1608, and a synthesis module 1610, in which:
  • the preview image acquisition module 1602 is used to acquire a preview image.
  • the dividing module 1604 is configured to divide the preview image into at least two sub-areas.
  • the phase difference acquiring module 1606 is configured to acquire the phase difference corresponding to each of the at least two sub-regions.
  • the phase difference acquisition module 1606 is further configured to determine at least two target phase differences from the phase differences corresponding to each sub-region, and the at least two target phase differences include the target foreground phase difference and the target background phase difference.
  • the focusing module 1608 is used for focusing according to the phase difference of each target to obtain an image corresponding to the phase difference of each target.
  • the synthesizing module 1610 is used to synthesize the images corresponding to each target phase difference to obtain a fully in-focus image.
  • The image processing device of this embodiment obtains a preview image, divides it into at least two sub-regions, obtains the phase difference corresponding to each sub-region, and determines at least two target phase differences, including the target foreground phase difference and the target background phase difference, from those phase differences. Focusing according to each target phase difference yields the image corresponding to each target phase difference, so at least two images at different focal points can be acquired: one a background in-focus image and the other a foreground in-focus image. Synthesizing the images corresponding to each target phase difference produces a fully in-focus image with less out-of-focus area, improving image clarity.
  • the phase difference obtaining module 1606 is configured to divide the phase differences corresponding to the at least two sub-regions into a foreground phase difference set and a background phase difference set; obtain the first phase difference mean corresponding to the foreground set; obtain the second phase difference mean corresponding to the background set; use the first mean as the target foreground phase difference; and use the second mean as the target background phase difference.
  • The image processing device of this embodiment divides the phase differences corresponding to the at least two sub-regions into a foreground phase difference set and a background phase difference set, obtains the first phase difference mean of the foreground set and the second phase difference mean of the background set, and uses the first mean as the target foreground phase difference and the second as the target background phase difference. Obtaining the foreground in-focus image and the background in-focus image according to these means improves image clarity.
  • the phase difference acquisition module 1606 is configured to exclude the maximum phase difference among the phase differences corresponding to the sub-regions to obtain a remaining phase difference set, and to divide the remaining set into a foreground phase difference set and a background phase difference set.
  • Excluding the largest phase difference among those corresponding to the sub-regions eliminates the farthest background; the remaining set is then divided into a foreground phase difference set and a background phase difference set, and focusing based on their mean values can improve image clarity.
  • the phase difference obtaining module 1606 is configured to obtain the maximum phase difference and the minimum phase difference among the phase differences of at least two sub-regions; the minimum phase difference is regarded as the foreground phase difference; and the maximum phase difference is regarded as the background phase difference.
  • The image processing device of this embodiment obtains the maximum and minimum phase differences among the phase differences of the at least two sub-regions and uses the minimum as the foreground phase difference and the maximum as the background phase difference, so that only two images need to be synthesized, improving image processing efficiency while improving image clarity.
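  • A compact sketch of these target-phase-difference strategies follows. The patent does not say how the remaining phase differences are partitioned into foreground and background sets; a median split is assumed here purely for illustration, as is the convention (stated elsewhere in the document) that smaller phase differences correspond to the foreground.

```python
import numpy as np

def target_phase_differences(pds, exclude_farthest=True):
    """Return (target foreground PD, target background PD) from per-sub-region PDs."""
    pds = np.asarray(pds, dtype=float)
    if exclude_farthest:
        pds = np.delete(pds, pds.argmax())      # drop the farthest background
    split = np.median(pds)                      # assumed foreground/background split
    foreground, background = pds[pds <= split], pds[pds > split]
    if background.size == 0:                    # degenerate case: flat scene
        return float(pds.mean()), float(pds.mean())
    return float(foreground.mean()), float(background.mean())
```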
  • The electronic device includes an image sensor comprising a plurality of pixel point groups arranged in an array, each pixel point group including M*N pixel points arranged in an array; each pixel point corresponds to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2.
  • the phase difference obtaining module 1606 is configured to obtain the target brightness map according to the brightness values of the pixels included in each pixel point group; perform segmentation processing on the target brightness map and obtain the first segmented brightness map and the second segmented brightness map from the results; determine the phase differences of mutually matching pixels according to their position differences in the first and second segmented brightness maps; and determine the phase difference corresponding to each of the at least two sub-regions according to the phase differences of the matching pixels.
  • The image processing device of this embodiment obtains a target brightness map according to the brightness values of the pixels included in each pixel point group of the image sensor, segments the target brightness map to obtain the first and second segmented brightness maps, determines the phase differences of mutually matching pixels based on their position differences in the two segmented maps, and then determines the phase difference corresponding to each of the at least two sub-regions from those phase differences. In this way, the brightness values of the pixels in each pixel point group can be used to determine the phase differences; since the phase differences of the matched pixels contain relatively rich phase difference information, the accuracy of the obtained phase differences can be improved.
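  • The patent does not specify the matching procedure; one common way to estimate the position difference between the two segmented maps is a 1-D cross-correlation search per row, sketched below with an assumed maximum search range and no sub-pixel refinement.

```python
import numpy as np

def row_phase_differences(first_map, second_map, max_shift=8):
    """Estimate one shift (phase difference) per row between the segmented maps."""
    shifts = np.arange(-max_shift, max_shift + 1)
    result = np.zeros(first_map.shape[0])
    for r in range(first_map.shape[0]):
        a = first_map[r] - first_map[r].mean()
        b = second_map[r] - second_map[r].mean()
        scores = [np.dot(a, np.roll(b, s)) for s in shifts]  # np.roll wraps around;
        result[r] = shifts[int(np.argmax(scores))]           # fine for a coarse sketch
    return result
```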
  • the phase difference acquisition module 1606 is configured to segment the target brightness map into multiple brightness map regions, each including one row or one column of pixels of the target brightness map; obtain multiple first brightness map regions and multiple second brightness map regions from them, the first brightness map regions including the pixels in even-numbered rows (or even-numbered columns) of the target brightness map and the second brightness map regions including the pixels in odd-numbered rows (or odd-numbered columns); compose the first segmented brightness map from the multiple first brightness map regions; and compose the second segmented brightness map from the multiple second brightness map regions.
  • the image processing device in the embodiment of the present application does not need to shield the pixels to obtain the phase difference, and obtains relatively rich phase difference information by means of brightness segmentation, which improves the accuracy of the obtained phase difference.
  • the phase difference acquisition module 1606 is configured to use the vertical phase difference as the phase difference corresponding to a sub-region when the sub-region is detected to contain horizontal lines, and to use the horizontal phase difference when the sub-region is detected not to contain horizontal lines.
  • Lines parallel to the phase-difference direction may cause problems such as smearing.
  • When it is detected that the sub-region contains horizontal lines, the vertical phase difference is used as the phase difference corresponding to the sub-region; when it is detected that the sub-region contains vertical lines, the horizontal phase difference is used. This improves the accuracy of phase difference acquisition and thereby improves image clarity.
  • the focusing module 1608 is configured to use the sub-area corresponding to each target phase difference as a focus area to obtain an image corresponding to each target phase difference.
  • the image processing device in this embodiment uses the sub-area corresponding to each target phase difference as a focus area to obtain an image corresponding to each target phase difference, and can obtain images with different focal points for synthesis, thereby improving image clarity.
  • the synthesis module 1610 is configured to divide the image corresponding to each target phase difference into the same number of sub-image areas; obtain the definition corresponding to each sub-image area; determine, according to those definitions, the sub-image area with the highest definition among the matching sub-image areas; and synthesize the highest-definition sub-image areas to obtain a fully in-focus image.
  • The image processing device of this embodiment divides the image corresponding to each target phase difference into the same number of sub-image areas, obtains the definition corresponding to each sub-image area, determines the sub-image area with the highest definition among the mutually matching sub-image areas according to those definitions, and synthesizes the highest-definition sub-image areas to obtain a fully in-focus image. This allows a fully in-focus image to be obtained quickly, improving image processing efficiency.
  • the synthesis module 1610 is configured to perform convolution and sampling processing on the image corresponding to each target phase difference and, when the preset iteration condition is met, obtain the Gaussian pyramid of each image; process each layer of each Gaussian pyramid to obtain the Laplacian pyramid of each image; fuse the Laplacian pyramids to obtain the fused Laplacian pyramid; and perform reconstruction on the fused Laplacian pyramid to obtain a fully in-focus image.
  • The image processing device of this embodiment performs convolution and sampling processing on the image corresponding to each target phase difference until the preset iteration condition is met, obtaining the Gaussian pyramid of each image; processes each layer of each Gaussian pyramid to obtain the Laplacian pyramid of each image; fuses the Laplacian pyramids to obtain the fused Laplacian pyramid; and performs reconstruction on the fused Laplacian pyramid to obtain a fully in-focus image, which makes the boundaries between regions more natural and improves the authenticity and clarity of the image.
  • the synthesis module 1610 is configured to extract the features of the image corresponding to each target phase difference; fuse those features to obtain the first image feature; average the images corresponding to each target phase difference to obtain an average image; perform feature extraction based on the average image and the first image feature to obtain the second image feature; and perform feature reconstruction based on the second image feature and the average image to obtain a fully in-focus image.
  • The image processing device of this embodiment extracts the features of the image corresponding to each target phase difference, fuses those features to obtain the first image feature, averages the images corresponding to each target phase difference to obtain an average image, performs feature extraction based on the average image and the first image feature to obtain the second image feature, and performs feature reconstruction based on the second image feature and the average image to obtain a fully in-focus image. In this way, a neural network can be used to synthesize the images, improving the accuracy and clarity of the synthesis.
  • the preview image acquisition module 1602 is used to acquire the region of interest in the preview image.
  • the dividing module 1604 is configured to divide the region of interest in the preview image into at least two sub-regions.
  • The image processing device of this embodiment obtains the region of interest in the preview image and divides it into at least two sub-regions, so that focusing can be performed according to the region of interest. This ensures that the scene in the region of interest is clear and improves its clarity in the fully in-focus image.
  • the phase difference obtaining module 1606 is used to obtain a scene mode; and determine at least two target phase differences according to the scene mode.
  • The image processing device of this embodiment obtains the scene mode and determines at least two target phase differences according to it. The target phase differences can thus be determined quickly for different scene modes, achieving the effect appropriate to each scene and improving image processing efficiency and image clarity.
  • the division of the modules in the above-mentioned image processing apparatus is only for illustration. In other embodiments, the image processing apparatus may be divided into different modules as required to complete all or part of the functions of the above-mentioned image processing apparatus.
  • Each module in the above-mentioned image processing device may be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in hardware form in, or independent of, the processor in the computer equipment, or may be stored in software form in the memory of the computer equipment, so that the processor can call and execute the operations corresponding to the above modules.
  • Fig. 17 is a schematic diagram of the internal structure of an electronic device in an embodiment.
  • the electronic device includes a processor and a memory connected through a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • the memory may include a non-volatile storage medium and internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the computer program can be executed by the processor to implement an image processing method provided in the following embodiments.
  • the internal memory provides a cached running environment for the operating system and the computer program in the non-volatile storage medium.
  • the electronic device can be a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
  • each module in the image processing apparatus provided in the embodiment of the present application may be in the form of a computer program.
  • the computer program can be run on a terminal or a server.
  • the program module composed of the computer program can be stored in the memory of the terminal or the server.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • A computer program product containing instructions that, when run on a computer, cause the computer to execute an image processing method.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM), which acts as external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous Link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).

Abstract

An image processing method, comprising: acquiring a preview image; dividing the preview image into at least two sub-regions; acquiring a phase difference corresponding to each sub-region of the at least two sub-regions; determining at least two target phase differences from the phase differences corresponding to each sub-region, wherein the at least two target phase differences include a target foreground phase difference and a target background phase difference; focusing according to each target phase difference to obtain an image corresponding to each target phase difference; and combining the images corresponding to each target phase difference to obtain a fully in-focus image.

Description

Image processing method and apparatus, electronic device, and computer-readable storage medium

Cross-reference to related applications

This application claims priority to the Chinese patent application filed with the Chinese Patent Office on November 12, 2019, with application number 201911101432.0 and entitled "Image processing method and apparatus, electronic device, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical field

This application relates to the field of image processing technology, and in particular to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background

The current focusing method focuses within a rectangular frame. The rectangular frame covers both foreground and background, while focusing can only bring one position into focus. When the foreground is in focus, the background is out of focus; when the background is in focus, the foreground is out of focus. Traditional image processing methods therefore suffer from low image clarity.
Summary of the invention

According to various embodiments of the present application, an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium are provided.
An image processing method, applied to an electronic device, includes:

acquiring a preview image;

dividing the preview image into at least two sub-regions;

acquiring a phase difference corresponding to each of the at least two sub-regions;

determining at least two target phase differences from the phase differences corresponding to each sub-region, the at least two target phase differences including a target foreground phase difference and a target background phase difference;

focusing according to each target phase difference to obtain an image corresponding to each target phase difference; and

synthesizing the images corresponding to each target phase difference to obtain a fully in-focus image.
An image processing apparatus includes:

a preview image acquisition module, configured to acquire a preview image;

a dividing module, configured to divide the preview image into at least two sub-regions;

a phase difference acquisition module, configured to acquire the phase difference corresponding to each of the at least two sub-regions, and further configured to determine at least two target phase differences, including a target foreground phase difference and a target background phase difference, from the phase differences corresponding to each sub-region;

a focusing module, configured to perform focusing according to each target phase difference to obtain an image corresponding to each target phase difference; and

a synthesis module, configured to synthesize the images corresponding to each target phase difference to obtain a fully in-focus image.
An electronic device includes a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the following steps: acquiring a preview image; dividing the preview image into at least two sub-regions; acquiring a phase difference corresponding to each of the at least two sub-regions; determining at least two target phase differences, including a target foreground phase difference and a target background phase difference, from the phase differences corresponding to each sub-region; focusing according to each target phase difference to obtain an image corresponding to each target phase difference; and synthesizing the images corresponding to each target phase difference to obtain a fully in-focus image.
A computer-readable storage medium stores a computer program that, when executed by a processor, implements the following steps: acquiring a preview image; dividing the preview image into at least two sub-regions; acquiring a phase difference corresponding to each of the at least two sub-regions; determining at least two target phase differences, including a target foreground phase difference and a target background phase difference, from the phase differences corresponding to each sub-region; focusing according to each target phase difference to obtain an image corresponding to each target phase difference; and synthesizing the images corresponding to each target phase difference to obtain a fully in-focus image.
With the above image processing method and apparatus, electronic device, and computer-readable storage medium, a preview image is acquired and divided into at least two sub-regions; the phase difference corresponding to each of the at least two sub-regions is acquired; at least two target phase differences, including a target foreground phase difference and a target background phase difference, are determined from those phase differences; and focusing is performed according to each target phase difference to obtain the corresponding image. In this way, at least two images at different focal points can be acquired, one a background in-focus image and one a foreground in-focus image. Synthesizing the images corresponding to each target phase difference yields a fully in-focus image with less out-of-focus area, improving image clarity.
Description of the drawings

In order to describe the technical solutions in the embodiments of the present application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
Fig. 1 is an application environment diagram of an image processing method in an embodiment.

Fig. 2 is a flowchart of an image processing method in an embodiment.

Fig. 3 is a schematic diagram of the principle of phase focusing in an embodiment.

Fig. 4 is a schematic diagram of phase detection pixel points arranged in pairs among the pixel points included in an image sensor in an embodiment.

Fig. 5 is a schematic diagram of part of the structure of an electronic device in an embodiment.

Fig. 6 is a schematic structural diagram of a part of the image sensor 504 in an embodiment.

Fig. 7 is a schematic diagram of the structure of a pixel point in an embodiment.

Fig. 8 is a schematic diagram of the internal structure of an image sensor in an embodiment.

Fig. 9 is a schematic diagram of the pixel point group Z in an embodiment.

Fig. 10 is a schematic flowchart of obtaining the phase difference corresponding to each sub-region in an embodiment.

Fig. 11 is a schematic diagram of segmenting the target brightness map in the first direction in an embodiment.

Fig. 12 is a schematic diagram of segmenting the target brightness map in the second direction in an embodiment.

Fig. 13 is a schematic flowchart of synthesizing a fully in-focus image in an embodiment.

Fig. 14 is a schematic flowchart of synthesizing a fully in-focus image in another embodiment.

Fig. 15 is a schematic flowchart of synthesizing a fully in-focus image in yet another embodiment.

Fig. 16 is a structural block diagram of an image processing apparatus according to an embodiment.

Fig. 17 is a schematic diagram of the internal structure of an electronic device in an embodiment.
Detailed description

In order to make the purpose, technical solutions, and advantages of this application clearer, the application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application, not to limit it.

It can be understood that the terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms; the terms are only used to distinguish one piece of data from another. For example, without departing from the scope of the present application, the first phase difference mean may be referred to as the second phase difference mean and, similarly, the second phase difference mean may be referred to as the first phase difference mean; both are phase difference means, but they are not the same mean. Likewise, the first image feature may be referred to as the second image feature and vice versa; both are image features, but they are not the same image feature.

The embodiments of the present application provide an electronic device. For ease of description, only the parts related to the embodiments of the present application are shown; for specific technical details not disclosed, please refer to the method part of the embodiments. The electronic device may be any terminal device, including a mobile phone, tablet computer, PDA (Personal Digital Assistant), POS (Point of Sales) terminal, on-board computer, or wearable device; here a mobile phone is taken as an example. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 1 is a schematic diagram of an image processing circuit in an embodiment. As shown in Fig. 1, for ease of description, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in Fig. 1, the image processing circuit includes an ISP processor 140 and a control logic 150. Image data captured by the imaging device 110 is first processed by the ISP processor 140, which analyzes the image data to capture image statistics that can be used to determine one or more control parameters of the imaging device 110. The imaging device 110 may include a camera with one or more lenses 112 and an image sensor 114. The image sensor 114 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 140. The attitude sensor 120 (such as a three-axis gyroscope, Hall sensor, or accelerometer) can provide collected image processing parameters (such as anti-shake parameters) to the ISP processor 140 based on the attitude sensor 120 interface type. The attitude sensor 120 interface may use an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the foregoing interfaces.

In addition, the image sensor 114 may also send the raw image data to the attitude sensor 120, which can provide the raw image data to the ISP processor 140 based on the attitude sensor 120 interface type, or store the raw image data in the image memory 130.

The ISP processor 140 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 140 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit depth precision.

The ISP processor 140 may also receive image data from the image memory 130. For example, the attitude sensor 120 interface sends the raw image data to the image memory 130, and the raw image data in the image memory 130 is then provided to the ISP processor 140 for processing. The image memory 130 may be part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include DMA (Direct Memory Access) features.

When receiving raw image data from the image sensor 114 interface, the attitude sensor 120 interface, or the image memory 130, the ISP processor 140 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 130 for additional processing before being displayed. The ISP processor 140 receives the processed data from the image memory 130 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 140 may be output to the display 160 for viewing by the user and/or further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 140 may also be sent to the image memory 130, and the display 160 may read image data from the image memory 130. In one embodiment, the image memory 130 may be configured to implement one or more frame buffers.

The statistical data determined by the ISP processor 140 may be sent to the control logic 150. For example, the statistics may include image sensor 114 information such as gyroscope vibration frequency, automatic exposure, automatic white balance, automatic focus, flicker detection, black level compensation, and lens 112 shading correction. The control logic 150 may include a processor and/or microcontroller executing one or more routines (such as firmware) that can determine the control parameters of the imaging device 110 and of the ISP processor 140 based on the received statistical data. For example, the control parameters of the imaging device 110 may include attitude sensor 120 control parameters (such as gain, integration time for exposure control, and anti-shake parameters), camera flash control parameters, camera anti-shake displacement parameters, lens 112 control parameters (such as focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), as well as lens 112 shading correction parameters.
In one embodiment, the image sensor 114 in the imaging device (camera) may include a plurality of pixel point groups arranged in an array, where each pixel point group includes a plurality of pixel points arranged in an array and each pixel point includes a plurality of sub-pixel points arranged in an array.

A first image is acquired through the lens 112 and the image sensor 114 in the imaging device (camera) 110 and sent to the ISP processor 140. After receiving the first image, the ISP processor 140 may perform subject detection on the first image to obtain the region of interest in it, may take the region selected by the user as the region of interest, or may obtain the region of interest in other ways, without being limited thereto.

The ISP processor 140 is configured to acquire a preview image, divide the preview image into at least two sub-regions, acquire the phase difference corresponding to each of the at least two sub-regions, determine at least two target phase differences, including a target foreground phase difference and a target background phase difference, according to the phase differences corresponding to the sub-regions, focus according to each target phase difference to obtain the image corresponding to each target phase difference, and synthesize the images corresponding to each target phase difference to obtain a fully in-focus image. The ISP processor 140 may send relevant information about the target sub-region, such as position information and contour information, to the control logic 150.

After receiving the relevant information about the target area, the control logic 150 controls the lens 112 in the imaging device (camera) to move, so as to focus on the position in the actual scene corresponding to the target area.
Fig. 2 is a flowchart of an image processing method in an embodiment. As shown in Fig. 2, an image processing method applied to an electronic device includes operations 202 to 212.
Operation 202: obtain a preview image.
The number of cameras of the electronic device is not limited; for example, there may be one camera, two cameras, and so on. The way the camera is arranged on the electronic device is likewise not limited: it may be built into the electronic device or externally attached to it, and it may be a front camera or a rear camera. The camera on the electronic device may be of any type; for example, it may be a color camera, a black-and-white camera, a depth camera, a telephoto camera, or a wide-angle camera, without being limited thereto.
The preview image may be a visible-light image. The preview image refers to the image presented on the screen of the electronic device before the camera captures a shot, and it may be the preview image of the current frame.
Specifically, the electronic device obtains the preview image through the camera and displays it on the display screen.
Operation 204: divide the preview image into at least two sub-regions.
A sub-region is an image region in the preview image; it is a part of the image, that is, a sub-region includes a part of the pixels of the preview image. The sub-regions obtained by dividing the preview image may all have the same size and shape, may all differ, or may be the same in one respect and different in the other. The specific division method is not limited.
Specifically, the electronic device divides the preview image into at least two sub-regions. The electronic device may divide the preview image into M×N sub-regions, where M and N are positive integers whose values may be the same or different. For example, if the preview image is 100×100 pixels and is divided into four sub-regions, each sub-region is 50×50 pixels.
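As a concrete illustration of operation 204, the following minimal sketch (Python with NumPy; the function name and the assumption that the image dimensions divide evenly by the grid are ours, not part of the disclosure) divides a preview image into an M×N grid of sub-regions:

```python
import numpy as np

def divide_into_subregions(preview: np.ndarray, m: int, n: int) -> list:
    """Split a preview image into an m x n grid of equally sized sub-regions."""
    h, w = preview.shape[:2]
    sub_h, sub_w = h // m, w // n
    return [preview[i * sub_h:(i + 1) * sub_h, j * sub_w:(j + 1) * sub_w]
            for i in range(m) for j in range(n)]

# Example: a 100x100 preview divided into a 2x2 grid yields four 50x50 tiles.
tiles = divide_into_subregions(np.zeros((100, 100)), 2, 2)
assert tiles[0].shape == (50, 50)
```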
Operation 206: obtain the phase difference corresponding to each of the at least two sub-regions.
The phase difference refers to the difference in position between the images formed on the image sensor by imaging light entering the lens from different directions.
Specifically, the electronic device includes an image sensor, and the image sensor may include a plurality of pixel point groups arranged in an array, where each pixel point group includes M*N pixel points arranged in an array and each pixel point corresponds to one photosensitive unit, M and N both being natural numbers greater than or equal to 2. The phase difference corresponding to each sub-region may then include a first phase difference and a second phase difference, where the first direction corresponding to the first phase difference and the second direction corresponding to the second phase difference form a preset angle, the preset angle being any angle other than 0, 180, and 360 degrees. That is, each sub-region may correspond to two phase differences. The electronic device obtains the confidence of the first phase difference and the confidence of the second phase difference, determines which of the first and second phase differences has the higher confidence, and takes the phase difference with the higher confidence as the phase difference corresponding to the sub-region.
In one embodiment, in order to perform phase detection auto focus, some phase detection pixel points, which may also be called shielded pixel points, may be arranged in pairs among the pixel points included in the image sensor. In each phase detection pixel point pair, one phase detection pixel point is shielded on the left side and the other is shielded on the right side. In this way, the imaging beam directed at each phase detection pixel point pair can be separated into a left part and a right part, and the phase difference corresponding to each sub-window can be obtained by comparing the images formed by the left and right parts of the imaging beam.
Operation 208: determine at least two target phase differences from the phase differences corresponding to the sub-regions, the at least two target phase differences including a target foreground phase difference and a target background phase difference.
The foreground refers to the part of the image with a smaller depth; it contains the subject and is generally the object the user wants to focus on. The at least two target phase differences include the foreground phase difference and the background phase difference, and may also include other phase differences, for example a phase difference lying between those of the foreground and the background.
Specifically, the electronic device determines at least two target phase differences from the phase differences corresponding to the sub-regions, the at least two target phase differences including at least the target foreground phase difference and the target background phase difference.
Operation 210: perform focusing according to each target phase difference to obtain an image corresponding to each target phase difference.
Focusing refers to the process of changing the object distance and the image distance through the focusing mechanism of the electronic device so that the photographed object is imaged clearly. Focusing here may refer to auto focus, which may be phase detection auto focus (PDAF) or another auto focus method combined with phase focusing. Phase focusing obtains the phase difference through the sensor, calculates the defocus value from the phase difference, and controls the lens to move a corresponding distance according to the defocus value, thereby achieving focus. Phase focusing may be combined with other focusing methods, such as continuous auto focus and laser focus.
Specifically, when a photographing instruction is received, the electronic device performs focusing according to each of the at least two target phase differences to obtain an image corresponding to each target phase difference. The electronic device may calculate, from each of the at least two target phase differences, the defocus value corresponding to that target phase difference, and control the lens to move a corresponding distance according to each defocus value, obtaining an image corresponding to each target phase difference. The defocus value refers to the distance between the current position of the image sensor and the position the image sensor should occupy in the in-focus state. Each phase difference has a corresponding defocus value; the defocus values corresponding to different phase differences may be the same or different. The relationship between phase difference and defocus value can be obtained by calibration in advance; for example, the relationship may be linear or nonlinear.
For example, the at least two target phase differences include target phase difference A, target phase difference B, and target phase difference C, where target phase difference A is the target foreground phase difference, target phase difference C is the target background phase difference, and target phase difference B lies in value between target phase difference A and target phase difference C. Defocus value A is calculated from target phase difference A, and the lens is moved a corresponding distance according to defocus value A to obtain image A corresponding to target phase difference A. Defocus value B is calculated from target phase difference B, and the lens is moved a corresponding distance according to defocus value B to obtain image B corresponding to target phase difference B. Defocus value C is calculated from target phase difference C, and the lens is moved a corresponding distance according to defocus value C to obtain image C corresponding to target phase difference C. The electronic device thus obtains image A, image B, and image C. The processing order of the target phase differences is not limited.
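A minimal sketch of this focus-and-capture loop follows (Python; the linear calibration constants and the actuator/capture hooks are hypothetical placeholders, since the disclosure only states that the relationship is pre-calibrated and may be linear or nonlinear):

```python
def defocus_from_phase_difference(pd: float, slope: float = 12.5,
                                  offset: float = 0.0) -> float:
    """Linear calibration: defocus value from a phase difference (constants are hypothetical)."""
    return slope * pd + offset

def move_lens_by(defocus: float) -> None:
    """Hypothetical actuator hook; a real driver would move the focus motor."""

def capture_frame():
    """Hypothetical capture hook; a real pipeline would return a sensor frame."""
    return None

def capture_per_target_pd(target_pds):
    """Focus and capture once per target phase difference (order unimportant)."""
    images = []
    for pd in target_pds:
        move_lens_by(defocus_from_phase_difference(pd))
        images.append(capture_frame())
    return images
```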
Operation 212: synthesize the images corresponding to the target phase differences to obtain a fully in-focus image.
A fully in-focus image is an image that, in theory, contains no out-of-focus region. Image stitching refers to combining several images, which may be obtained by focusing at different positions or may correspond to different phase differences, into one seamless panoramic or high-resolution image.
Specifically, the electronic device may stitch together the sharp parts of the images corresponding to the target phase differences to obtain the fully in-focus image. Alternatively, the electronic device may synthesize the images corresponding to the target phase differences using a Laplacian pyramid to obtain the fully in-focus image. Alternatively, the electronic device may input the images corresponding to the target phase differences into a convolutional neural network model for synthesis to obtain the fully in-focus image, without being limited thereto.
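As one hedged illustration of the sharpness-based stitching mentioned above (Python, assuming NumPy and SciPy are available; using the per-pixel Laplacian response as the sharpness measure is our choice, and the Laplacian-pyramid or CNN-based fusion also mentioned would be more elaborate):

```python
import numpy as np
from scipy.ndimage import laplace

def all_in_focus(frames: np.ndarray) -> np.ndarray:
    """frames: (k, h, w) stack of k grayscale images focused at different depths."""
    sharpness = np.abs(np.stack([laplace(f.astype(float)) for f in frames]))
    best = np.argmax(sharpness, axis=0)  # index of the sharpest frame per pixel
    return np.take_along_axis(frames, best[None, ...], axis=0)[0]
```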
With the image processing method of this embodiment, a preview image is obtained and divided into at least two sub-regions; the phase difference corresponding to each sub-region is obtained; at least two target phase differences, including the target foreground phase difference and the target background phase difference, are determined from the phase differences corresponding to the sub-regions; and focusing is performed according to each target phase difference to obtain an image corresponding to each target phase difference, so that at least two images at different focal points can be acquired, one a background in-focus image and one a foreground in-focus image. The images corresponding to the target phase differences are then synthesized to obtain a fully in-focus image. An image with fewer out-of-focus regions can thereby be obtained, improving image clarity.
In an embodiment, Fig. 3 is a schematic diagram of the principle of phase focusing. M1 is the position of the image sensor when the electronic device is in the in-focus state, the in-focus state being the state of successful focusing. Referring to Fig. 3, when the image sensor is at position M1, the imaging rays g reflected from the object W toward the lens Lens in different directions converge on the image sensor; that is, they are imaged at the same position on the image sensor, and the image formed by the image sensor is sharp.
M2 and M3 are positions where the image sensor may be when the electronic device is not in the in-focus state. As shown in Fig. 3, when the image sensor is at position M2 or M3, the imaging rays g reflected from the object W toward the lens Lens in different directions are imaged at different positions: at position M2 they are imaged at positions A and B respectively, and at position M3 they are imaged at positions C and D respectively. In both cases, the image formed by the image sensor is not sharp.
In the PDAF technique, the difference in position between the images formed on the image sensor by imaging rays entering the lens from different directions can be obtained; for example, as shown in Fig. 3, the difference between positions A and B, or between positions C and D, can be obtained. From this positional difference, together with the geometric relationship between the lens and the image sensor in the camera, the defocus value can be derived, the defocus value being the distance between the current position of the image sensor and the position it should occupy in the in-focus state. The electronic device can then focus according to the obtained defocus value.
This "difference in position between the images formed on the image sensor by imaging rays entering the lens from different directions" is generally called the phase difference. As the above description shows, obtaining the phase difference is a critical technical step in PDAF.
It should be pointed out that in practice the phase difference can be applied to many different scenarios, of which focusing is only one possibility. For example, the phase difference can be used to obtain a depth map, or it can be used in the reconstruction of a three-dimensional image. The embodiments of the present application aim to provide a method for obtaining the phase difference; the scenario to which the phase difference is applied after it is obtained is not specifically limited in the embodiments of the present application.
In the related art, some phase detection pixel points may be arranged in pairs among the pixel points included in the image sensor; please refer to Fig. 4, which is a schematic diagram of phase detection pixel points arranged in pairs among the pixel points of an image sensor in an embodiment. As shown in Fig. 4, the image sensor may be provided with phase detection pixel point pairs (hereinafter referred to as pixel point pairs) A, B, and C. In each pixel point pair, one phase detection pixel point is shielded on the left side (Left Shield) and the other is shielded on the right side (Right Shield).
For a phase detection pixel point shielded on the left, only the right part of the imaging beam directed at it can form an image on its photosensitive part (i.e., the unshielded part); for a phase detection pixel point shielded on the right, only the left part of the imaging beam directed at it can form an image on its photosensitive part. In this way, the imaging beam is divided into left and right parts, and the phase difference is obtained by comparing the images formed by the two parts. The focusing method of Fig. 4 obtains the phase difference through the sensor, calculates the defocus value from the phase difference, controls the lens movement according to the defocus value, and then searches for the focus value (FV) peak.
In one embodiment, determining at least two target phase differences from the phase differences corresponding to the sub-regions, the at least two target phase differences including the target foreground phase difference and the target background phase difference, includes:
Operation (a1): divide the phase differences corresponding to the at least two sub-regions into a foreground phase difference set and a background phase difference set.
The foreground phase difference set includes at least one foreground phase difference, and the background phase difference set includes at least one background phase difference.
Specifically, a phase difference threshold may be stored in the electronic device. Phase differences greater than the threshold are assigned to the background phase difference set, and phase differences less than or equal to the threshold are assigned to the foreground phase difference set.
In this embodiment, the electronic device may instead calculate the median of the phase differences corresponding to the sub-regions, assign the phase differences greater than the median to the background phase difference set, and assign the phase differences less than or equal to the median to the foreground phase difference set.
Operation (a2): obtain the first mean phase difference corresponding to the foreground phase difference set.
Specifically, the electronic device averages the phase differences in the foreground phase difference set to obtain the first mean phase difference.
Operation (a3): obtain the second mean phase difference corresponding to the background phase difference set.
Specifically, the electronic device averages the phase differences in the background phase difference set to obtain the second mean phase difference.
Operation (a4): use the first mean phase difference as the target foreground phase difference.
Specifically, the electronic device uses the first mean phase difference as the target foreground phase difference, regardless of whether the first mean phase difference equals the phase difference of any of the at least two sub-regions.
When the first mean phase difference differs from the phase difference of every sub-region, the corresponding first defocus value is calculated from the first mean phase difference, and the lens is moved a corresponding distance according to the first defocus value to obtain the image corresponding to the first mean phase difference.
When the first mean phase difference equals the phase difference of one of the sub-regions, the region corresponding to the first mean phase difference is used as the focus region for focusing, and the image corresponding to the first mean phase difference is obtained.
Operation (a5): use the second mean phase difference as the target background phase difference.
Specifically, the electronic device uses the second mean phase difference as the target background phase difference, regardless of whether the second mean phase difference equals the phase difference of any of the at least two sub-regions.
When the second mean phase difference differs from the phase difference of every sub-region, the corresponding second defocus value is calculated from the second mean phase difference, and the lens is moved a corresponding distance according to the second defocus value to obtain the image corresponding to the second mean phase difference.
When the second mean phase difference equals the phase difference of one of the sub-regions, the region corresponding to the second mean phase difference is used as the focus region for focusing, and the image corresponding to the second mean phase difference is obtained.
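The following minimal sketch shows operations (a1) through (a5) end to end (Python; the median split is the variant described above, and the function name and the assumption of one phase difference per sub-region, with non-empty sets after the split, are illustrative):

```python
import statistics

def target_phase_differences(pds):
    """Return (target foreground PD, target background PD) from sub-region PDs."""
    median = statistics.median(pds)
    foreground = [pd for pd in pds if pd <= median]  # smaller PD: nearer scene
    background = [pd for pd in pds if pd > median]   # larger PD: farther scene
    return statistics.mean(foreground), statistics.mean(background)

# Example: nine sub-region phase differences yield means of 0.24 and 1.25.
fg_pd, bg_pd = target_phase_differences([0.1, 0.2, 0.2, 0.3, 0.4, 1.1, 1.2, 1.3, 1.4])
```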
With the image processing method of this embodiment, the phase differences corresponding to the at least two sub-regions are divided into a foreground phase difference set and a background phase difference set; the first mean phase difference corresponding to the foreground set and the second mean phase difference corresponding to the background set are obtained; the first mean is used as the target foreground phase difference and the second mean as the target background phase difference. A foreground in-focus image and a background in-focus image can thus be obtained from the means, improving image clarity.
In an embodiment, the image processing method further includes: excluding the maximum phase difference among the phase differences corresponding to the sub-regions to obtain a remaining phase difference set. Dividing the phase differences corresponding to the at least two sub-regions into the foreground phase difference set and the background phase difference set then includes: dividing the remaining phase difference set into the foreground phase difference set and the background phase difference set.
Specifically, the region corresponding to the maximum phase difference among the sub-regions is the region corresponding to the farthest scene in the preview image; excluding the maximum phase difference therefore excludes the phase difference corresponding to the farthest scene in the preview image.
A phase difference threshold may be stored in the electronic device. Phase differences in the remaining phase difference set that are greater than the threshold are assigned to the background phase difference set, and phase differences less than or equal to the threshold are assigned to the foreground phase difference set.
In this embodiment, the electronic device may instead calculate the median of the phase differences in the remaining phase difference set, assign the phase differences greater than the median to the background phase difference set, and assign the phase differences less than or equal to the median to the foreground phase difference set.
In this embodiment, the electronic device may instead calculate the mean of the phase differences in the remaining phase difference set, assign the phase differences greater than the mean to the background phase difference set, and assign the phase differences less than or equal to the mean to the foreground phase difference set.
With the image processing method of this embodiment, since the details of the farthest background are often unimportant, the maximum phase difference among the sub-regions is excluded to obtain the remaining phase difference set, so the farthest background can be discarded; the remaining phase difference set is divided into the foreground phase difference set and the background phase difference set, and focusing according to the means can improve image clarity.
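A sketch of this variant, under the same assumptions as the sketch above (the helper name is again illustrative):

```python
import statistics

def target_pds_excluding_farthest(pds):
    """Drop the single largest PD (the farthest scene) before the median split."""
    remaining = sorted(pds)[:-1]
    median = statistics.median(remaining)
    foreground = [pd for pd in remaining if pd <= median]
    background = [pd for pd in remaining if pd > median]
    return statistics.mean(foreground), statistics.mean(background)
```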
In one embodiment, determining at least two target phase differences from the phase differences corresponding to the sub-regions, the at least two target phase differences including the foreground phase difference and the background phase difference, includes: obtaining the maximum phase difference and the minimum phase difference among the phase differences of the at least two sub-regions; using the minimum phase difference as the foreground phase difference; and using the maximum phase difference as the background phase difference.
The region corresponding to the maximum phase difference is the region of the farthest scene, and the region corresponding to the minimum phase difference is the region of the nearest scene; the target subject is usually in the region corresponding to the minimum phase difference.
With the image processing method of this embodiment, the maximum and minimum phase differences among the at least two sub-regions are obtained, the minimum phase difference is used as the foreground phase difference, and the maximum phase difference as the background phase difference, so that only two images need to be captured and synthesized, improving image processing efficiency while improving image clarity.
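Since this variant needs only the two extremes, its sketch is correspondingly short (names illustrative):

```python
def min_max_target_pds(pds):
    """Two-image variant: (foreground PD, background PD) from the extremes."""
    return min(pds), max(pds)
```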
In one embodiment, the electronic device includes an image sensor, the image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M*N pixel points arranged in an array; each pixel point corresponds to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2.
In this embodiment, Fig. 5 is a schematic diagram of part of the structure of an electronic device. As shown in Fig. 5, the electronic device may include a lens 502 and an image sensor 504, where the lens 502 may be composed of a series of lens elements, and the image sensor 504 may be a complementary metal oxide semiconductor (CMOS) image sensor, a charge-coupled device (CCD), a quantum-film sensor, an organic sensor, or the like.
Please refer to Fig. 6, which shows a schematic structural diagram of part of the image sensor 504. As shown in Fig. 6, the image sensor 504 may include a plurality of pixel point groups Z arranged in an array, where each pixel point group Z includes a plurality of pixel points D arranged in an array, and each pixel point includes a plurality of sub-pixel points d arranged in an array. Optionally, each pixel point group Z may include 4 pixel points D arranged in two rows and two columns, and each pixel point may include 4 sub-pixel points d arranged in two rows and two columns.
It should be pointed out that a pixel point of the image sensor 504 refers to a photosensitive unit, which may be composed of a plurality of photosensitive elements (i.e., sub-pixel points) arranged in an array, a photosensitive element being an element capable of converting an optical signal into an electrical signal. Optionally, the photosensitive unit may further include a microlens, a filter, and the like, where the microlens is arranged above the filter and the filter is arranged above the photosensitive elements of the photosensitive unit. The filters may include red, green, and blue types, which respectively transmit only light of the wavelengths corresponding to red, green, and blue.
Fig. 7 is a schematic structural diagram of a pixel point in an embodiment. As shown in Fig. 7, taking as an example a pixel point that includes sub-pixel point 1, sub-pixel point 2, sub-pixel point 3, and sub-pixel point 4: sub-pixel points 1 and 2 may be combined and sub-pixel points 3 and 4 may be combined to form PD pixel pairs in the up-down direction, obtaining the phase difference in the vertical direction, which can detect horizontal edges; sub-pixel points 1 and 3 may be combined and sub-pixel points 2 and 4 may be combined to form PD pixel pairs in the left-right direction, obtaining the phase difference in the horizontal direction, which can detect vertical edges.
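As an illustration of these combinations, here is a sketch assuming a 2x2 NumPy array sp holding the four sub-pixel brightness values, with averaging as one plausible way to "combine" two sub-pixels; none of this is mandated by the disclosure:

```python
# Sub-pixel layout assumed:  1 2   i.e.  sp[0,0] sp[0,1]
#                            3 4         sp[1,0] sp[1,1]
import numpy as np

def pd_pixel_pairs(sp: np.ndarray) -> dict:
    up    = (sp[0, 0] + sp[0, 1]) / 2  # combine sub-pixels 1 and 2
    down  = (sp[1, 0] + sp[1, 1]) / 2  # combine sub-pixels 3 and 4
    left  = (sp[0, 0] + sp[1, 0]) / 2  # combine sub-pixels 1 and 3
    right = (sp[0, 1] + sp[1, 1]) / 2  # combine sub-pixels 2 and 4
    return {
        "up_down_pair": (up, down),       # vertical PD, detects horizontal edges
        "left_right_pair": (left, right)  # horizontal PD, detects vertical edges
    }
```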
Fig. 8 is a schematic diagram of the internal structure of an image sensor in an embodiment; the imaging device includes a lens and an image sensor. As shown in Fig. 8, the image sensor includes a microlens 80, a filter 82, and a photosensitive unit 84. The microlens 80, the filter 82, and the photosensitive unit 84 are located in sequence on the incident light path; that is, the microlens 80 is arranged above the filter 82, and the filter 82 is arranged above the photosensitive unit 84.
The filters 82 may include red, green, and blue types, which respectively transmit only light of the wavelengths corresponding to red, green, and blue. One filter 82 is arranged on one pixel point.
The microlens 80 is used to receive incident light and transmit it to the filter 82. The filter 82 smooths the incident light, and the smoothed light is then incident on the photosensitive unit 84 on a pixel basis.
The photosensitive unit 84 in the image sensor converts the light incident from the filter 82 into a charge signal through the photoelectric effect and generates a pixel signal consistent with the charge signal, the charge signal being consistent with the received light intensity.
As explained above, the pixel points of the image sensor and the pixels of an image are two different concepts: a pixel of an image is the smallest constituent unit of the image and is generally represented by a sequence of numbers, usually called the pixel value of the pixel. Since the embodiments of the present application involve both concepts, this brief explanation is given here for the reader's convenience.
Please refer to Fig. 9, which shows a schematic diagram of an exemplary pixel point group Z. As shown in Fig. 9, the pixel point group Z includes 4 pixel points D arranged in two rows and two columns, where the color channel of the pixel point in the first row and first column is green (that is, its filter is a green filter), the color channel of the pixel point in the first row and second column is red (its filter is a red filter), the color channel of the pixel point in the second row and first column is blue (its filter is a blue filter), and the color channel of the pixel point in the second row and second column is green (its filter is a green filter).
In one embodiment, the electronic device includes an image sensor, the image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes a plurality of pixel points arranged in an array. Fig. 10 is a schematic flowchart of obtaining the phase difference corresponding to each sub-region in an embodiment; as shown in Fig. 10, obtaining the phase difference corresponding to each of the at least two sub-regions includes:
Operation 1002: obtain a target brightness map according to the brightness values of the pixel points included in each pixel point group.
The brightness value of a pixel point of the image sensor can be characterized by the brightness values of the sub-pixel points it includes. In other words, the electronic device may obtain the target brightness map according to the brightness values of the sub-pixel points within the pixel points of each pixel point group, where the "brightness value of a sub-pixel point" refers to the brightness value of the optical signal received by that sub-pixel point.
A sub-pixel point of the image sensor is a photosensitive element capable of converting an optical signal into an electrical signal. The electronic device can therefore obtain the intensity of the optical signal received by a sub-pixel point from the electrical signal it outputs, and obtain the brightness value of the sub-pixel point from that intensity.
The target brightness map in the embodiments of the present application is used to reflect the brightness values of the sub-pixel points in the image sensor. The target brightness map may include a plurality of pixels, where the pixel value of each pixel in the target brightness map is obtained from the brightness value of a sub-pixel point in the image sensor.
Operation 1004: perform segmentation processing on the target brightness map, and obtain a first segmented brightness map and a second segmented brightness map according to the result of the segmentation processing.
Specifically, the electronic device may segment the target brightness map along the column direction (the y-axis direction of the image coordinate system); the first and second segmented brightness maps obtained in this way may be called the left map and the right map, respectively.
In this embodiment, the electronic device may also segment the target brightness map along the row direction (the x-axis direction of the image coordinate system); the first and second segmented brightness maps obtained in this way may be called the upper map and the lower map, respectively.
Operation 1006: determine the phase differences of mutually matched pixels according to the position differences of the mutually matched pixels in the first segmented brightness map and the second segmented brightness map.
Specifically, taking segmentation along the row direction (the x-axis direction of the image coordinate system) as an example, the first and second segmented brightness maps obtained are the upper map and the lower map, and the electronic device obtains the vertical phase difference from the position differences of the mutually matched pixels in the two maps.
Taking segmentation along the column direction (the y-axis direction of the image coordinate system) as an example, the first and second segmented brightness maps obtained are the left map and the right map, and the electronic device obtains the horizontal phase difference from the position differences of the mutually matched pixels in the two maps.
"Mutually matched pixels" means that the pixel matrices formed by each pixel together with its surrounding pixels are similar to each other. For example, pixel a and its surrounding pixels in the first segmented brightness map form a pixel matrix of 3 rows and 3 columns with pixel values:
2 10 90
1 20 80
0 100 1
Pixel b and its surrounding pixels in the second segmented brightness map also form a pixel matrix of 3 rows and 3 columns, with pixel values:
1 10 90
1 21 80
0 100 2
As can be seen above, the two matrices are similar, so pixel a and pixel b can be considered to match each other. There are many ways to judge whether pixel matrices are similar in practice. One common method is to take the difference between each pair of corresponding pixel values in the two matrices, sum the absolute values of those differences, and use the sum to judge similarity: if the sum is less than a preset threshold, the matrices are considered similar; otherwise, they are not.
For example, for the two 3-row, 3-column pixel matrices above, the differences between 2 and 1, between 10 and 10, between 90 and 90, and so on, are taken; the absolute values of the differences sum to 3, and since 3 is less than the preset threshold, the two matrices are considered similar.
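A minimal sketch of this sum-of-absolute-differences (SAD) matching follows (Python with NumPy; the 3x3 window, the one-dimensional search range, and all names are illustrative assumptions, and a vertical search would be used instead for the upper/lower split):

```python
import numpy as np

def sad(block_a: np.ndarray, block_b: np.ndarray) -> float:
    """Sum of absolute differences between two equally sized blocks."""
    return float(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def match_shift(left: np.ndarray, right: np.ndarray, y: int, x: int,
                search: int = 8) -> int:
    """Horizontal shift of the best 3x3 match for left[y, x] within `right`."""
    a = left[y - 1:y + 2, x - 1:x + 2]
    best_shift, best_cost = 0, float("inf")
    for s in range(-search, search + 1):
        xs = x + s
        if 1 <= xs < right.shape[1] - 1:
            cost = sad(a, right[y - 1:y + 2, xs - 1:xs + 2])
            if cost < best_cost:
                best_shift, best_cost = s, cost
    return best_shift  # the phase difference, in pixels, at (y, x)
```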
Another common way to judge whether pixel matrices are similar is to extract their edge features, for example with Sobel convolution kernels or a Laplacian-based computation, and judge similarity from the edge features.
In this embodiment, the "position difference of mutually matched pixels" refers to the difference between the position, in the first segmented brightness map, of one matched pixel and the position, in the second segmented brightness map, of the other. In the above example, the position difference of mutually matched pixels a and b is the difference between the position of pixel a in the first segmented brightness map and the position of pixel b in the second segmented brightness map.
Mutually matched pixels correspond to the different images formed on the image sensor by imaging rays entering the lens from different directions. For example, pixel a in the first segmented brightness map and pixel b in the second segmented brightness map match each other, where pixel a may correspond to the image formed at position A in Fig. 3 and pixel b may correspond to the image formed at position B in Fig. 3.
Since mutually matched pixels correspond to the different images formed on the image sensor by imaging rays entering the lens from different directions, the phase difference of the mutually matched pixels can be determined from their position difference.
Operation 1008: determine the phase difference corresponding to each of the at least two sub-regions according to the phase differences of the mutually matched pixels.
Specifically, the electronic device determines, from the phase differences of the mutually matched pixels, one phase difference corresponding to each of the at least two sub-regions.
In this embodiment, the electronic device can obtain two phase differences for each sub-region from the phase differences of the mutually matched pixels, namely a vertical phase difference and a horizontal phase difference. The electronic device can obtain the confidence of the vertical phase difference and the confidence of the horizontal phase difference for each sub-region, determine which phase difference has the higher confidence, and use that phase difference as the one corresponding to the sub-region.
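A minimal sketch of this selection (how the confidences themselves are computed, for example from the match cost, is left open; the tuple convention is an assumption of ours):

```python
def select_subregion_pd(horizontal, vertical):
    """horizontal, vertical: (phase_difference, confidence) tuples."""
    return max(horizontal, vertical, key=lambda t: t[1])[0]

# Example: a confident horizontal PD wins over a weak vertical one.
pd = select_subregion_pd((0.8, 0.9), (0.3, 0.4))  # -> 0.8
```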
With the image processing method of this embodiment, a target brightness map is obtained from the brightness values of the pixel points included in each pixel point group of the image sensor; the target brightness map is then segmented, and a first segmented brightness map and a second segmented brightness map are obtained from the result; next, the phase differences of mutually matched pixels are determined from the position differences of the mutually matched pixels in the two segmented brightness maps, and the phase difference corresponding to each of the at least two sub-regions is determined from the phase differences of the mutually matched pixels. In this way, the brightness values of the pixel points in each pixel point group of the image sensor can be used to obtain the target phase difference map. Compared with obtaining the phase difference from sparsely arranged phase detection pixel points, the phase differences of the mutually matched pixels in the embodiments of the present application contain relatively rich phase difference information, so the accuracy of the obtained phase difference can be improved. During focusing, a highly accurate phase difference corresponding to the focus region can therefore be obtained without searching for the focus peak, which improves focusing efficiency and, in turn, the efficiency of synthesizing the fully in-focus image.
In one embodiment, performing segmentation processing on the target brightness map and obtaining the first segmented brightness map and the second segmented brightness map according to the result of the segmentation processing includes:
Operation (b1): segment the target brightness map to obtain a plurality of brightness map regions, where each brightness map region includes one row of pixels of the target brightness map, or each brightness map region includes one column of pixels of the target brightness map.
In this embodiment, the electronic device may segment the target brightness map column by column along the row direction to obtain the pixel columns of the target brightness map.
In this embodiment, the electronic device may segment the target brightness map row by row along the column direction to obtain the pixel rows of the target brightness map.
Operation (b2): obtain a plurality of first brightness map regions and a plurality of second brightness map regions from the plurality of brightness map regions, where the first brightness map regions include the pixels of the even-numbered rows of the target brightness map, or the pixels of its even-numbered columns, and the second brightness map regions include the pixels of the odd-numbered rows of the target brightness map, or the pixels of its odd-numbered columns.
In this embodiment, when the target brightness map is segmented column by column, the electronic device may determine the even-numbered columns as first brightness map regions and the odd-numbered columns as second brightness map regions.
In this embodiment, when the target brightness map is segmented row by row, the electronic device may determine the even-numbered rows as first brightness map regions and the odd-numbered rows as second brightness map regions.
Operation (b3): compose the first segmented brightness map from the plurality of first brightness map regions, and compose the second segmented brightness map from the plurality of second brightness map regions.
In this embodiment, Fig. 11 is a schematic diagram of segmenting the target brightness map in the first direction, and Fig. 12 is a schematic diagram of segmenting the target brightness map in the second direction. As shown in Fig. 11, assuming the target brightness map includes 6 rows and 6 columns of pixels, segmenting it column by column means segmenting it in the first direction. The electronic device may determine the 1st, 3rd, and 5th columns of the target brightness map as first brightness map regions and the 2nd, 4th, and 6th columns as second brightness map regions. The electronic device may then splice the first brightness map regions into the first segmented brightness map T1, which includes the 1st, 3rd, and 5th columns of the target brightness map, and splice the second brightness map regions into the second segmented brightness map T2, which includes the 2nd, 4th, and 6th columns.
As shown in Fig. 12, assuming the target brightness map includes 6 rows and 6 columns of pixels, segmenting it row by row means segmenting it in the second direction. The electronic device may determine the 1st, 3rd, and 5th rows of the target brightness map as first brightness map regions and the 2nd, 4th, and 6th rows as second brightness map regions. The electronic device may then splice the first brightness map regions into the first segmented brightness map T3, which includes the 1st, 3rd, and 5th rows of the target brightness map, and splice the second brightness map regions into the second segmented brightness map T4, which includes the 2nd, 4th, and 6th rows.
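A minimal sketch of operations (b1) through (b3) follows (Python with NumPy; following the Fig. 11 and Fig. 12 example, the 1st, 3rd, and 5th rows or columns go to the first map):

```python
import numpy as np

def split_brightness_map(bmap: np.ndarray, by: str = "column"):
    """Interleave-split a target brightness map into two segmented maps."""
    if by == "column":                       # first direction, as in Fig. 11
        return bmap[:, 0::2], bmap[:, 1::2]  # T1 (cols 1,3,5), T2 (cols 2,4,6)
    return bmap[0::2, :], bmap[1::2, :]      # second direction: T3, T4

t1, t2 = split_brightness_map(np.arange(36).reshape(6, 6), by="column")
assert t1.shape == (6, 3) and t2.shape == (6, 3)
```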
本申请实施例中的图像处理方法,不需要遮挡像素点来获取相位差,通过亮度切分的方式得到相对丰富的相位差信息,提高获取到的相位差精确度。The image processing method in the embodiment of the present application does not need to block pixels to obtain the phase difference, and obtains relatively rich phase difference information by means of brightness segmentation, which improves the accuracy of the obtained phase difference.
在一个实施例中，电子设备包括图像传感器，图像传感器包括阵列排布的多个像素点组，每个所述像素点组包括阵列排布的M*N个像素点；每个像素点对应一个感光单元，其中，M和N均为大于或等于2的自然数。每个子区域对应的相位差包括水平相位差和垂直相位差。获取至少两个子区域中每个子区域对应的相位差，包括：当检测到子区域中包含水平线条时，将垂直相位差作为子区域对应的相位差；当检测到子区域中不包含水平线条时，将水平相位差作为子区域对应的相位差。In an embodiment, the electronic device includes an image sensor, the image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M*N pixel points arranged in an array; each pixel point corresponds to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The phase difference corresponding to each sub-region includes a horizontal phase difference and a vertical phase difference. Acquiring the phase difference corresponding to each of the at least two sub-regions includes: when it is detected that a sub-region contains horizontal lines, taking the vertical phase difference as the phase difference corresponding to the sub-region; when it is detected that the sub-region does not contain horizontal lines, taking the horizontal phase difference as the phase difference corresponding to the sub-region.
具体地，由于摄像头的一些特性，导致成像时会存在像差，即线条可能存在拖影等问题。那么当检测到子区域中包含水平线条时，采用垂直相位差作为子区域对应的相位差；当检测到子区域中包含垂直线条时，采用水平相位差作为子区域对应的相位差，能够提高相位差获取精度，从而提高图像清晰度。Specifically, due to certain characteristics of the camera, aberrations occur during imaging; that is, lines may exhibit problems such as smearing. Therefore, when it is detected that a sub-region contains horizontal lines, the vertical phase difference is used as the phase difference corresponding to the sub-region; when it is detected that a sub-region contains vertical lines, the horizontal phase difference is used as the phase difference corresponding to the sub-region. This improves the accuracy of phase difference acquisition and thereby improves image clarity.
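The disclosure does not specify how horizontal lines are detected. One plausible sketch, assuming a simple gradient-energy heuristic (the Sobel-based test and the ratio threshold below are assumptions, not the claimed detection method), is:

    import cv2
    import numpy as np

    def select_phase_difference(sub_region: np.ndarray,
                                horizontal_pd: float,
                                vertical_pd: float,
                                ratio: float = 2.0) -> float:
        """Return the phase difference to use for a sub-region: the vertical
        phase difference if the region is dominated by horizontal lines,
        otherwise the horizontal phase difference."""
        gx = cv2.Sobel(sub_region, cv2.CV_64F, 1, 0)  # gradients across columns
        gy = cv2.Sobel(sub_region, cv2.CV_64F, 0, 1)  # gradients across rows
        # A horizontal line changes intensity mainly along y, so |gy| dominates.
        has_horizontal_lines = np.abs(gy).sum() > ratio * np.abs(gx).sum()
        return vertical_pd if has_horizontal_lines else horizontal_pd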
在一个实施例中，根据每个目标相位差进行对焦，得到每个目标相位差对应的图像，包括：将每个目标相位差对应的子区域作为对焦区域，得到每个目标相位差对应的图像。In one embodiment, focusing according to the phase difference of each target to obtain an image corresponding to the phase difference of each target includes: taking the sub-area corresponding to the phase difference of each target as the focus area to obtain the image corresponding to the phase difference of each target.
具体地,电子设备将至少两个目标相位差中每个目标相位差对应的子区域作为对焦区域,得到每个目标相位差对应的图像。例如,至少两个目标相位差中包括目标相位差A、目标相位差B和目标相位差C。那么将目标相位差A对应的子区域作为对焦区域,进行对焦,获取目标相位差A对应的图像。将目标相位差B对应的子区域作为对焦区域,进行对焦,获取目标相位差B对应的图像。将目标相位差C对应的子区域作为对焦区域,进行对焦,获取目标相位差C对应的图像。即一共得到三张图像。Specifically, the electronic device uses a sub-area corresponding to each target phase difference of the at least two target phase differences as a focus area, and obtains an image corresponding to each target phase difference. For example, at least two target phase differences include target phase difference A, target phase difference B, and target phase difference C. Then, the sub-area corresponding to the target phase difference A is used as the focus area, and focus is performed to obtain an image corresponding to the target phase difference A. The sub-area corresponding to the target phase difference B is used as the focus area, and focus is performed to obtain an image corresponding to the target phase difference B. The sub-region corresponding to the target phase difference C is used as the focus area, and focus is performed to obtain an image corresponding to the target phase difference C. That is, a total of three images are obtained.
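A minimal sketch of this capture loop follows; the camera object and its focus_on_region/capture methods are hypothetical placeholders for the device's focus and capture pipeline, not a real API:

    def capture_images_for_targets(camera, target_pds, pd_to_subregion):
        """Focus on the sub-region of each target phase difference and capture
        one frame per target, e.g. three frames for targets A, B, and C."""
        images = {}
        for pd in target_pds:
            region = pd_to_subregion[pd]     # sub-region that produced this phase difference
            camera.focus_on_region(region)   # hypothetical: drive the lens until in focus
            images[pd] = camera.capture()    # hypothetical: grab the focused frame
        return images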
本实施例中的图像处理方法,通过将每个目标相位差对应的子区域作为对焦区域,得到每个目标相位差对应的图像,能够获取到不同焦点的图像进行合成,提高图像的清晰度。The image processing method in this embodiment uses the sub-region corresponding to each target phase difference as a focus area to obtain an image corresponding to each target phase difference, and can obtain images with different focal points for synthesis, thereby improving image clarity.
在一个实施例中，根据每个目标相位差对应的图像进行合成，得到全准焦图像，包括：将每个目标相位差对应的图像划分为相同数量的子图像区域；获取每个子图像区域对应的清晰度；根据每个子图像区域对应的清晰度，确定相互匹配的子图像区域中清晰度最高的子图像区域；将清晰度最高的子图像区域进行拼接合成，得到全准焦图像。In one embodiment, synthesizing the images corresponding to each target phase difference to obtain a fully in-focus image includes: dividing the image corresponding to each target phase difference into the same number of sub-image areas; acquiring the definition corresponding to each sub-image area; determining, according to the definition corresponding to each sub-image area, the sub-image area with the highest definition among the sub-image areas that match each other; and splicing and synthesizing the sub-image areas with the highest definition to obtain a fully in-focus image.
其中,相互匹配的子图像区域是指位于不同图像中相同位置的子图像区域。Among them, the sub-image areas that match each other refer to sub-image areas located at the same position in different images.
具体地，电子设备将每个目标相位差对应的图像划分为相同数量的子图像区域。电子设备获取每个目标相位差对应的图像中每个子图像区域对应的清晰度。电子设备根据每个子图像区域对应的清晰度，确定相匹配的子图像区域中清晰度最高的子图像区域。电子设备将所有清晰度最高的子图像区域进行合成，得到全准焦图像。例如，目标相位差A对应图像A，图像A划分为子图像区域1、子图像区域2、子图像区域3和子图像区域4。目标相位差B对应图像B，图像B划分为子图像区域α、子图像区域β、子图像区域γ和子图像区域δ。其中，子图像区域1位于图像A左上角，子图像区域α位于图像B左上角，则子图像区域1与子图像区域α相匹配，以此类推。若清晰度最高的子图像区域为子图像区域1、子图像区域β、子图像区域γ和子图像区域4，那么电子设备将子图像区域1、子图像区域β、子图像区域γ和子图像区域4进行拼接合成，得到全准焦图像。Specifically, the electronic device divides the image corresponding to each target phase difference into the same number of sub-image areas, acquires the definition corresponding to each sub-image area in each of those images, and determines, according to those definitions, the sub-image area with the highest definition in each group of matched sub-image areas. The electronic device then synthesizes all the sub-image areas with the highest definition to obtain a fully in-focus image. For example, target phase difference A corresponds to image A, which is divided into sub-image areas 1, 2, 3, and 4; target phase difference B corresponds to image B, which is divided into sub-image areas α, β, γ, and δ. Sub-image area 1 is located at the upper left corner of image A and sub-image area α is located at the upper left corner of image B, so sub-image area 1 matches sub-image area α, and so on. If the sub-image areas with the highest definition are sub-image areas 1, β, γ, and 4, the electronic device splices and synthesizes sub-image areas 1, β, γ, and 4 to obtain the fully in-focus image.
本申请实施例中的图像处理方法，将每个目标相位差对应的图像划分为相同数量的子图像区域；获取每个子图像区域对应的清晰度；根据每个子图像区域对应的清晰度，确定相互匹配的子图像区域中清晰度最高的子图像区域；将清晰度最高的子图像区域进行合成，得到全准焦图像，能够快速得到全准焦图像，提高图像处理效率。In the image processing method of this embodiment of the application, the image corresponding to each target phase difference is divided into the same number of sub-image areas; the definition corresponding to each sub-image area is acquired; the sub-image area with the highest definition among the matched sub-image areas is determined according to the definition corresponding to each sub-image area; and the sub-image areas with the highest definition are synthesized to obtain a fully in-focus image. A fully in-focus image can thus be obtained quickly, improving image processing efficiency.
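A minimal sketch of this tile-wise selection follows, assuming equal-sized images whose sides are divisible by the grid, and using the variance of the Laplacian as the definition (sharpness) measure; the disclosure does not mandate a particular measure:

    import cv2
    import numpy as np

    def definition(tile: np.ndarray) -> float:
        # Variance of the Laplacian: a common single-number sharpness measure.
        return cv2.Laplacian(tile, cv2.CV_64F).var()

    def stitch_all_in_focus(images, grid=(2, 2)):
        """Split each image into the same grid of sub-image areas, keep the
        sharpest area at each grid position, and splice the result."""
        h, w = images[0].shape[:2]
        th, tw = h // grid[0], w // grid[1]
        out = np.empty_like(images[0])
        for i in range(grid[0]):
            for j in range(grid[1]):
                ys = slice(i * th, (i + 1) * th)
                xs = slice(j * tw, (j + 1) * tw)
                tiles = [img[ys, xs] for img in images]   # matched sub-image areas
                out[ys, xs] = max(tiles, key=definition)  # highest definition wins
        return out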
在一个实施例中,如图13所示,为一个实施例中合成得到全准焦图的流程示意图。根据每个目标相位差对应的图像进行合成,得到全准焦图像,包括:In an embodiment, as shown in FIG. 13, it is a schematic flow chart of synthesizing to obtain a full in-focus image in an embodiment. Synthesize the images corresponding to the phase difference of each target to obtain a fully in-focus image, including:
操作1302，对每个目标相位差对应的图像进行卷积和采样处理，当满足预设迭代条件时，得到每个目标相位差对应的图像的高斯金字塔。In operation 1302, convolution and sampling processing is performed on the image corresponding to each target phase difference, and when a preset iteration condition is met, a Gaussian pyramid of the image corresponding to each target phase difference is obtained.
其中,高斯金字塔是一种图像金字塔,金字塔中除了最底层图像,其他层图像均是对前一层图像进行卷积和采样得到的。高斯金字塔可用于得到低频图像。其中低频图像可以是指图像中的轮廓图像。迭代条件可以是指达到预设次数或达到预设时间等不限于此。每个目标相位差对应的图像均有一个对应的高斯金字塔。例如目标相位差对应的图像A对应高斯金字塔A,目标相位差对应的图像B对应高斯金字塔B。Among them, the Gaussian pyramid is a kind of image pyramid. Except for the bottom layer image, the other layer images in the pyramid are all obtained by convolving and sampling the previous layer image. Gaussian pyramids can be used to obtain low-frequency images. The low-frequency image can refer to the contour image in the image. The iteration condition may mean reaching a preset number of times or reaching a preset time, etc., and is not limited thereto. The image corresponding to each target phase difference has a corresponding Gaussian pyramid. For example, the image A corresponding to the target phase difference corresponds to the Gaussian pyramid A, and the image B corresponding to the target phase difference corresponds to the Gaussian pyramid B.
具体地，电子设备利用高斯核对每个目标相位差对应的图像进行卷积，对卷积后的图像进行采样，得到每一层图像。即，将每个目标相位差对应的图像（设为G0）进行卷积和采样，得到上一层低频图像G1；再对图像G1进行卷积和采样，得到图像G2……直到满足预设迭代条件，例如迭代5次，得到图像G5，从而得到每个目标相位差对应的包含多个低频图像的高斯金字塔。Specifically, the electronic device convolves the image corresponding to each target phase difference with a Gaussian kernel and samples the convolved image to obtain each layer of the pyramid. That is, the image corresponding to each target phase difference (denoted G0) is convolved and sampled to obtain the next-level low-frequency image G1; image G1 is then convolved and sampled to obtain image G2, and so on, until the preset iteration condition is met, for example five iterations yielding image G5. A Gaussian pyramid containing multiple low-frequency images is thus obtained for each target phase difference.
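A minimal sketch of operation 1302 using OpenCV (pyrDown performs the Gaussian filtering and 2x down-sampling in one call; the fixed level count stands in for the preset iteration condition):

    import numpy as np
    import cv2

    def gaussian_pyramid(img, levels=5):
        """Build [G0, G1, ..., G5]: each level is the previous one convolved
        with a Gaussian kernel and down-sampled by a factor of two.
        Works in float32 so later Laplacian levels can hold negative values."""
        gp = [np.float32(img)]
        for _ in range(levels):  # the preset iteration condition
            gp.append(cv2.pyrDown(gp[-1]))
        return gp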
操作1304,根据每个目标相位差对应的图像的高斯金字塔中每一层图像进行处理,得到每个目标相位差对应的图像的拉普拉斯金字塔。In operation 1304, processing is performed according to each layer of the image in the Gaussian pyramid of the image corresponding to each target phase difference to obtain the Laplacian pyramid of the image corresponding to each target phase difference.
其中，在高斯金字塔的运算过程中，图像经过卷积和下采样操作会丢失部分高频细节信息。为描述这些高频信息，定义了拉普拉斯金字塔（Laplacian Pyramid，LP）。每个目标相位差对应的图像均有一个对应的拉普拉斯金字塔。例如目标相位差对应的图像A对应拉普拉斯金字塔A，目标相位差对应的图像B对应拉普拉斯金字塔B。拉普拉斯金字塔的每一层代表不同的尺度和细节，其中，细节可视为频率。In the computation of the Gaussian pyramid, the convolution and down-sampling operations cause the image to lose some high-frequency detail information. To characterize this high-frequency information, the Laplacian Pyramid (LP) is defined. The image corresponding to each target phase difference has a corresponding Laplacian pyramid; for example, image A corresponding to a target phase difference corresponds to Laplacian pyramid A, and image B corresponds to Laplacian pyramid B. Each layer of the Laplacian pyramid represents a different scale and level of detail, where detail can be regarded as frequency.
具体地，电子设备将原图减去上采样后的低频图像，得到高频图像。公式可为L0=I0−up(G1)，其中up(G1)表示上采样后的G1，L0为拉普拉斯金字塔的最底层。由此可得到L1、L2、L3、L4……即得到每个目标相位差对应的图像的拉普拉斯金字塔。Specifically, the electronic device subtracts the up-sampled low-frequency image from the original image to obtain a high-frequency image. The formula can be L0 = I0 − up(G1), where up(G1) denotes the up-sampled G1 and L0 is the bottom layer of the Laplacian pyramid. L1, L2, L3, L4, and so on can be obtained in the same way, giving the Laplacian pyramid of the image corresponding to each target phase difference.
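A matching sketch of operation 1304, assuming the image sides are divisible by 2**levels so the up-sampled sizes align with the next level:

    import cv2

    def laplacian_pyramid(gp):
        """L_i = G_i - up(G_{i+1}); the coarsest Gaussian level is kept as the
        top of the pyramid so the image can be reconstructed later."""
        lp = [cv2.subtract(gp[i], cv2.pyrUp(gp[i + 1])) for i in range(len(gp) - 1)]
        lp.append(gp[-1])
        return lp  # [L0, L1, ..., L4, G5]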
操作1306,将每个目标相位差对应的图像的拉普拉斯金字塔进行融合,得到融合后的拉普拉斯金字塔。In operation 1306, the Laplacian pyramid of the image corresponding to the phase difference of each target is fused to obtain a fused Laplacian pyramid.
其中,融合后的拉普拉斯金字塔仅有一个。Among them, there is only one Laplace Pyramid after fusion.
具体地，电子设备获取每个目标相位差对应的图像的权重，根据每个目标相位差对应的图像的权重以及每个目标相位差对应的图像的拉普拉斯金字塔进行融合，得到融合后的拉普拉斯金字塔。Specifically, the electronic device obtains the weight of the image corresponding to each target phase difference, and performs fusion according to the weight of the image corresponding to each target phase difference and the Laplacian pyramid of the image corresponding to each target phase difference, to obtain the fused Laplacian pyramid.
例如,融合公式如下:For example, the fusion formula is as follows:
L5(融合)=Weight(图1)*L5(图1)+Weight(图2)*L5(图2)L5(fusion)=Weight(picture 1)*L5(picture 1)+Weight(picture 2)*L5(picture 2)
其中，L5(融合)是指融合后的拉普拉斯金字塔从底层往上数的第六层图像。Weight(图1)是指图1的权重。Weight(图2)是指图2的权重。L5(图1)是指图1的拉普拉斯金字塔从底层往上数的第六层图像。L5(图2)是指图2的拉普拉斯金字塔从底层往上数的第六层图像。Here, L5(fusion) refers to the sixth layer, counted from the bottom up, of the fused Laplacian pyramid. Weight(Figure 1) refers to the weight of Figure 1, and Weight(Figure 2) refers to the weight of Figure 2. L5(Figure 1) refers to the sixth layer, counted from the bottom up, of the Laplacian pyramid of Figure 1, and L5(Figure 2) refers to the sixth layer, counted from the bottom up, of the Laplacian pyramid of Figure 2.
其中，每张图像的权重可以根据景深、模糊程度等参数调节。例如，模糊程度低的区域权重大，模糊程度高的区域权重小；景深小的区域权重大，景深大的区域权重小。The weight of each image can be adjusted according to parameters such as depth of field and degree of blur. For example, regions with a low degree of blur are given large weights and regions with a high degree of blur are given small weights; regions with a small depth of field are given large weights and regions with a large depth of field are given small weights.
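A minimal sketch of operation 1306 with scalar per-image weights and float-valued pyramid layers (per-region weight maps derived from blur level or depth of field would be a direct extension):

    def fuse_pyramids(lps, weights):
        """Weighted sum of the Laplacian pyramids of all images, level by level,
        e.g. L5(fusion) = Weight(img1)*L5(img1) + Weight(img2)*L5(img2)."""
        fused = []
        for layers in zip(*lps):  # tuples of same-level layers across images
            fused.append(sum(w * layer for w, layer in zip(weights, layers)))
        return fused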
操作1308,根据融合后的拉普拉斯金字塔进行重建处理,得到全准焦图像。In operation 1308, reconstruction processing is performed according to the fused Laplacian pyramid to obtain a fully in-focus image.
具体地，电子设备从顶层往底层进行重建。电子设备可将融合后的拉普拉斯金字塔与高斯金字塔中的顶层图像进行处理，即进行重建，得到全准焦图像。例如，R5(fusion)=L5(fusion)+G5。Specifically, the electronic device performs reconstruction from the top layer down to the bottom layer. The electronic device processes the fused Laplacian pyramid together with the top-layer image of the Gaussian pyramid, that is, performs reconstruction, to obtain the fully in-focus image. For example, R5(fusion) = L5(fusion) + G5.
其中，G5可以通过对每个目标相位差对应的高斯金字塔的G5层图像进行融合(fuse)后得到。L5(fusion)为融合后的拉普拉斯金字塔的L5层。R5(fusion)为重建(Reconstruction)得到的第5层图像。Here, G5 can be obtained by fusing the G5-layer images of the Gaussian pyramids corresponding to the target phase differences. L5(fusion) is the L5 layer of the fused Laplacian pyramid, and R5(fusion) is the layer-5 image obtained by reconstruction.
再将R5(fusion)上采样,那么R4=R5(上采样)+L4(fusion)。Then R5 (fusion) is up-sampled, then R4 = R5 (up-sampling) + L4 (fusion).
其中,L4(fusion)为融合后的拉普拉斯金字塔的L4层。以此类推,可以得到最后的合成结果R0,即为全准焦图。Among them, L4 (fusion) is the L4 layer of the Laplace pyramid after fusion. By analogy, the final synthesis result R0 can be obtained, which is the full in-focus image.
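Operation 1308 can then be sketched as the top-down loop R_i = up(R_{i+1}) + L_i(fusion), under the same size assumptions as above:

    import cv2

    def reconstruct(fused_lp):
        """Start from the fused top (G5) layer, then repeatedly up-sample and
        add the next fused Laplacian level until R0, the all-in-focus image."""
        r = fused_lp[-1]
        for lap in reversed(fused_lp[:-1]):  # L4, L3, ..., L0
            r = cv2.add(cv2.pyrUp(r), lap)
        return r  # float32; clip/convert as needed for display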
本实施例中的图像处理方法，对每个目标相位差对应的图像进行卷积和采样处理，当满足预设迭代条件时，得到每个目标相位差对应的图像的高斯金字塔；根据每个目标相位差对应的图像的高斯金字塔中每一层图像进行处理，得到每个目标相位差对应的图像的拉普拉斯金字塔；将每个目标相位差对应的图像的拉普拉斯金字塔进行融合，得到融合后的拉普拉斯金字塔；根据融合后的拉普拉斯金字塔进行重建处理，得到全准焦图像。这样能根据低频的轮廓和高频细节部分进行图像合成，使各个区域之间的边界更加自然，提高图像的真实性以及清晰度。In the image processing method of this embodiment, convolution and sampling processing is performed on the image corresponding to each target phase difference, and when the preset iteration condition is met, the Gaussian pyramid of the image corresponding to each target phase difference is obtained; each layer of the Gaussian pyramid is processed to obtain the Laplacian pyramid of the image corresponding to each target phase difference; the Laplacian pyramids of the images corresponding to the target phase differences are fused to obtain a fused Laplacian pyramid; and reconstruction is performed according to the fused Laplacian pyramid to obtain a fully in-focus image. The image can thus be synthesized from the low-frequency contours and the high-frequency details, making the boundaries between regions more natural and improving the authenticity and clarity of the image.
在一个实施例中,如图14所示,为另一个实施例中合成得到全准焦图的流程示意图。根据每个目标相位差对应的图像进行合成,得到全准焦图像,包括:In one embodiment, as shown in FIG. 14, it is a schematic flow chart of synthesizing to obtain a full in-focus image in another embodiment. Synthesize the images corresponding to the phase difference of each target to obtain a fully in-focus image, including:
操作1402,提取每个目标相位差对应的图像的特征。In operation 1402, the feature of the image corresponding to each target phase difference is extracted.
具体地,图15为又一个实施例中合成得到全准焦图的流程示意图。电子设备采用卷积神经网络对每个目标相位差对应的图像进行卷积处理,并进行特征提取。例如,图15中对图像1进行卷积→特征提取→卷积。对图像2进行卷积→特征提取→卷积。Specifically, FIG. 15 is a schematic flowchart of synthesizing to obtain a full in-focus image in another embodiment. The electronic device uses a convolutional neural network to perform convolution processing on the image corresponding to each target phase difference, and perform feature extraction. For example, in FIG. 15, convolution→feature extraction→convolution is performed on image 1. Perform convolution→feature extraction→convolution on image 2.
操作1404,将每个目标相位差对应的图像的特征融合,得到第一图像特征。In operation 1404, the features of the image corresponding to each target phase difference are fused to obtain the first image feature.
具体地，电子设备将每个目标相位差对应的图像的特征进行融合，并通过激活函数计算，得到第一图像特征。Specifically, the electronic device fuses the features of the images corresponding to the target phase differences and applies an activation function to obtain the first image feature.
操作1406,对每个目标相位差对应的图像进行平均处理,得到平均图像。In operation 1406, the image corresponding to each target phase difference is averaged to obtain an average image.
具体地,电子设备对每个目标相位差对应的图像的亮度值进行平均处理,得到平均图像。Specifically, the electronic device performs averaging processing on the brightness value of the image corresponding to each target phase difference to obtain an average image.
操作1408,根据平均图像以及第一图像特征进行特征提取,得到第二图像特征。Operation 1408: Perform feature extraction according to the average image and the first image feature to obtain the second image feature.
具体地,电子设备根据平均图像以及第一图像特征进行特征提取,得到第二图像特征。Specifically, the electronic device performs feature extraction according to the average image and the first image feature to obtain the second image feature.
操作1410,根据第二图像特征以及平均图像进行特征重建,得到全准焦图像。Operation 1410: Perform feature reconstruction according to the second image feature and the average image to obtain a fully in-focus image.
具体地,电子设备根据第二图像特征以及平均图像进行特征重建,得到全准焦图像。Specifically, the electronic device performs feature reconstruction according to the second image feature and the average image to obtain a fully in-focus image.
本申请实施例中的图像处理方法，提取每个目标相位差对应的图像的特征，将每个目标相位差对应的图像的特征融合，得到第一图像特征；对每个目标相位差对应的图像进行平均处理，得到平均图像；根据平均图像以及第一图像特征进行特征提取，得到第二图像特征；根据第二图像特征以及平均图像进行特征重建，得到全准焦图像。这样能够采用神经网络的方式对图像进行合成，提高图像合成的准确性以及清晰度。In the image processing method of this embodiment of the application, the features of the image corresponding to each target phase difference are extracted and fused to obtain a first image feature; the images corresponding to the target phase differences are averaged to obtain an average image; feature extraction is performed according to the average image and the first image feature to obtain a second image feature; and feature reconstruction is performed according to the second image feature and the average image to obtain a fully in-focus image. Images can thus be synthesized by means of a neural network, improving the accuracy and clarity of image synthesis.
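One way to realize operations 1402 to 1410 is a small convolutional network. The sketch below (PyTorch; the layer widths, kernel sizes, two-image case, and single-channel inputs are all assumptions, not the disclosed architecture) mirrors the pipeline: per-image feature extraction, fusion into the first image feature, the average image, refinement into the second image feature, and feature reconstruction:

    import torch
    import torch.nn as nn

    class FusionNet(nn.Module):
        def __init__(self, ch=16):
            super().__init__()
            # Per-image path: convolution -> feature extraction -> convolution.
            self.extract = nn.Sequential(
                nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1))
            # Fuse the per-image features into the first image feature.
            self.fuse = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU())
            # Extract the second image feature from average image + first feature.
            self.refine = nn.Sequential(nn.Conv2d(ch + 1, ch, 3, padding=1), nn.ReLU())
            # Feature reconstruction back to a single all-in-focus channel.
            self.rebuild = nn.Conv2d(ch + 1, 1, 3, padding=1)

        def forward(self, img1, img2):
            f1, f2 = self.extract(img1), self.extract(img2)
            first = self.fuse(torch.cat([f1, f2], dim=1))   # first image feature
            avg = (img1 + img2) / 2                         # average image
            second = self.refine(torch.cat([avg, first], dim=1))
            return self.rebuild(torch.cat([avg, second], dim=1))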
在一个实施例中,获取预览图像,包括:获取预览图像中的感兴趣区域;将预览图像划分为至少两个子区域,包括:将预览图像中的感兴趣区域划分为至少两个子区域。In one embodiment, acquiring the preview image includes: acquiring the region of interest in the preview image; and dividing the preview image into at least two subregions includes: dividing the region of interest in the preview image into at least two subregions.
其中，感兴趣区域（region of interest，ROI）是指在图像处理中，以方框、圆、椭圆、不规则多边形等方式从被处理的图像中勾勒出的需要处理的区域。感兴趣区域中可以包含背景以及物体。电子设备接收到在第一预览图像上的触发指令，根据触发指令获取用户选择的感兴趣区域。In image processing, a region of interest (ROI) is an area to be processed that is outlined from the image being processed with a box, circle, ellipse, irregular polygon, or the like. The region of interest may contain background as well as objects. The electronic device receives a trigger instruction on the first preview image and obtains the region of interest selected by the user according to the trigger instruction.
具体地,电子设备将感兴趣区域划分为至少两个子区域。电子设备可以将用户选择的感兴趣区域划分为N×N个子区域。或者,电子设备可以将用户选择的感兴趣区域划分为N×M个子区域等不限于此。N和M均为正整数。Specifically, the electronic device divides the region of interest into at least two sub-regions. The electronic device may divide the region of interest selected by the user into N×N sub-regions. Alternatively, the electronic device may divide the region of interest selected by the user into N×M sub-regions, etc., which is not limited thereto. Both N and M are positive integers.
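A minimal sketch of this division, assuming the ROI is given as an (x, y, width, height) box on the preview image (the function name and the remainder handling are assumptions):

    def divide_roi(image, roi, n=3, m=3):
        """Cut the user-selected region of interest into an N x M grid of
        sub-regions; any remainder pixels go to the last row/column of cells."""
        x, y, w, h = roi
        xs = [x + j * (w // m) for j in range(m)] + [x + w]
        ys = [y + i * (h // n) for i in range(n)] + [y + h]
        return [image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                for i in range(n) for j in range(m)]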
本申请实施例中的图像处理方法，获取预览图像中的感兴趣区域，将预览图像中的感兴趣区域划分为至少两个子区域，能够根据感兴趣区域进行对焦，从而保证感兴趣区域中的景物清晰，提高全准焦图像中感兴趣区域的图像清晰度。In the image processing method of this embodiment of the application, the region of interest in the preview image is acquired and divided into at least two sub-regions, and focusing can be performed according to the region of interest, thereby ensuring that the scene in the region of interest is clear and improving the image clarity of the region of interest in the fully in-focus image.
在一个实施例中,根据每个子区域对应的相位差确定至少两个目标相位差,包括:获取场景模 式;根据场景模式确定至少两个目标相位差。In one embodiment, determining the at least two target phase differences according to the phase difference corresponding to each sub-region includes: acquiring a scene mode; and determining the at least two target phase differences according to the scene mode.
具体地,每种场景模式可对应不同类型的目标相位差。场景模式可以是夜景模式、全景模式等不限于此。例如,A场景模式对应的目标相位差为前景相位差和背景相位差两个。B场景模式对应的目标相位差为前景相位差、中位数相位差和后景相位差等不限于此。Specifically, each scene mode can correspond to different types of target phase differences. The scene mode may be a night scene mode, a panoramic mode, etc., and is not limited thereto. For example, the target phase difference corresponding to the A scene mode is a foreground phase difference and a background phase difference. The target phase difference corresponding to the B scene mode is the foreground phase difference, the median phase difference, and the background phase difference, etc., which are not limited to this.
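A minimal sketch of such a mapping (the mode names and the choice of statistics are assumptions; following the convention elsewhere in the application, the minimum phase difference plays the foreground role and the maximum the background role):

    # Hypothetical table: which statistics of the per-sub-region phase
    # differences serve as target phase differences in each scene mode.
    SCENE_TARGETS = {
        "scene_a": ("foreground", "background"),
        "scene_b": ("foreground", "median", "background"),
    }

    def targets_for_scene(mode, phase_differences):
        pds = sorted(phase_differences)
        stats = {
            "foreground": pds[0],           # minimum: nearest subject
            "median": pds[len(pds) // 2],
            "background": pds[-1],          # maximum: farthest subject
        }
        return [stats[name] for name in SCENE_TARGETS[mode]]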
本申请实施例中的图像处理方法，获取场景模式，根据场景模式确定至少两个目标相位差，能够根据不同场景模式快速确定目标相位差，达到不同场景对应的效果，提高图像处理效率以及图像效果的清晰度。In the image processing method of this embodiment of the application, the scene mode is acquired and at least two target phase differences are determined according to the scene mode. The target phase differences can thus be determined quickly for different scene modes, achieving the effect corresponding to each scene and improving image processing efficiency and the clarity of the image effect.
应该理解的是,虽然图2、10、13和14的流程图中的各个操作按照箭头的指示依次显示,但是这些操作并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些操作的执行并没有严格的顺序限制,这些操作可以以其它的顺序执行。而且,图2、10、13和14中的至少一部分操作可以包括多个子操作或者多个阶段,这些子操作或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子操作或者阶段的执行顺序也不必然是依次进行,而是可以与其它操作或者其它操作的子操作或者阶段的至少一部分轮流或者交替地执行。It should be understood that although the various operations in the flowcharts of FIGS. 2, 10, 13 and 14 are displayed in sequence as indicated by the arrows, these operations are not necessarily performed in sequence in the order indicated by the arrows. Unless there is a clear description in this article, there is no strict order restriction on the execution of these operations, and these operations can be executed in other orders. Moreover, at least part of the operations in Figures 2, 10, 13 and 14 may include multiple sub-operations or multiple stages. These sub-operations or stages are not necessarily executed at the same time, but can be executed at different times. The execution order of the sub-operations or stages is not necessarily performed sequentially, but may be executed alternately or alternately with other operations or at least a part of the sub-operations or stages of other operations.
图16为一个实施例的图像处理装置的结构框图。如图16所示,一种图像处理装置,包括预览图像获取模块1602、划分模块1604、相位差获取模块1606、对焦模块1608和合成模块1610,其中:Fig. 16 is a structural block diagram of an image processing apparatus according to an embodiment. As shown in FIG. 16, an image processing device includes a preview image acquisition module 1602, a division module 1604, a phase difference acquisition module 1606, a focus module 1608, and a synthesis module 1610, in which:
预览图像获取模块1602,用于获取预览图像。The preview image acquisition module 1602 is used to acquire a preview image.
划分模块1604,用于将预览图像划分为至少两个子区域。The dividing module 1604 is configured to divide the preview image into at least two sub-areas.
相位差获取模块1606,用于获取至少两个子区域中每个子区域对应的相位差。The phase difference acquiring module 1606 is configured to acquire the phase difference corresponding to each of the at least two sub-regions.
相位差获取模块1606,还用于从每个子区域对应的相位差中确定至少两个目标相位差,至少两个目标相位差中包括目标前景相位差和目标背景相位差。The phase difference acquisition module 1606 is further configured to determine at least two target phase differences from the phase differences corresponding to each sub-region, and the at least two target phase differences include the target foreground phase difference and the target background phase difference.
对焦模块1608,用于根据每个目标相位差进行对焦,得到每个目标相位差对应的图像。The focusing module 1608 is used for focusing according to the phase difference of each target to obtain an image corresponding to the phase difference of each target.
合成模块1610,用于根据每个目标相位差对应的图像进行合成,得到全准焦图像。The synthesizing module 1610 is used to synthesize the images corresponding to each target phase difference to obtain a fully in-focus image.
本实施例中的图像处理装置，获取预览图像，将预览图像划分为至少两个子区域，获取至少两个子区域中每个子区域对应的相位差，从每个子区域对应的相位差中确定至少两个目标相位差，至少两个目标相位差中包括目标前景相位差和目标背景相位差，根据每个目标相位差进行对焦，得到每个目标相位差对应的图像，能够获取至少两张处于不同焦点下的图像，其中一张是背景准焦图像，一张是前景准焦图像，根据每个目标相位差对应的图像进行合成，得到全准焦图像，能够得到失焦区域较少的图像，提高图像的清晰度。The image processing apparatus of this embodiment acquires a preview image, divides the preview image into at least two sub-regions, acquires the phase difference corresponding to each of the at least two sub-regions, and determines at least two target phase differences, including a target foreground phase difference and a target background phase difference, from the phase differences corresponding to the sub-regions. Focusing is performed according to each target phase difference to obtain an image corresponding to each target phase difference, so that at least two images at different focal points can be acquired, one being a background in-focus image and one being a foreground in-focus image. The images corresponding to the target phase differences are synthesized to obtain a fully in-focus image, yielding an image with fewer out-of-focus areas and improved clarity.
在一个实施例中，相位差获取模块1606用于将至少两个子区域中每个子区域对应的相位差划分为前景相位差集合和背景相位差集合；获取前景相位差集合对应的第一相位差均值；获取背景相位差集合对应的第二相位差均值；将第一相位差均值作为目标前景相位差；将第二相位差均值作为目标背景相位差。In one embodiment, the phase difference acquisition module 1606 is configured to divide the phase difference corresponding to each of the at least two sub-regions into a foreground phase difference set and a background phase difference set; obtain the first phase difference mean corresponding to the foreground phase difference set; obtain the second phase difference mean corresponding to the background phase difference set; use the first phase difference mean as the target foreground phase difference; and use the second phase difference mean as the target background phase difference.
本实施例中的图像处理装置，将至少两个子区域中每个子区域对应的相位差划分为前景相位差集合和背景相位差集合，获取前景相位差集合对应的第一相位差均值，获取背景相位差集合对应的第二相位差均值，将第一相位差均值作为目标前景相位差，将第二相位差均值作为目标背景相位差，能够根据均值获取前景准焦图像和背景准焦图像，提高图像的清晰度。The image processing apparatus of this embodiment divides the phase difference corresponding to each of the at least two sub-regions into a foreground phase difference set and a background phase difference set, obtains the first phase difference mean corresponding to the foreground phase difference set and the second phase difference mean corresponding to the background phase difference set, and uses the first phase difference mean as the target foreground phase difference and the second phase difference mean as the target background phase difference. A foreground in-focus image and a background in-focus image can thus be acquired according to the mean values, improving the clarity of the image.
在一个实施例中，相位差获取模块1606用于排除子区域对应的相位差中的最大相位差，得到剩余相位差集合；将剩余相位差集合划分为前景相位差集合和背景相位差集合。In one embodiment, the phase difference acquisition module 1606 is configured to exclude the maximum phase difference among the phase differences corresponding to the sub-regions to obtain a remaining phase difference set, and to divide the remaining phase difference set into a foreground phase difference set and a background phase difference set.
本实施例中的图像处理装置，由于最远的背景细节往往不重要，排除子区域对应的相位差中的最大相位差，得到剩余相位差集合，能够排除掉最远的背景；将剩余相位差集合划分为前景相位差集合和背景相位差集合，根据均值进行对焦，能够提高图像的清晰度。In the image processing apparatus of this embodiment, since the details of the farthest background are often unimportant, the maximum phase difference among the phase differences corresponding to the sub-regions is excluded to obtain a remaining phase difference set, so that the farthest background can be excluded. The remaining phase difference set is divided into a foreground phase difference set and a background phase difference set, and focusing based on the mean values can improve the clarity of the image.
在一个实施例中,相位差获取模块1606用于获取至少两个子区域的相位差中的最大相位差以及最小相位差;将最小相位差作为前景相位差;将最大相位差作为背景相位差。In one embodiment, the phase difference obtaining module 1606 is configured to obtain the maximum phase difference and the minimum phase difference among the phase differences of at least two sub-regions; the minimum phase difference is regarded as the foreground phase difference; and the maximum phase difference is regarded as the background phase difference.
本实施例中的图像处理装置，获取至少两个子区域的相位差中的最大相位差以及最小相位差，将最小相位差作为前景相位差，将最大相位差作为背景相位差，能够仅获取两张图像进行合成，在提高图像清晰度的同时提高图像处理效率。The image processing apparatus of this embodiment obtains the maximum phase difference and the minimum phase difference among the phase differences of the at least two sub-regions, and uses the minimum phase difference as the foreground phase difference and the maximum phase difference as the background phase difference, so that only two images need to be acquired for synthesis, improving image processing efficiency while improving image clarity.
在一个实施例中，电子设备包括图像传感器，图像传感器包括阵列排布的多个像素点组，每个所述像素点组包括阵列排布的M*N个像素点；每个像素点对应一个感光单元，其中，M和N均为大于或等于2的自然数。相位差获取模块1606用于根据每个像素点组包括的像素点的亮度值获取目标亮度图；对目标亮度图进行切分处理，根据切分处理的结果得到第一切分亮度图和第二切分亮度图；根据第一切分亮度图和第二切分亮度图中相互匹配的像素的位置差异，确定相互匹配的像素的相位差；根据相互匹配的像素的相位差确定至少两个子区域中每个子区域对应的相位差。In an embodiment, the electronic device includes an image sensor, the image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M*N pixel points arranged in an array; each pixel point corresponds to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The phase difference acquisition module 1606 is configured to: acquire a target brightness map according to the brightness values of the pixel points included in each pixel point group; segment the target brightness map and obtain a first segmented brightness map and a second segmented brightness map according to the segmentation result; determine the phase difference of mutually matched pixels according to the position difference of the mutually matched pixels in the first segmented brightness map and the second segmented brightness map; and determine the phase difference corresponding to each of the at least two sub-regions according to the phase differences of the mutually matched pixels.
本实施例中的图像处理装置，通过根据图像传感器中每个像素点组包括的像素点的亮度值获取目标亮度图，在获取到目标亮度图后，对该目标亮度图进行切分处理，根据切分处理的结果得到第一切分亮度图和第二切分亮度图，接着，根据第一切分亮度图和第二切分亮度图中相互匹配的像素的位置差异，确定相互匹配的像素的相位差，而后，再根据相互匹配的像素的相位差确定至少两个子区域中每个子区域对应的相位差。这样，就可以利用图像传感器中每个像素点组包括的像素点的亮度值来获取该目标相位差图，因此，相较于利用稀疏设置的相位检测像素点来获取相位差的方式而言，本申请实施例中相互匹配的像素的相位差包含相对丰富的相位差信息，故而可以提高获取到的相位差精确度。The image processing apparatus of this embodiment acquires a target brightness map according to the brightness values of the pixel points included in each pixel point group of the image sensor; after the target brightness map is acquired, it is segmented, and a first segmented brightness map and a second segmented brightness map are obtained according to the segmentation result. Then, the phase difference of mutually matched pixels is determined according to the position difference of the mutually matched pixels in the first and second segmented brightness maps, and the phase difference corresponding to each of the at least two sub-regions is determined according to the phase differences of the mutually matched pixels. In this way, the brightness values of the pixel points included in each pixel point group of the image sensor can be used to obtain the target phase difference map. Therefore, compared with obtaining phase differences using sparsely arranged phase detection pixels, the phase differences of the mutually matched pixels in this embodiment of the application contain relatively rich phase difference information, so the accuracy of the obtained phase differences can be improved.
在一个实施例中，相位差获取模块1606用于对目标亮度图进行切分处理，得到多个亮度图区域，每个亮度图区域包括目标亮度图中的一行像素，或者，每个亮度图区域包括目标亮度图中的一列像素；从多个亮度图区域中获取多个第一亮度图区域和多个第二亮度图区域，第一亮度图区域包括目标亮度图中偶数行的像素，或者，第一亮度图区域包括目标亮度图中偶数列的像素，第二亮度图区域包括目标亮度图中奇数行的像素，或者，第二亮度图区域包括目标亮度图中奇数列的像素；利用多个第一亮度图区域组成第一切分亮度图，利用多个第二亮度图区域组成第二切分亮度图。In one embodiment, the phase difference acquisition module 1606 is configured to: segment the target brightness map to obtain multiple brightness map areas, each brightness map area including a row of pixels in the target brightness map or a column of pixels in the target brightness map; acquire multiple first brightness map areas and multiple second brightness map areas from the multiple brightness map areas, the first brightness map areas including the pixels in even-numbered rows of the target brightness map or the pixels in even-numbered columns of the target brightness map, and the second brightness map areas including the pixels in odd-numbered rows of the target brightness map or the pixels in odd-numbered columns of the target brightness map; and form the first segmented brightness map from the multiple first brightness map areas and the second segmented brightness map from the multiple second brightness map areas.
本申请实施例中的图像处理装置,不需要遮挡像素点来获取相位差,通过亮度切分的方式得到相对丰富的相位差信息,提高获取到的相位差精确度。The image processing device in the embodiment of the present application does not need to shield the pixels to obtain the phase difference, and obtains relatively rich phase difference information by means of brightness segmentation, which improves the accuracy of the obtained phase difference.
在一个实施例中，相位差获取模块1606用于当检测到子区域中包含水平线条时，将垂直相位差作为子区域对应的相位差；当检测到子区域中不包含水平线条时，将水平相位差作为子区域对应的相位差。In one embodiment, the phase difference acquisition module 1606 is configured to use the vertical phase difference as the phase difference corresponding to a sub-region when it is detected that the sub-region contains horizontal lines, and to use the horizontal phase difference as the phase difference corresponding to the sub-region when it is detected that the sub-region does not contain horizontal lines.
本申请实施例中的图像处理装置，由于摄像头的一些特性，导致成像时会存在像差，即线条可能存在拖影等问题。那么当检测到子区域中包含水平线条时，采用垂直相位差作为子区域对应的相位差；当检测到子区域中包含垂直线条时，采用水平相位差作为子区域对应的相位差，能够提高相位差获取精度，从而提高图像清晰度。In the image processing apparatus of this embodiment of the application, due to certain characteristics of the camera, aberrations occur during imaging; that is, lines may exhibit problems such as smearing. Therefore, when it is detected that a sub-region contains horizontal lines, the vertical phase difference is used as the phase difference corresponding to the sub-region; when it is detected that a sub-region contains vertical lines, the horizontal phase difference is used as the phase difference corresponding to the sub-region. This improves the accuracy of phase difference acquisition and thereby improves image clarity.
在一个实施例中,对焦模块1608用于将每个目标相位差对应的子区域作为对焦区域,得到每个目标相位差对应的图像。In one embodiment, the focusing module 1608 is configured to use the sub-area corresponding to each target phase difference as a focus area to obtain an image corresponding to each target phase difference.
本实施例中的图像处理装置,通过将每个目标相位差对应的子区域作为对焦区域,得到每个目标相位差对应的图像,能够获取到不同焦点的图像进行合成,提高图像的清晰度。The image processing device in this embodiment uses the sub-area corresponding to each target phase difference as a focus area to obtain an image corresponding to each target phase difference, and can obtain images with different focal points for synthesis, thereby improving image clarity.
在一个实施例中，合成模块1610用于将每个目标相位差对应的图像划分为相同数量的子图像区域；获取每个子图像区域对应的清晰度；根据每个子图像区域对应的清晰度，确定相互匹配的子图像区域中清晰度最高的子图像区域；将清晰度最高的子图像区域进行合成，得到全准焦图像。In one embodiment, the synthesis module 1610 is configured to: divide the image corresponding to each target phase difference into the same number of sub-image areas; acquire the definition corresponding to each sub-image area; determine, according to the definition corresponding to each sub-image area, the sub-image area with the highest definition among the sub-image areas that match each other; and synthesize the sub-image areas with the highest definition to obtain a fully in-focus image.
本申请实施例中的图像处理装置，将每个目标相位差对应的图像划分为相同数量的子图像区域；获取每个子图像区域对应的清晰度；根据每个子图像区域对应的清晰度，确定相互匹配的子图像区域中清晰度最高的子图像区域；将清晰度最高的子图像区域进行合成，得到全准焦图像，能够快速得到全准焦图像，提高图像处理效率。In the image processing apparatus of this embodiment of the application, the image corresponding to each target phase difference is divided into the same number of sub-image areas; the definition corresponding to each sub-image area is acquired; the sub-image area with the highest definition among the matched sub-image areas is determined according to the definition corresponding to each sub-image area; and the sub-image areas with the highest definition are synthesized to obtain a fully in-focus image. A fully in-focus image can thus be obtained quickly, improving image processing efficiency.
在一个实施例中，合成模块1610用于对每个目标相位差对应的图像进行卷积和采样处理，当满足预设迭代条件时，得到每个目标相位差对应的图像的高斯金字塔；根据每个目标相位差对应的图像的高斯金字塔中每一层图像进行处理，得到每个目标相位差对应的图像的拉普拉斯金字塔；将每个目标相位差对应的图像的拉普拉斯金字塔进行融合，得到融合后的拉普拉斯金字塔；根据融合后的拉普拉斯金字塔进行重建处理，得到全准焦图像。In one embodiment, the synthesis module 1610 is configured to: perform convolution and sampling processing on the image corresponding to each target phase difference, and obtain the Gaussian pyramid of the image corresponding to each target phase difference when the preset iteration condition is met; process each layer of the Gaussian pyramid to obtain the Laplacian pyramid of the image corresponding to each target phase difference; fuse the Laplacian pyramids of the images corresponding to the target phase differences to obtain a fused Laplacian pyramid; and perform reconstruction according to the fused Laplacian pyramid to obtain a fully in-focus image.
本实施例中的图像处理装置，对每个目标相位差对应的图像进行卷积和采样处理，当满足预设迭代条件时，得到每个目标相位差对应的图像的高斯金字塔；根据每个目标相位差对应的图像的高斯金字塔中每一层图像进行处理，得到每个目标相位差对应的图像的拉普拉斯金字塔；将每个目标相位差对应的图像的拉普拉斯金字塔进行融合，得到融合后的拉普拉斯金字塔；根据融合后的拉普拉斯金字塔进行重建处理，得到全准焦图像，能使各个区域之间的边界更加自然，提高图像的真实性以及清晰度。In the image processing apparatus of this embodiment, convolution and sampling processing is performed on the image corresponding to each target phase difference, and when the preset iteration condition is met, the Gaussian pyramid of the image corresponding to each target phase difference is obtained; each layer of the Gaussian pyramid is processed to obtain the Laplacian pyramid of the image corresponding to each target phase difference; the Laplacian pyramids are fused to obtain a fused Laplacian pyramid; and reconstruction is performed according to the fused Laplacian pyramid to obtain a fully in-focus image. This makes the boundaries between regions more natural and improves the authenticity and clarity of the image.
在一个实施例中，合成模块1610用于提取每个目标相位差对应的图像的特征；将每个目标相位差对应的图像的特征融合，得到第一图像特征；对每个目标相位差对应的图像进行平均处理，得到平均图像；根据平均图像以及第一图像特征进行特征提取，得到第二图像特征；根据第二图像特征以及平均图像进行特征重建，得到全准焦图像。In one embodiment, the synthesis module 1610 is configured to: extract the features of the image corresponding to each target phase difference; fuse the features of the images corresponding to the target phase differences to obtain a first image feature; average the images corresponding to the target phase differences to obtain an average image; perform feature extraction according to the average image and the first image feature to obtain a second image feature; and perform feature reconstruction according to the second image feature and the average image to obtain a fully in-focus image.
本申请实施例中的图像处理装置，提取每个目标相位差对应的图像的特征，将每个目标相位差对应的图像的特征融合，得到第一图像特征；对每个目标相位差对应的图像进行平均处理，得到平均图像；根据平均图像以及第一图像特征进行特征提取，得到第二图像特征；根据第二图像特征以及平均图像进行特征重建，得到全准焦图像。这样能够采用神经网络的方式对图像进行合成，提高图像合成的准确性以及清晰度。In the image processing apparatus of this embodiment of the application, the features of the image corresponding to each target phase difference are extracted and fused to obtain a first image feature; the images corresponding to the target phase differences are averaged to obtain an average image; feature extraction is performed according to the average image and the first image feature to obtain a second image feature; and feature reconstruction is performed according to the second image feature and the average image to obtain a fully in-focus image. Images can thus be synthesized by means of a neural network, improving the accuracy and clarity of image synthesis.
在一个实施例中,预览图像获取模块1602用于获取预览图像中的感兴趣区域。划分模块1604用于将预览图像中的感兴趣区域划分为至少两个子区域。In one embodiment, the preview image acquisition module 1602 is used to acquire the region of interest in the preview image. The dividing module 1604 is configured to divide the region of interest in the preview image into at least two sub-regions.
本申请实施例中的图像处理装置，获取预览图像中的感兴趣区域，将预览图像中的感兴趣区域划分为至少两个子区域，能够根据感兴趣区域进行对焦，从而保证感兴趣区域中的景物清晰，提高全准焦图像中感兴趣区域的图像清晰度。In the image processing apparatus of this embodiment of the application, the region of interest in the preview image is acquired and divided into at least two sub-regions, and focusing can be performed according to the region of interest, thereby ensuring that the scene in the region of interest is clear and improving the image clarity of the region of interest in the fully in-focus image.
在一个实施例中,相位差获取模块1606用于获取场景模式;根据场景模式确定至少两个目标相位差。In one embodiment, the phase difference obtaining module 1606 is used to obtain a scene mode; and determine at least two target phase differences according to the scene mode.
本申请实施例中的图像处理装置，获取场景模式，根据场景模式确定至少两个目标相位差，能够根据不同场景模式快速确定目标相位差，达到不同场景对应的效果，提高图像处理效率以及图像效果的清晰度。In the image processing apparatus of this embodiment of the application, the scene mode is acquired and at least two target phase differences are determined according to the scene mode. The target phase differences can thus be determined quickly for different scene modes, achieving the effect corresponding to each scene and improving image processing efficiency and the clarity of the image effect.
上述图像处理装置中各个模块的划分仅用于举例说明,在其他实施例中,可将图像处理装置按照需要划分为不同的模块,以完成上述图像处理装置的全部或部分功能。The division of the modules in the above-mentioned image processing apparatus is only for illustration. In other embodiments, the image processing apparatus may be divided into different modules as required to complete all or part of the functions of the above-mentioned image processing apparatus.
关于图像处理装置的具体限定可以参见上文中对于图像处理方法的限定,在此不再赘述。上述图像处理装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。For the specific definition of the image processing device, please refer to the above definition of the image processing method, which will not be repeated here. Each module in the above-mentioned image processing device may be implemented in whole or in part by software, hardware, and a combination thereof. The above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
图17为一个实施例中电子设备的内部结构示意图。如图17所示,该电子设备包括通过系统总线连接的处理器和存储器。其中,该处理器用于提供计算和控制能力,支撑整个电子设备的运行。存储器可包括非易失性存储介质及内存储器。非易失性存储介质存储有操作系统和计算机程序。该计算机程序可被处理器所执行,以用于实现以下各个实施例所提供的一种图像处理方法。内存储器为非易失性存储介质中的操作系统计算机程序提供高速缓存的运行环境。该电子设备可以是手机、平板电脑或者个人数字助理或穿戴式设备等。Fig. 17 is a schematic diagram of the internal structure of an electronic device in an embodiment. As shown in FIG. 17, the electronic device includes a processor and a memory connected through a system bus. Among them, the processor is used to provide computing and control capabilities to support the operation of the entire electronic device. The memory may include a non-volatile storage medium and internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement an image processing method provided in the following embodiments. The internal memory provides a cached operating environment for the operating system computer program in the non-volatile storage medium. The electronic device can be a mobile phone, a tablet computer, or a personal digital assistant or a wearable device.
本申请实施例中提供的图像处理装置中的各个模块的实现可为计算机程序的形式。该计算机程序可在终端或服务器上运行。该计算机程序构成的程序模块可存储在终端或服务器的存储器上。该计算机程序被处理器执行时,实现本申请实施例中所描述方法的操作。The implementation of each module in the image processing apparatus provided in the embodiment of the present application may be in the form of a computer program. The computer program can be run on a terminal or a server. The program module composed of the computer program can be stored in the memory of the terminal or the server. When the computer program is executed by the processor, the operation of the method described in the embodiment of the present application is realized.
本申请实施例还提供了一种计算机可读存储介质。一个或多个包含计算机可执行指令的非易失性计算机可读存储介质,当所述计算机可执行指令被一个或多个处理器执行时,使得所述处理器执行图像处理方法的操作。The embodiment of the present application also provides a computer-readable storage medium. One or more non-volatile computer-readable storage media containing computer-executable instructions, when the computer-executable instructions are executed by one or more processors, cause the processors to perform the operations of the image processing method.
一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行图像处理方法。A computer program product containing instructions that, when run on a computer, causes the computer to execute an image processing method.
本申请所使用的对存储器、存储、数据库或其它介质的任何引用可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM),它用作外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDR SDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)。Any reference to memory, storage, database, or other media used in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which acts as external cache memory. As an illustration and not a limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous Link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。The above-mentioned embodiments only express several implementation manners of the present application, and the description is relatively specific and detailed, but it should not be understood as a limitation to the patent scope of the present application. It should be pointed out that for those of ordinary skill in the art, without departing from the concept of this application, several modifications and improvements can be made, and these all fall within the protection scope of this application. Therefore, the scope of protection of the patent of this application shall be subject to the appended claims.

Claims (20)

  1. 一种图像处理方法,其特征在于,应用于电子设备,包括:An image processing method, characterized in that it is applied to electronic equipment, and includes:
    获取预览图像;Obtain a preview image;
    将所述预览图像划分为至少两个子区域;Dividing the preview image into at least two sub-areas;
    获取所述至少两个子区域中每个子区域对应的相位差;Acquiring a phase difference corresponding to each of the at least two sub-regions;
    从所述每个子区域对应的相位差中确定至少两个目标相位差,所述至少两个目标相位差中包括目标前景相位差和目标背景相位差;Determining at least two target phase differences from the phase differences corresponding to each sub-region, where the at least two target phase differences include a target foreground phase difference and a target background phase difference;
    根据每个目标相位差进行对焦,得到所述每个目标相位差对应的图像;及Performing focusing according to the phase difference of each target to obtain an image corresponding to the phase difference of each target; and
    根据所述每个目标相位差对应的图像进行合成,得到全准焦图像。Synthesis is performed according to the images corresponding to the phase difference of each target to obtain a fully in-focus image.
  2. 根据权利要求1所述的方法，其特征在于，所述从所述每个子区域对应的相位差中确定至少两个目标相位差，所述至少两个目标相位差中包括目标前景相位差和目标背景相位差，包括：The method according to claim 1, wherein the determining at least two target phase differences from the phase differences corresponding to each sub-region, the at least two target phase differences including a target foreground phase difference and a target background phase difference, comprises:
    将所述至少两个子区域中每个子区域对应的相位差划分为前景相位差集合和背景相位差集合;Dividing the phase difference corresponding to each of the at least two sub-regions into a foreground phase difference set and a background phase difference set;
    获取所述前景相位差集合对应的第一相位差均值;Acquiring a first mean value of phase difference corresponding to the foreground phase difference set;
    获取所述背景相位差集合对应的第二相位差均值;Acquiring a second mean value of phase difference corresponding to the background phase difference set;
    将所述第一相位差均值作为所述目标前景相位差;Use the first mean value of the phase difference as the target foreground phase difference;
    将所述第二相位差均值作为所述目标背景相位差。The second mean value of the phase difference is used as the target background phase difference.
  3. 根据权利要求2所述的方法,其特征在于,所述方法还包括:The method according to claim 2, wherein the method further comprises:
    排除子区域对应的相位差中的最大相位差,得到剩余相位差集合;Excluding the largest phase difference among the phase differences corresponding to the sub-regions to obtain a residual phase difference set;
    所述将所述至少两个子区域中每个子区域对应的相位差划分为前景相位差集合和背景相位差集合,包括:The dividing the phase difference corresponding to each of the at least two sub-regions into a foreground phase difference set and a background phase difference set includes:
    将所述剩余相位差集合划分为前景相位差集合和背景相位差集合。The remaining phase difference set is divided into a foreground phase difference set and a background phase difference set.
  4. 根据权利要求1所述的方法，其特征在于，所述从所述每个子区域对应的相位差中确定至少两个目标相位差，所述至少两个目标相位差中包括前景相位差和背景相位差，包括：The method according to claim 1, wherein the determining at least two target phase differences from the phase differences corresponding to each sub-region, the at least two target phase differences including a foreground phase difference and a background phase difference, comprises:
    获取至少两个子区域的相位差中的最大相位差以及最小相位差;Obtaining the maximum phase difference and the minimum phase difference among the phase differences of the at least two sub-regions;
    将所述最小相位差作为前景相位差;Taking the minimum phase difference as the foreground phase difference;
    将所述最大相位差作为背景相位差。The maximum phase difference is regarded as the background phase difference.
  5. 根据权利要求1所述的方法，其特征在于，所述电子设备包括图像传感器，所述图像传感器包括阵列排布的多个像素点组，每个所述像素点组包括阵列排布的M*N个像素点；每个像素点对应一个感光单元，其中，M和N均为大于或等于2的自然数；The method according to claim 1, wherein the electronic device comprises an image sensor, the image sensor comprises a plurality of pixel point groups arranged in an array, and each of the pixel point groups comprises M*N pixel points arranged in an array; each pixel point corresponds to a photosensitive unit, wherein M and N are both natural numbers greater than or equal to 2;
    所述获取所述至少两个子区域中每个子区域对应的相位差,包括:The acquiring the phase difference corresponding to each of the at least two sub-regions includes:
    根据每个所述像素点组包括的像素点的亮度值获取目标亮度图;Acquiring a target brightness map according to the brightness value of the pixel points included in each pixel point group;
    对所述目标亮度图进行切分处理,根据切分处理的结果得到第一切分亮度图和第二切分亮度图;Performing segmentation processing on the target brightness map, and obtaining a first segmented brightness map and a second segmented brightness map according to the results of segmentation processing;
    根据所述第一切分亮度图和所述第二切分亮度图中相互匹配的像素的位置差异,确定所述相互匹配的像素的相位差;Determine the phase difference of the pixels that match each other according to the position difference of the pixels that match each other in the first split brightness map and the second split brightness map;
    根据所述相互匹配的像素的相位差确定所述至少两个子区域中每个子区域对应的相位差。The phase difference corresponding to each of the at least two sub-regions is determined according to the phase difference of the mutually matched pixels.
  6. 根据权利要求5所述的方法,其特征在于,所述对所述目标亮度图进行切分处理,根据切分处理的结果得到第一切分亮度图和第二切分亮度图,包括:The method according to claim 5, wherein the segmentation processing on the target brightness map, and obtaining the first segmented brightness map and the second segmented brightness map according to the results of the segmentation processing, comprises:
    对所述目标亮度图进行切分处理，得到多个亮度图区域，每个所述亮度图区域包括所述目标亮度图中的一行像素，或者，每个所述亮度图区域包括所述目标亮度图中的一列像素；Segmenting the target brightness map to obtain a plurality of brightness map regions, each of the brightness map regions comprising a row of pixels in the target brightness map, or each of the brightness map regions comprising a column of pixels in the target brightness map;
    从所述多个亮度图区域中获取多个第一亮度图区域和多个第二亮度图区域，所述第一亮度图区域包括所述目标亮度图中偶数行的像素，或者，所述第一亮度图区域包括所述目标亮度图中偶数列的像素，所述第二亮度图区域包括所述目标亮度图中奇数行的像素，或者，所述第二亮度图区域包括所述目标亮度图中奇数列的像素；Acquiring a plurality of first brightness map regions and a plurality of second brightness map regions from the plurality of brightness map regions, wherein the first brightness map regions comprise the pixels in even-numbered rows of the target brightness map, or the first brightness map regions comprise the pixels in even-numbered columns of the target brightness map, and the second brightness map regions comprise the pixels in odd-numbered rows of the target brightness map, or the second brightness map regions comprise the pixels in odd-numbered columns of the target brightness map;
    利用所述多个第一亮度图区域组成所述第一切分亮度图,利用所述多个第二亮度图区域组成所 述第二切分亮度图。The plurality of first brightness map regions are used to form the first segmented brightness map, and the plurality of second brightness map regions are used to form the second segmented brightness map.
  7. 根据权利要求1所述的方法，其特征在于，所述电子设备包括图像传感器，所述图像传感器包括阵列排布的多个像素点组，每个所述像素点组包括阵列排布的M*N个像素点；每个像素点对应一个感光单元，其中，M和N均为大于或等于2的自然数；所述每个子区域对应的相位差包括水平相位差和垂直相位差；The method according to claim 1, wherein the electronic device comprises an image sensor, the image sensor comprises a plurality of pixel point groups arranged in an array, and each of the pixel point groups comprises M*N pixel points arranged in an array; each pixel point corresponds to a photosensitive unit, wherein M and N are both natural numbers greater than or equal to 2; and the phase difference corresponding to each sub-region comprises a horizontal phase difference and a vertical phase difference;
    所述获取所述至少两个子区域中每个子区域对应的相位差,包括:The acquiring the phase difference corresponding to each of the at least two sub-regions includes:
    当检测到所述子区域中包含水平线条时,将所述垂直相位差作为所述子区域对应的相位差;When it is detected that the sub-region contains horizontal lines, use the vertical phase difference as the phase difference corresponding to the sub-region;
    当检测到所述子区域中不包含水平线条时,将所述水平相位差作为所述子区域对应的相位差。When it is detected that the sub-region does not contain horizontal lines, the horizontal phase difference is taken as the phase difference corresponding to the sub-region.
  8. The method according to claim 1, wherein the performing focusing according to each target phase difference to obtain the image corresponding to each target phase difference comprises:
    Using the sub-region corresponding to each target phase difference as a focus region, to obtain the image corresponding to each target phase difference.
  9. The method according to any one of claims 1 to 8, wherein the performing synthesis according to the image corresponding to each target phase difference to obtain the fully in-focus image comprises:
    Dividing the image corresponding to each target phase difference into the same number of sub-image regions;
    Acquiring the sharpness corresponding to each sub-image region;
    Determining, according to the sharpness corresponding to each sub-image region, the sub-image region with the highest sharpness among mutually matched sub-image regions; and
    Stitching and synthesizing the sub-image regions with the highest sharpness to obtain the fully in-focus image.
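For illustration, claim 9 amounts to a classical focus-stacking rule: grid the aligned images, score each cell, keep the sharpest. The Laplacian-variance score and the 8x8 grid are assumptions of this sketch:

```python
import numpy as np

def sharpness(tile: np.ndarray) -> float:
    """Variance of a discrete Laplacian response as a sharpness proxy."""
    t = tile.astype(np.float32)
    lap = (-4 * t[1:-1, 1:-1] + t[:-2, 1:-1] + t[2:, 1:-1]
           + t[1:-1, :-2] + t[1:-1, 2:])
    return float(lap.var())

def stitch_sharpest(images, grid=(8, 8)) -> np.ndarray:
    """Divide each aligned grayscale image into the same grid of sub-image
    regions and keep, per cell, the region from the sharpest image.
    Image dimensions are assumed divisible by the grid size."""
    h, w = images[0].shape
    gh, gw = h // grid[0], w // grid[1]
    out = np.empty_like(images[0])
    for r in range(grid[0]):
        for c in range(grid[1]):
            ys = slice(r * gh, (r + 1) * gh)
            xs = slice(c * gw, (c + 1) * gw)
            tiles = [img[ys, xs] for img in images]
            out[ys, xs] = tiles[int(np.argmax([sharpness(t) for t in tiles]))]
    return out
```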
  10. The method according to any one of claims 1 to 8, wherein the performing synthesis according to the image corresponding to each target phase difference to obtain the fully in-focus image comprises:
    Performing convolution and sampling processing on the image corresponding to each target phase difference, and obtaining a Gaussian pyramid of the image corresponding to each target phase difference when a preset iteration condition is met;
    Processing each layer of the Gaussian pyramid of the image corresponding to each target phase difference to obtain a Laplacian pyramid of the image corresponding to each target phase difference;
    Fusing the Laplacian pyramids of the images corresponding to the target phase differences to obtain a fused Laplacian pyramid; and
    Performing reconstruction processing according to the fused Laplacian pyramid to obtain the fully in-focus image.
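A sketch of this pyramid fusion for two grayscale images, using OpenCV's pyrDown/pyrUp for the convolution-and-sampling step; the max-magnitude fusion rule and the level count are assumptions (the claim leaves both open):

```python
import cv2
import numpy as np

def laplacian_pyramid(img: np.ndarray, levels: int = 4):
    gauss = [img.astype(np.float32)]
    for _ in range(levels):                  # repeated blur + downsample
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):                  # band-pass residual per level
        size = (gauss[i].shape[1], gauss[i].shape[0])
        lap.append(gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=size))
    lap.append(gauss[-1])                    # keep the coarsest level
    return lap

def fuse_pyramids(pyr_a, pyr_b):
    """Per level, keep the coefficient with the larger magnitude."""
    return [np.where(np.abs(a) >= np.abs(b), a, b)
            for a, b in zip(pyr_a, pyr_b)]

def reconstruct(pyr) -> np.ndarray:
    img = pyr[-1]
    for lap in reversed(pyr[:-1]):           # collapse from coarse to fine
        size = (lap.shape[1], lap.shape[0])
        img = cv2.pyrUp(img, dstsize=size) + lap
    return np.clip(img, 0, 255).astype(np.uint8)
```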
  11. The method according to any one of claims 1 to 8, wherein the performing synthesis according to the image corresponding to each target phase difference to obtain the fully in-focus image comprises:
    Extracting features of the image corresponding to each target phase difference;
    Fusing the features of the images corresponding to the target phase differences to obtain a first image feature;
    Averaging the images corresponding to the target phase differences to obtain an average image;
    Performing feature extraction according to the average image and the first image feature to obtain a second image feature; and
    Performing feature reconstruction according to the second image feature and the average image to obtain the fully in-focus image.
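Claim 11 reads most naturally as a learned encoder-fusion-decoder pipeline; purely for illustration, gradient magnitude can stand in for the extracted features, with the claimed data flow mimicked step by step (every operation below is an assumption, not the patent's network):

```python
import numpy as np

def extract_features(img: np.ndarray) -> np.ndarray:
    """Gradient magnitude as a hand-crafted stand-in for learned features."""
    gy, gx = np.gradient(img.astype(np.float32))
    return np.hypot(gx, gy)

def feature_fusion(images) -> np.ndarray:
    per_image = [extract_features(im) for im in images]
    first_feature = np.maximum.reduce(per_image)       # fused feature map
    average = np.mean([im.astype(np.float32) for im in images], axis=0)
    # "Second feature": extraction on the average, modulated by the fused map.
    second_feature = extract_features(average) * (1.0 + first_feature / 255.0)
    # "Reconstruction": re-inject detail from the locally sharpest input,
    # with a gain derived from the second feature map.
    stack = np.stack([im.astype(np.float32) for im in images])
    sharpest = np.argmax(np.stack(per_image), axis=0)
    detail = np.take_along_axis(stack, sharpest[None], axis=0)[0] - average
    gain = second_feature / (second_feature.max() + 1e-6)
    return np.clip(average + gain * detail, 0, 255).astype(np.uint8)
```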
  12. The method according to any one of claims 1 to 8, wherein the acquiring the preview image comprises:
    Acquiring a region of interest in the preview image;
    The dividing the preview image into at least two sub-regions comprises:
    Dividing the region of interest in the preview image into at least two sub-regions.
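The claim leaves the division scheme open; a uniform grid over the region of interest is one simple choice (the 4x4 grid and the (x, y, w, h) convention are assumptions):

```python
def divide_roi(roi, rows: int = 4, cols: int = 4):
    """Divide a region of interest (x, y, width, height) into a
    rows x cols grid of sub-regions."""
    x, y, w, h = roi
    return [(x + c * (w // cols), y + r * (h // rows), w // cols, h // rows)
            for r in range(rows) for c in range(cols)]
```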
  13. An image processing device, characterized in that it comprises:
    A preview image acquisition module, configured to acquire a preview image;
    A division module, configured to divide the preview image into at least two sub-regions;
    A phase difference acquisition module, configured to acquire a phase difference corresponding to each of the at least two sub-regions;
    The phase difference acquisition module being further configured to determine at least two target phase differences from the phase differences corresponding to the sub-regions, the at least two target phase differences comprising a target foreground phase difference and a target background phase difference;
    A focusing module, configured to perform focusing according to each target phase difference to obtain an image corresponding to each target phase difference; and
    A synthesis module, configured to perform synthesis according to the image corresponding to each target phase difference to obtain a fully in-focus image.
  14. The device according to claim 13, wherein the phase difference acquisition module is further configured to:
    Divide the phase differences corresponding to the at least two sub-regions into a foreground phase difference set and a background phase difference set;
    Acquire a first phase difference mean value corresponding to the foreground phase difference set;
    Acquire a second phase difference mean value corresponding to the background phase difference set;
    Use the first phase difference mean value as the target foreground phase difference; and
    Use the second phase difference mean value as the target background phase difference.
  15. The device according to claim 14, wherein the phase difference acquisition module is configured to exclude the maximum phase difference from the phase differences corresponding to the sub-regions to obtain a remaining phase difference set, and to divide the remaining phase difference set into the foreground phase difference set and the background phase difference set.
  16. The device according to claim 13, wherein the phase difference acquisition module is configured to:
    Acquire the maximum phase difference and the minimum phase difference among the phase differences of the at least two sub-regions;
    Use the minimum phase difference as the foreground phase difference; and
    Use the maximum phase difference as the background phase difference.
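Claims 14 to 16 give alternative ways to pick the target phase differences. A sketch of the mean-based variant of claim 14, with the zero-threshold split into foreground and background sets as an assumption (claim 15 would first drop the maximum phase difference; claim 16 simply takes pds.min() and pds.max()):

```python
import numpy as np

def target_phase_differences(pds: np.ndarray, threshold: float = 0.0):
    """Partition sub-region phase differences into foreground/background
    sets and use each set's mean as the target phase difference."""
    foreground = pds[pds <= threshold]
    background = pds[pds > threshold]
    return float(foreground.mean()), float(background.mean())
```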
  17. The device according to claim 13, wherein the electronic device comprises an image sensor, the image sensor comprises a plurality of pixel point groups arranged in an array, each pixel point group comprises M*N pixel points arranged in an array, each pixel point corresponds to one photosensitive unit, and M and N are both natural numbers greater than or equal to 2;
    The phase difference acquisition module is configured to:
    Acquire a target brightness map according to brightness values of the pixel points included in each pixel point group;
    Perform segmentation processing on the target brightness map, and obtain a first segmented brightness map and a second segmented brightness map according to a result of the segmentation processing;
    Determine, according to position differences of mutually matched pixels in the first segmented brightness map and the second segmented brightness map, phase differences of the mutually matched pixels; and
    Determine, according to the phase differences of the mutually matched pixels, the phase difference corresponding to each of the at least two sub-regions.
  18. The device according to claim 17, wherein the phase difference acquisition module is configured to:
    Perform segmentation processing on the target brightness map to obtain a plurality of brightness map regions, wherein each brightness map region comprises one row of pixels in the target brightness map, or each brightness map region comprises one column of pixels in the target brightness map;
    Acquire a plurality of first brightness map regions and a plurality of second brightness map regions from the plurality of brightness map regions, wherein each first brightness map region comprises pixels in an even-numbered row or an even-numbered column of the target brightness map, and each second brightness map region comprises pixels in an odd-numbered row or an odd-numbered column of the target brightness map; and
    Compose the first segmented brightness map from the plurality of first brightness map regions, and compose the second segmented brightness map from the plurality of second brightness map regions.
  19. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the image processing method according to any one of claims 1 to 12.
  20. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 12.
PCT/CN2020/126122 2019-11-12 2020-11-03 Image processing method and apparatus, electronic device, and computer readable storage medium WO2021093635A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911101432.0 2019-11-12
CN201911101432.0A CN112866549B (en) 2019-11-12 2019-11-12 Image processing method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2021093635A1

Family

ID=75912513

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/126122 WO2021093635A1 (en) 2019-11-12 2020-11-03 Image processing method and apparatus, electronic device, and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN112866549B (en)
WO (1) WO2021093635A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113259596B (en) * 2021-07-14 2021-10-08 北京小米移动软件有限公司 Image generation method, phase detection focusing method and device
CN114040081A (en) * 2021-11-30 2022-02-11 维沃移动通信有限公司 Image sensor, camera module, electronic device, focusing method and medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2277855A1 (en) * 1999-07-14 2001-01-14 Solvision Method and system of measuring the height of weld beads in a printed circuit
CN105120154A (en) * 2015-08-20 2015-12-02 深圳市金立通信设备有限公司 Image processing method and terminal
CN106060407A (en) * 2016-07-29 2016-10-26 努比亚技术有限公司 Focusing method and terminal
CN106572305A (en) * 2016-11-03 2017-04-19 乐视控股(北京)有限公司 Image shooting method, image processing method, apparatuses and electronic device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012215700A (en) * 2011-03-31 2012-11-08 Fujifilm Corp Imaging device and imaging program
CN106031154A (en) * 2014-02-19 2016-10-12 三星电子株式会社 Method for processing image and electronic apparatus therefor
CN105100615A (en) * 2015-07-24 2015-11-25 青岛海信移动通信技术股份有限公司 Image preview method, apparatus and terminal
CN106454289A (en) * 2016-11-29 2017-02-22 广东欧珀移动通信有限公司 Control method, control device and electronic device
CN110166680A (en) * 2019-06-28 2019-08-23 Oppo广东移动通信有限公司 Equipment imaging method, device, storage medium and electronic equipment

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113468702A (en) * 2021-07-22 2021-10-01 久瓴(江苏)数字智能科技有限公司 Pipeline arrangement method and device and computer readable storage medium
CN113468702B (en) * 2021-07-22 2024-03-22 久瓴(江苏)数字智能科技有限公司 Pipeline arrangement method, pipeline arrangement device and computer readable storage medium
CN113962859A (en) * 2021-10-26 2022-01-21 北京有竹居网络技术有限公司 Panorama generation method, device, equipment and medium
CN115022535A (en) * 2022-05-20 2022-09-06 深圳福鸽科技有限公司 Image processing method and device and electronic equipment
CN115022535B (en) * 2022-05-20 2024-03-08 深圳福鸽科技有限公司 Image processing method and device and electronic equipment
CN115314635A (en) * 2022-08-03 2022-11-08 Oppo广东移动通信有限公司 Model training method and device for determining defocus amount
CN115314635B (en) * 2022-08-03 2024-03-26 Oppo广东移动通信有限公司 Model training method and device for defocus determination

Also Published As

Publication number Publication date
CN112866549A (en) 2021-05-28
CN112866549B (en) 2022-04-12

Similar Documents

Publication Publication Date Title
WO2021093635A1 (en) Image processing method and apparatus, electronic device, and computer readable storage medium
KR102278776B1 (en) Image processing method, apparatus, and apparatus
CN110428366B (en) Image processing method and device, electronic equipment and computer readable storage medium
US8749694B2 (en) Methods and apparatus for rendering focused plenoptic camera data using super-resolved demosaicing
US8724000B2 (en) Methods and apparatus for super-resolution in integral photography
CN110536057B (en) Image processing method and device, electronic equipment and computer readable storage medium
US8340512B2 (en) Auto focus technique in an image capture device
WO2011065738A2 (en) Image processing apparatus and method
JP6308748B2 (en) Image processing apparatus, imaging apparatus, and image processing method
JP2014007730A (en) Information processing method, apparatus, and program
WO2021082883A1 (en) Main body detection method and apparatus, and electronic device and computer readable storage medium
JP2019533957A (en) Photography method for terminal and terminal
US11282176B2 (en) Image refocusing
US10469728B2 (en) Imaging device having a lens array of micro lenses
CN112087571A (en) Image acquisition method and device, electronic equipment and computer readable storage medium
JP6544978B2 (en) Image output apparatus, control method therefor, imaging apparatus, program
CN113875219A (en) Image processing method and device, electronic equipment and computer readable storage medium
Georgiev et al. Rich image capture with plenoptic cameras
CN112019734B (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN112866655B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112866675B (en) Depth map generation method and device, electronic equipment and computer-readable storage medium
JP2019016975A (en) Image processing system and image processing method, imaging apparatus, program
CN112866547B (en) Focusing method and device, electronic equipment and computer readable storage medium
WO2021093528A1 (en) Focusing method and apparatus, and electronic device and computer readable storage medium
CN112866554B (en) Focusing method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20888614

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20888614

Country of ref document: EP

Kind code of ref document: A1