CN112866549B - Image processing method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN112866549B
CN112866549B (application number CN201911101432.0A)
Authority
CN
China
Prior art keywords
phase difference
image
target
sub
regions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911101432.0A
Other languages
Chinese (zh)
Other versions
CN112866549A (en)
Inventor
贾玉虎 (Jia Yuhu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911101432.0A priority Critical patent/CN112866549B/en
Priority to PCT/CN2020/126122 priority patent/WO2021093635A1/en
Publication of CN112866549A publication Critical patent/CN112866549A/en
Application granted granted Critical
Publication of CN112866549B publication Critical patent/CN112866549B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Abstract

The application relates to an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The method comprises the following steps: acquiring a preview image; dividing the preview image into at least two sub-regions; acquiring a phase difference corresponding to each of the at least two sub-regions; determining at least two target phase differences from the phase differences corresponding to the sub-regions, wherein the at least two target phase differences comprise a target foreground phase difference and a target background phase difference; focusing according to each target phase difference to obtain an image corresponding to each target phase difference; and synthesizing the images corresponding to the target phase differences to obtain a full-focus image. The method can improve image sharpness.

Description

Image processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Conventional focusing is performed within a rectangular frame. The frame usually contains both foreground and background, yet the lens can only be focused at one position. When the foreground is in focus, the background is out of focus; when the background is in focus, the foreground is out of focus. Conventional image processing therefore suffers from low image sharpness.
Disclosure of Invention
The embodiment of the application provides an image processing method and device, electronic equipment and a computer readable storage medium, which can improve image definition.
An image processing method applied to an electronic device includes:
acquiring a preview image;
dividing the preview image into at least two sub-regions;
acquiring a phase difference corresponding to each sub-area in the at least two sub-areas;
determining at least two target phase differences from the phase differences corresponding to each sub-region, wherein the at least two target phase differences comprise a target foreground phase difference and a target background phase difference;
focusing according to each target phase difference to obtain an image corresponding to each target phase difference;
and synthesizing the images corresponding to the target phase differences to obtain a full-focus image.
An image processing apparatus characterized by comprising:
the preview image acquisition module is used for acquiring a preview image;
a dividing module, configured to divide the preview image into at least two sub-regions;
the phase difference acquisition module is used for acquiring a phase difference corresponding to each subarea in the at least two subareas;
the phase difference obtaining module is further configured to determine at least two target phase differences from the phase difference corresponding to each sub-region, where the at least two target phase differences include a target foreground phase difference and a target background phase difference;
the focusing module is used for focusing according to each target phase difference to obtain an image corresponding to each target phase difference;
and the synthesis module is used for synthesizing the images corresponding to the target phase differences to obtain a full-focus image.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of:
acquiring a preview image;
dividing the preview image into at least two sub-regions;
acquiring a phase difference corresponding to each sub-area in the at least two sub-areas;
determining at least two target phase differences from the phase differences corresponding to each sub-region, wherein the at least two target phase differences comprise a target foreground phase difference and a target background phase difference;
focusing according to each target phase difference to obtain an image corresponding to each target phase difference;
and synthesizing the images corresponding to the target phase differences to obtain a full-focus image.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a preview image;
dividing the preview image into at least two sub-regions;
acquiring a phase difference corresponding to each sub-area in the at least two sub-areas;
determining at least two target phase differences from the phase differences corresponding to each sub-region, wherein the at least two target phase differences comprise a target foreground phase difference and a target background phase difference;
focusing according to each target phase difference to obtain an image corresponding to each target phase difference;
and synthesizing the images corresponding to the target phase differences to obtain a full-focus image.
With the image processing method and apparatus, the electronic device, and the computer-readable storage medium, a preview image is acquired and divided into at least two sub-regions, a phase difference corresponding to each of the at least two sub-regions is acquired, at least two target phase differences are determined from the phase differences corresponding to the sub-regions, and focusing is performed according to each target phase difference to obtain an image corresponding to each target phase difference. At least two images with different focus positions can thus be obtained, one in focus on the background and the other in focus on the foreground. The images corresponding to the target phase differences are then synthesized into a full-focus image, so that images with fewer out-of-focus areas can be obtained and image sharpness is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a diagram of an exemplary embodiment of an image processing method;
FIG. 2 is a flow diagram of a method of image processing in one embodiment;
FIG. 3 is a schematic diagram of phase focusing in one embodiment;
FIG. 4 is a diagram illustrating an embodiment of paired phase detection pixels among pixels included in an image sensor;
FIG. 5 is a schematic diagram of a portion of an electronic device in one embodiment;
FIG. 6 is a schematic diagram illustrating a portion of an image sensor 504, in accordance with one embodiment;
FIG. 7 is a diagram illustrating a structure of a pixel in one embodiment;
FIG. 8 is a schematic diagram showing an internal structure of an image sensor according to an embodiment;
FIG. 9 is a diagram illustrating a pixel group Z according to an embodiment;
FIG. 10 is a schematic flow chart illustrating a process of obtaining a phase difference corresponding to each sub-region in one embodiment;
FIG. 11 is a diagram illustrating a slicing process performed on a target luminance graph in a first direction according to one embodiment;
FIG. 12 is a diagram illustrating a slicing process performed on a target luminance graph in a second direction in one embodiment;
FIG. 13 is a schematic flow chart of synthesizing a full-focus image in one embodiment;
FIG. 14 is a schematic flow chart of synthesizing a full-focus image in another embodiment;
FIG. 15 is a schematic flow chart of synthesizing a full-focus image in yet another embodiment;
FIG. 16 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
fig. 17 is a schematic diagram of an internal structure of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that the terms "first," "second," and the like used herein may describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first phase difference mean value may be referred to as a second phase difference mean value, and similarly, a second phase difference mean value may be referred to as a first phase difference mean value, without departing from the scope of the present application. The first phase difference mean value and the second phase difference mean value are both phase difference mean values, but they are not the same phase difference mean value. Likewise, a first image feature may be referred to as a second image feature and vice versa; both are image features, but they are not the same image feature.
The embodiment of the application provides an electronic device. For convenience of explanation, only the parts related to the embodiments of the present application are shown, and specific technical details are not disclosed. The electronic device may be any terminal device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a point-of-sale (POS) terminal, a vehicle-mounted computer, or a wearable device; the following description takes a mobile phone as an example. The electronic device includes an Image Processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 1 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 1, for convenience of explanation, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 1, the image processing circuit includes an ISP processor 140 and control logic 150. The image data captured by the imaging device 110 is first processed by the ISP processor 140, and the ISP processor 140 analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of the imaging device 110. The imaging device 110 may include a camera having one or more lenses 112 and an image sensor 114. The image sensor 114 may include an array of color filters (e.g., Bayer filters), and the image sensor 114 may acquire light intensity and wavelength information captured with each imaging pixel of the image sensor 114 and provide a set of raw image data that may be processed by the ISP processor 140. The attitude sensor 120 (e.g., three-axis gyroscope, hall sensor, accelerometer) may provide parameters of the acquired image processing (e.g., anti-shake parameters) to the ISP processor 140 based on the type of interface of the attitude sensor 120. The attitude sensor 120 interface may utilize an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
In addition, the image sensor 114 may also send raw image data to the attitude sensor 120; the attitude sensor 120 may provide the raw image data to the ISP processor 140 based on the type of its interface, or may store the raw image data in the image memory 130.
The ISP processor 140 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 140 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The ISP processor 140 may also receive image data from the image memory 130. For example, the attitude sensor 120 interface sends raw image data to the image memory 130, and the raw image data in the image memory 130 is then provided to the ISP processor 140 for processing. The image Memory 130 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 114 interface or from the attitude sensor 120 interface or from the image memory 130, the ISP processor 140 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 130 for additional processing before being displayed. ISP processor 140 receives processed data from image memory 130 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The image data processed by ISP processor 140 may be output to display 160 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the ISP processor 140 may also be sent to the image memory 130, and the display 160 may read image data from the image memory 130. In one embodiment, image memory 130 may be configured to implement one or more frame buffers.
The statistical data determined by the ISP processor 140 may be transmitted to the control logic 150 unit. For example, the statistical data may include image sensor 114 statistics such as gyroscope vibration frequency, auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 112 shading correction, and the like. The control logic 150 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 110 and control parameters of the ISP processor 140 based on the received statistical data. For example, the control parameters of the imaging device 110 may include attitude sensor 120 control parameters (e.g., gain, integration time of exposure control, anti-shake parameters, etc.), camera flash control parameters, camera anti-shake displacement parameters, lens 112 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 112 shading correction parameters.
In one embodiment, the image sensor 114 in the imaging device (camera) may include a plurality of pixel groups arranged in an array, wherein each pixel group includes a plurality of pixels arranged in an array, and each pixel includes a plurality of sub-pixels arranged in an array.
A first image is acquired through the lens 112 and the image sensor 114 in the imaging device (camera) 110 and sent to the ISP processor 140. After receiving the first image, the ISP processor 140 may perform subject detection on the first image to obtain the region of interest in the first image; alternatively, the region of interest may be obtained from a region selected by the user, or in other manners, which are not limited here.
The ISP processor 140 is configured to obtain a preview image, divide the preview image into at least two sub-regions, obtain a phase difference corresponding to each sub-region in the at least two sub-regions, determine at least two target phase differences according to the phase difference corresponding to each sub-region, perform focusing according to each target phase difference to obtain an image corresponding to each target phase difference, and perform synthesis according to the image corresponding to each target phase difference to obtain a full-focus image. The ISP processor 140 may send information regarding the target sub-area, such as location information, contour information, etc., to the control logic 150.
After receiving the information about the target area, the control logic 150 controls the lens 112 in the imaging device (camera) to move so as to focus on the position in the actual scene corresponding to the target area.
FIG. 2 is a flow diagram of a method of image processing in one embodiment. As shown in fig. 2, an image processing method applied to an electronic device includes steps 202 to 212.
Step 202, a preview image is acquired.
The number of cameras of the electronic device is not limited; for example, there may be one camera or two cameras, without being limited thereto. The form of the camera provided in the electronic device is not limited either; for example, the camera may be built into the electronic device or externally attached to it. The camera can be a front camera or a rear camera, and may be of any type, for example a color camera, a black-and-white camera, a depth camera, a telephoto camera, or a wide-angle camera, without being limited thereto.
The preview image may be a visible light image. The preview image refers to an image presented on a screen of the electronic device when the camera is not shooting. The preview image may be a preview image of the current frame.
Specifically, the electronic equipment acquires a preview image through a camera and displays the preview image on a display screen.
Step 204, the preview image is divided into at least two sub-areas.
A sub-region is an image area within the preview image; it contains part of the pixels of the preview image. The sub-regions obtained by dividing the preview image may all have the same size and shape, may all differ, or may be partly the same and partly different. The specific division method is not limited.
Specifically, the electronic device divides the preview image into at least two sub-regions. The electronic device may divide the preview image into M × N sub-regions, where M and N are positive integers that may be equal or different. For example, if a preview image of 100 × 100 pixels is divided into 4 sub-regions (2 × 2), each sub-region is 50 × 50 pixels.
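A minimal sketch of such an M × N division is shown below. It is an illustration rather than the patented implementation; it assumes the preview image is held as a NumPy array whose height and width are divisible by M and N.

```python
import numpy as np

def divide_into_subregions(preview: np.ndarray, m: int, n: int):
    """Split an H x W preview image into m x n equally sized sub-regions.

    Assumes H is divisible by m and W by n (e.g. a 100 x 100 image split
    2 x 2 yields four 50 x 50 sub-regions).
    """
    h, w = preview.shape[:2]
    sub_h, sub_w = h // m, w // n
    return [preview[i * sub_h:(i + 1) * sub_h, j * sub_w:(j + 1) * sub_w]
            for i in range(m) for j in range(n)]
```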
Step 206, a phase difference corresponding to each of the at least two sub-regions is obtained.
The phase difference refers to a difference in position of an image formed by imaging light rays incident on the lens from different directions on the image sensor.
Specifically, the electronic device includes an image sensor, which may include a plurality of pixel groups arranged in an array, each pixel group including M × N pixels arranged in an array; each pixel corresponds to a photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The phase difference corresponding to each sub-region may then include a first phase difference and a second phase difference, with a preset included angle between the first direction corresponding to the first phase difference and the second direction corresponding to the second phase difference. The preset included angle may be any angle other than 0 degrees, 180 degrees, and 360 degrees; that is, each sub-region may have two phase differences. The electronic device acquires the reliability of the first phase difference and the reliability of the second phase difference, determines which of the two reliabilities is higher, and takes the phase difference with the higher reliability as the phase difference corresponding to the sub-region.
In an embodiment, in order to perform phase detection autofocus, phase detection pixel points (also referred to as shielded pixel points) may be set in pairs among the pixel points included in an image sensor. In each phase detection pixel point pair, one pixel point is shielded on its left side and the other on its right side, so that the imaging light beam directed at each pair is separated into a left portion and a right portion. The phase difference corresponding to each sub-region can then be obtained by comparing the images formed by the left portion and the right portion of the imaging light beam.
And 208, determining at least two target phase differences from the phase differences corresponding to each sub-area, wherein the at least two target phase differences comprise a target foreground phase difference and a target background phase difference.
Here, the foreground refers to the portion of the image with a small depth, that is, the portion close to the camera. The foreground contains the subject and is typically what the user wants to focus on. The at least two target phase differences include a target foreground phase difference and a target background phase difference, and may further include other phase differences, for example a phase difference whose value lies between the foreground phase difference and the background phase difference.
Specifically, the electronic device determines at least two target phase differences from the phase differences corresponding to each region, wherein the at least two target phase differences at least include a target foreground phase difference and a target background phase difference.
And step 210, focusing according to each target phase difference to obtain an image corresponding to each target phase difference.
Focusing refers to the process of changing the object distance and the image distance through the focusing mechanism of the electronic device so that the photographed object is imaged clearly. Focusing may refer to autofocus. Autofocus may be phase detection autofocus (PDAF) or another autofocus approach combined with phase focusing. Phase focusing acquires a phase difference through the sensor, calculates a defocus value from the phase difference, and controls the lens to move a corresponding distance according to the defocus value. Phase focusing may be combined with other focusing approaches, such as continuous autofocus or laser focusing.
Specifically, when a photographing instruction is received, the electronic device focuses according to each target phase difference of the at least two target phase differences to obtain an image corresponding to each target phase difference. The electronic equipment can calculate out-of-focus values corresponding to each target phase difference according to each target phase difference in the at least two target phase differences, and control the lens to move for corresponding distances according to each out-of-focus value to obtain images corresponding to each target phase difference. The out-of-focus value refers to a distance between a current position of the image sensor and a position where the image sensor should be in an in-focus state. Each phase difference has a corresponding defocus value. The defocus values corresponding to each phase difference may be the same or different. The relationship between the phase difference and the defocus value can be obtained by pre-calibration. For example, the relationship between the phase difference and the defocus value may be linear or nonlinear.
For example, the at least two target phase differences include a target phase difference a, a target phase difference B, and a target phase difference C. The target phase difference A is a target foreground phase difference, the target phase difference C is a target background phase difference, and the target phase difference B is a phase difference with a value between the target phase difference A and the target phase difference C. And calculating to obtain an out-of-focus value A according to the target phase difference A, and controlling the lens to move for a corresponding distance according to the out-of-focus value A to obtain an image A corresponding to the target phase difference A. And calculating to obtain an out-of-focus value B according to the target phase difference B, and controlling the lens to move for a corresponding distance according to the out-of-focus value B to obtain an image B corresponding to the target phase difference B. And calculating to obtain an out-of-focus value C according to the target phase difference C, and controlling the lens to move for a corresponding distance according to the out-of-focus value C to obtain an image C corresponding to the target phase difference C. The electronic device obtains image a, image B, and image C. The processing order of the target phase difference is not limited.
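The per-target-phase-difference focusing loop can be sketched as follows, assuming for simplicity that the pre-calibrated phase-difference-to-defocus relationship is linear (the calibration may equally be a nonlinear lookup table). The callables `move_lens` and `capture` are hypothetical stand-ins for the camera driver, not names from the patent.

```python
def defocus_from_phase_difference(pd: float, slope: float, intercept: float) -> float:
    """Convert a phase difference to a defocus value using a linear calibration.

    (slope, intercept) are assumed to come from offline calibration; a nonlinear
    mapping could be substituted without changing the overall flow.
    """
    return slope * pd + intercept

def capture_per_target(target_pds, slope, intercept, move_lens, capture):
    """Focus once per target phase difference and collect one image for each."""
    images = []
    for pd in target_pds:                 # e.g. [pd_a, pd_b, pd_c]
        defocus = defocus_from_phase_difference(pd, slope, intercept)
        move_lens(defocus)                # move the lens by the corresponding distance
        images.append(capture())          # grab the image focused for this phase difference
    return images
```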
And 212, synthesizing the images corresponding to the target phase differences to obtain full-focus images.
The full-focus image is an image in which no out-of-focus region is theoretically present. The image stitching refers to stitching several images, wherein the several images can be obtained by focusing at different positions, or can be images corresponding to different phase differences, so as to form a seamless panoramic image or high-resolution image.
Specifically, the electronic device may stitch the sharp portions of the images corresponding to the target phase differences to obtain the full-focus image. Alternatively, the electronic device may synthesize the images corresponding to the target phase differences using a Laplacian pyramid, or may input the images corresponding to the target phase differences into a convolutional neural network model for synthesis, to obtain the full-focus image.
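As one illustration of the synthesis step, the sketch below merges the per-target-phase-difference images by picking, at each pixel, the input image with the largest local Laplacian response. This is a common focus-stacking heuristic used here as a simplified stand-in for the Laplacian-pyramid or neural-network synthesis mentioned above; it assumes OpenCV and NumPy are available and that the inputs are aligned BGR images of the same size.

```python
import cv2
import numpy as np

def synthesize_all_in_focus(images):
    """Merge images focused at different depths into one all-in-focus image.

    Sharpness is measured per pixel with a blurred absolute Laplacian; each
    output pixel is copied from the input image that is sharpest there.
    """
    sharpness = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        lap = np.abs(cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F))
        sharpness.append(cv2.GaussianBlur(lap, (9, 9), 0))  # smooth to avoid speckle
    best = np.argmax(np.stack(sharpness), axis=0)            # index of sharpest image per pixel
    result = np.zeros_like(images[0])
    for idx, img in enumerate(images):
        result[best == idx] = img[best == idx]
    return result
```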
The image processing method in this embodiment acquires a preview image, divides it into at least two sub-regions, acquires a phase difference corresponding to each of the at least two sub-regions, and determines at least two target phase differences, including a target foreground phase difference and a target background phase difference, from the phase differences corresponding to the sub-regions. Focusing is performed according to each target phase difference to obtain an image corresponding to each target phase difference, so that at least two images with different focus positions can be obtained, one in focus on the background and the other in focus on the foreground. The images corresponding to the target phase differences are then synthesized into a full-focus image, yielding images with fewer out-of-focus areas and improved sharpness.
In one embodiment, fig. 3 is a schematic diagram of phase focusing in one embodiment. M1 is the position of the image sensor when the electronic device is in the in-focus state, wherein the in-focus state refers to the successful in-focus state, please refer to fig. 3, when the image sensor is at the position M1, the imaging light rays g reflected by the object W to the Lens in different directions converge on the image sensor, that is, the imaging light rays g reflected by the object W to the Lens in different directions form an image at the same position on the image sensor, and at this time, the image sensor forms a clear image.
M2 and M3 are positions where the image sensor may be located when the electronic device is not in the in-focus state. As shown in fig. 3, when the image sensor is located at the M2 position or the M3 position, the imaging light rays g reflected by the object W toward the Lens in different directions are imaged at different positions. Referring to fig. 3, at the M2 position the imaging light rays g are imaged at position A and position B, and at the M3 position they are imaged at position C and position D; in both cases the image formed on the image sensor is not clear.
In the PDAF technique, the difference in the position of the image formed by the imaging light rays entering the lens from different directions in the image sensor can be obtained, for example, as shown in fig. 3, the difference between the position a and the position B, or the difference between the position C and the position D can be obtained; after acquiring the difference of the positions of images formed by imaging light rays entering the lens from different directions in the image sensor, obtaining an out-of-focus value according to the difference and the geometric relationship between the lens and the image sensor in the camera, wherein the out-of-focus value refers to the distance between the current position of the image sensor and the position where the image sensor is supposed to be in an in-focus state; the electronic device can focus according to the obtained defocus value.
Here, the difference in the positions of the images formed on the image sensor by imaging light rays entering the lens from different directions is generally referred to as a phase difference. As can be seen from the above description, obtaining the phase difference is a critical technical link in the PDAF technology.
It should be noted that in practical applications, the phase difference can be applied to a plurality of different scenes, and the focusing scene is only one possible scene. For example, the phase difference may be applied to the scene of acquiring the depth map, that is, the depth map may be acquired by using the phase difference; for another example, the phase difference may be used in a reconstruction scene of a three-dimensional image, that is, the three-dimensional image may be reconstructed using the phase difference. The embodiment of the present application is directed to provide a method for acquiring a phase difference, and as to which scene the phase difference is applied after the phase difference is acquired, the embodiment of the present application is not particularly limited.
In the related art, some phase detection pixel points may be arranged in pairs among the pixel points included in the image sensor; please refer to fig. 4. Fig. 4 is a schematic diagram of phase detection pixel points arranged in pairs among the pixel points included in an image sensor according to an embodiment. As shown in fig. 4, a phase detection pixel point pair (hereinafter referred to as a pixel point pair) A, a pixel point pair B, and a pixel point pair C may be provided in the image sensor. In each pixel point pair, one phase detection pixel point is shielded on its left side and the other is shielded on its right side.
For the phase detection pixel point which is shielded on the left side, only the light beam on the right side in the imaging light beam which is emitted to the phase detection pixel point can image on the photosensitive part (namely, the part which is not shielded) of the phase detection pixel point, and for the phase detection pixel point which is shielded on the right side, only the light beam on the left side in the imaging light beam which is emitted to the phase detection pixel point can image on the photosensitive part (namely, the part which is not shielded) of the phase detection pixel point. Therefore, the imaging light beam can be divided into a left part and a right part, and the phase difference can be obtained by comparing images formed by the left part and the right part of the imaging light beam. The focusing method in fig. 4 is to obtain the phase difference through the sensor, calculate the defocus Value according to the phase difference, control the lens to move according to the defocus Value, and then find the Focus Value (FV for short) peak.
In one embodiment, determining at least two target phase differences from the phase differences corresponding to each sub-region, where the at least two target phase differences include a target foreground phase difference and a target background phase difference, includes:
and (a1) dividing the phase difference corresponding to each of the at least two sub-regions into a foreground phase difference set and a background phase difference set.
Wherein the foreground phase difference set comprises at least one foreground phase difference. The background phase difference set includes at least one background phase difference.
In particular, a phase difference threshold may be stored in the electronic device. And dividing the phase difference larger than the phase difference threshold value into a background phase difference set, and dividing the phase difference smaller than or equal to the phase difference threshold value into a foreground phase difference set.
In this embodiment, the electronic device calculates a phase difference median according to the phase difference corresponding to each sub-region. And dividing the phase difference larger than the median of the phase difference into a background phase difference set, and dividing the phase difference smaller than or equal to the median of the phase difference into a foreground phase difference set.
And (a2) acquiring a first phase difference mean value corresponding to the foreground phase difference set.
Specifically, the electronic device calculates an average value according to the phase differences in the foreground phase difference set to obtain a first phase difference average value.
And (a3) acquiring a second phase difference mean value corresponding to the background phase difference set.
Specifically, the electronic device calculates an average value according to the phase differences in the background phase difference set to obtain a second phase difference average value.
And (a4) taking the first phase difference mean value as the target foreground phase difference.
Specifically, the electronic device takes the first phase difference mean value as the target foreground phase difference, regardless of whether the first phase difference mean value is the same as any phase difference in the at least two sub-regions.
When the first phase difference mean value is different from any phase difference in the at least two sub-areas, a corresponding first out-of-focus value is obtained through calculation according to the first phase difference mean value, and the lens is controlled to move for a corresponding distance according to the first out-of-focus value, so that an image corresponding to the first phase difference mean value is obtained.
And when the first phase difference average value is the same as any phase difference in the at least two sub-areas, focusing by taking the area corresponding to the first phase difference average value as a focusing area to obtain an image corresponding to the first phase difference average value.
And (a5) taking the second phase difference mean value as the target background phase difference.
Specifically, the electronic device takes the second phase difference mean value as the target background phase difference, regardless of whether the second phase difference mean value is the same as any phase difference in the at least two sub-regions.
And when the second phase difference mean value is different from any phase difference in the at least two sub-regions, calculating to obtain a corresponding second defocusing value according to the second phase difference mean value, and controlling the lens to move a corresponding distance according to the second defocusing value to obtain an image corresponding to the second phase difference mean value.
And when the second phase difference average value is the same as any phase difference in the at least two sub-areas, focusing the area corresponding to the second phase difference average value as a focusing area to obtain an image corresponding to the second phase difference average value.
In the image processing method in this embodiment, the phase differences corresponding to the at least two sub-regions are divided into a foreground phase difference set and a background phase difference set, a first phase difference mean value corresponding to the foreground phase difference set and a second phase difference mean value corresponding to the background phase difference set are obtained, the first phase difference mean value is used as the target foreground phase difference, and the second phase difference mean value is used as the target background phase difference. A foreground in-focus image and a background in-focus image can thus be obtained from the mean values, improving image sharpness.
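A minimal sketch of this embodiment, using the median split described above and assuming the per-sub-region phase differences are collected in a NumPy array:

```python
import numpy as np

def target_phase_differences(sub_region_pds):
    """Split per-sub-region phase differences into foreground/background sets
    by the median, then use each set's mean as the target foreground and
    target background phase difference.
    """
    pds = np.asarray(sub_region_pds, dtype=np.float64)
    median = np.median(pds)
    foreground = pds[pds <= median]           # smaller phase difference: nearer scene
    background = pds[pds > median]
    if background.size == 0:                  # degenerate case: all phase differences equal
        background = foreground
    target_foreground_pd = foreground.mean()  # first phase difference mean value
    target_background_pd = background.mean()  # second phase difference mean value
    return target_foreground_pd, target_background_pd
```

A fixed phase difference threshold, as in the other split described above, could replace the median without changing the rest of the flow.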
In one embodiment, the image processing method further comprises: excluding the maximum phase difference from the phase differences corresponding to the sub-regions to obtain a remaining phase difference set. Dividing the phase differences corresponding to the at least two sub-regions into a foreground phase difference set and a background phase difference set then comprises: dividing the remaining phase difference set into a foreground phase difference set and a background phase difference set.
Specifically, the region corresponding to the largest phase difference among the phase differences corresponding to the sub-regions is the region corresponding to the farthest scene in the preview image. Excluding the maximum phase difference therefore excludes the phase difference corresponding to the farthest scene in the preview image.
The phase difference threshold may be stored in the electronic device. And dividing the phase difference which is greater than the phase difference threshold value in the residual phase difference set into a background phase difference set, and dividing the phase difference which is less than or equal to the phase difference threshold value into a foreground phase difference set.
In this embodiment, the electronic device calculates the phase difference median according to the phase differences in the remaining phase difference set. And dividing the phase difference larger than the median of the phase difference into a background phase difference set, and dividing the phase difference smaller than or equal to the median of the phase difference into a foreground phase difference set.
In this embodiment, the electronic device calculates a phase difference mean value according to the phase differences in the remaining phase difference set. And dividing the phase difference larger than the average phase difference value into a background phase difference set, and dividing the phase difference smaller than or equal to the average phase difference value into a foreground phase difference set.
In the image processing method in this embodiment, the details of the farthest background are often unimportant. The largest phase difference among the phase differences corresponding to the sub-regions is excluded to obtain the remaining phase difference set, so the farthest background is excluded; the remaining phase difference set is then divided into a foreground phase difference set and a background phase difference set, focusing is performed according to the mean values, and image sharpness can be improved.
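A minimal variant of the earlier sketch, under the same NumPy assumption, simply removes the largest phase difference before performing the split (it reuses `target_phase_differences()` from the sketch above):

```python
import numpy as np

def target_pds_excluding_farthest(sub_region_pds):
    """Drop the largest phase difference (the farthest background) before
    splitting into foreground/background sets, as described in this embodiment."""
    pds = np.asarray(sub_region_pds, dtype=np.float64)
    remaining = np.delete(pds, np.argmax(pds))   # remove one occurrence of the maximum
    return target_phase_differences(remaining)
```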
In one embodiment, determining at least two target phase differences from the phase differences corresponding to each sub-region, where the at least two target phase differences include a foreground phase difference and a background phase difference, includes: acquiring the maximum phase difference and the minimum phase difference in the phase differences of at least two subregions; taking the minimum phase difference as a foreground phase difference; the maximum phase difference is taken as the background phase difference.
Wherein, the area corresponding to the maximum phase difference is the area corresponding to the farthest scenery. The area corresponding to the minimum phase difference is the area corresponding to the nearest scene. The region corresponding to the minimum phase difference usually has a target subject.
In the image processing method in this embodiment, the maximum phase difference and the minimum phase difference in the phase differences of the at least two sub-regions are obtained, the minimum phase difference is used as the foreground phase difference, the maximum phase difference is used as the background phase difference, only two images can be obtained for synthesis, and the image processing efficiency is improved while the image definition is improved.
In one embodiment, an electronic device includes an image sensor including a plurality of pixel groups arranged in an array, each pixel group including M × N pixels arranged in an array; each pixel point corresponds to a photosensitive unit, wherein M and N are both natural numbers which are greater than or equal to 2.
In this embodiment, fig. 5 is a schematic structural diagram of a part of an electronic device in one embodiment. As shown in fig. 5, the electronic device may include a lens 502 and an image sensor 504, wherein the lens 502 may be composed of a series of lens elements, and the image sensor 504 may be a Complementary Metal Oxide Semiconductor (CMOS) image sensor, a Charge-Coupled Device (CCD), a quantum thin-film sensor, an organic sensor, or the like.
Referring to fig. 6, which shows a schematic structural diagram of a portion of the image sensor 504, as shown in fig. 6, the image sensor 504 may include a plurality of pixel groups Z arranged in an array, where each pixel group Z includes a plurality of pixels D arranged in an array, and each pixel includes a plurality of sub-pixels D arranged in an array. Referring to fig. 6, optionally, each pixel group Z may include 4 pixels D arranged in an array arrangement manner of two rows and two columns, and each pixel may include 4 sub-pixels D arranged in an array arrangement manner of two rows and two columns.
It should be noted that the pixel included in the image sensor 504 refers to a photosensitive unit, and the photosensitive unit may be composed of a plurality of photosensitive elements (i.e., sub-pixels) arranged in an array, where the photosensitive element is an element capable of converting an optical signal into an electrical signal. Optionally, the light sensing unit may further include a microlens, a filter, and the like, where the microlens is disposed on the filter, the filter is disposed on each light sensing element included in the light sensing unit, and the filter may include three types of red, green, and blue, and only can transmit light with wavelengths corresponding to the red, green, and blue, respectively.
Fig. 7 is a schematic structural diagram of a pixel in an embodiment. As shown in fig. 7, taking each pixel point including a sub-pixel point 1, a sub-pixel point 2, a sub-pixel point 3, and a sub-pixel point 4 as an example, the sub-pixel point 1 and the sub-pixel point 2 can be synthesized, and the sub-pixel point 3 and the sub-pixel point 4 are synthesized to form a PD pixel pair in the up-down direction, so as to obtain a phase difference in the vertical direction, and detect a horizontal edge; and synthesizing the sub-pixel point 1 and the sub-pixel point 3, and synthesizing the sub-pixel point 2 and the sub-pixel point 4 to form a PD pixel pair in the left and right directions, so as to obtain a phase difference in the horizontal direction and detect a vertical edge.
Fig. 8 is a schematic diagram showing an internal structure of an image sensor in one embodiment, and an imaging device includes a lens and an image sensor. As shown in fig. 8, the image sensor includes a microlens 80, an optical filter 82, and a photosensitive cell 84. The micro lens 80, the filter 82 and the photosensitive unit 84 are sequentially located on the incident light path, that is, the micro lens 80 is disposed on the filter 82, and the filter 82 is disposed on the photosensitive unit 84.
The filter 82 may include three types of red, green and blue, which only transmit the light with the wavelengths corresponding to the red, green and blue colors, respectively. A filter 82 is disposed on one pixel site.
The micro-lens 80 is used to receive incident light and transmit it to the optical filter 82. The filter 82 filters the incident light, and the filtered light is incident on the photosensitive unit 84 on a pixel basis.
The light sensing unit 84 in the image sensor converts light incident from the optical filter 82 into a charge signal by the photoelectric effect, and generates a pixel signal in accordance with the charge signal. The charge signal corresponds to the received light intensity.
As can be seen from the above description, the pixel point included in the image sensor and the pixel included in the image are two different concepts, wherein the pixel included in the image refers to the minimum unit of the image, which is generally represented by a number sequence, and the number sequence can be generally referred to as the pixel value of the pixel. In the embodiment of the present application, both concepts of "pixel points included in an image sensor" and "pixels included in an image" are related, and for the convenience of understanding of readers, the description is briefly made here.
Please refer to fig. 9, which is a schematic diagram of an exemplary pixel group Z. As shown in fig. 9, the pixel group Z includes 4 pixels D arranged in two rows and two columns. The color channel of the pixel in the first row and first column is green, that is, its color filter is a green filter; the color channel of the pixel in the first row and second column is red, that is, its color filter is a red filter; the color channel of the pixel in the second row and first column is blue, that is, its color filter is a blue filter; and the color channel of the pixel in the second row and second column is green, that is, its color filter is a green filter.
In one embodiment, an electronic device includes an image sensor including a plurality of pixel groups arranged in an array, each pixel group including a plurality of pixels arranged in an array. As shown in fig. 10, a schematic flow chart of acquiring a phase difference corresponding to each sub-region in an embodiment is shown, and acquiring a phase difference corresponding to each sub-region in at least two sub-regions includes:
step 1002, obtaining a target brightness map according to the brightness values of the pixels included in each pixel group.
The brightness value of the pixel point of the image sensor can be represented by the brightness value of the sub-pixel point included in the pixel point. In other words, the electronic device may obtain the target luminance map according to the luminance values of the sub-pixels in the pixels included in each pixel group. The "brightness value of a sub-pixel" refers to the brightness value of the optical signal received by the sub-pixel.
The image sensor includes a sub-pixel which is a photosensitive element capable of converting an optical signal into an electrical signal. Therefore, the electronic device can obtain the intensity of the optical signal received by the sub-pixel point according to the electric signal output by the sub-pixel point, and the brightness value of the sub-pixel point can be obtained according to the intensity of the optical signal received by the sub-pixel point.
The target brightness map in the embodiment of the application is used for reflecting the brightness value of the sub-pixel in the image sensor, and the target brightness map may include a plurality of pixels, wherein the pixel value of each pixel in the target brightness map is obtained according to the brightness value of the sub-pixel in the image sensor.
Step 1004, performing segmentation processing on the target brightness map, and obtaining a first sliced luminance map and a second sliced luminance map according to the result of the segmentation processing.
Specifically, the electronic device may perform the slicing process on the target luminance map in the column direction (y-axis direction in the image coordinate system). The first and second sliced luminance maps obtained by slicing the target luminance map in the column direction may be referred to as left and right maps, respectively.
In this embodiment, the electronic device may perform a slicing process on the target luminance map along a row direction (x-axis direction in the image coordinate system). The first and second sliced luminance maps obtained by slicing the target luminance map in the row direction may be referred to as an upper map and a lower map, respectively.
Step 1006, determining a phase difference of the matched pixels according to a position difference of the matched pixels in the first and second sliced luminance graphs.
Specifically, taking as an example that the target luminance map is subjected to the segmentation process in the row direction (x-axis direction in the image coordinate system), the obtained first luminance segmentation map and second luminance segmentation map are an upper map and a lower map. The electronic device obtains the vertical phase difference according to the position difference of the mutually matched pixels in the first luminance segmentation graph and the second luminance segmentation graph.
Taking the example of performing the segmentation process on the target luminance map in the column direction (y-axis direction in the image coordinate system), the obtained first luminance segmentation map and second luminance segmentation map are left and right maps. The electronic device obtains the horizontal phase difference according to the position difference of the mutually matched pixels in the first luminance segmentation map and the second luminance segmentation map.
Here, "pixels matched with each other" means that pixel matrices composed of the pixels themselves and their surrounding pixels are similar to each other. For example, pixel a and its surrounding pixels in the first tangential luminance map form a pixel matrix with 3 rows and 3 columns, and the pixel values of the pixel matrix are:
2 10 90
1 20 80
0 100 1
the pixel b and its surrounding pixels in the second sliced luminance graph also form a pixel matrix with 3 rows and 3 columns, and the pixel values of the pixel matrix are:
1 10 90
1 21 80
0 100 2
as can be seen from the above, the two matrices are similar, and pixel a and pixel b can be considered to match each other. As for how to judge whether pixel matrixes are similar, there are many different methods in practical application, and a common method is to calculate the difference of pixel values of pixels corresponding to each of two pixel matrixes, add the absolute values of the calculated difference values, and judge whether the pixel matrixes are similar by using the result of the addition, that is, if the result of the addition is smaller than a preset threshold, the pixel matrixes are considered to be similar, otherwise, the pixel matrixes are considered to be dissimilar.
For example, for the two pixel matrices of 3 rows and 3 columns, 1 and 2 are subtracted, 10 and 10 are subtracted, 90 and 90 are subtracted, … … are added, and the absolute values of the obtained differences are added to obtain an addition result of 3, and if the addition result 3 is smaller than a preset threshold, the two pixel matrices of 3 rows and 3 columns are considered to be similar.
Another common method for judging whether pixel matrices are similar is to extract their edge features, for example with a Sobel convolution kernel or a Laplacian-based calculation, and judge similarity through the edge features.
In the present embodiment, the "positional difference of mutually matched pixels" refers to a difference between the position of a pixel located in the first sliced luminance graph and the position of a pixel located in the second sliced luminance graph among mutually matched pixels. As exemplified above, the positional difference of the pixel a and the pixel b that match each other refers to the difference in the position of the pixel a in the first sliced luminance graph and the position of the pixel b in the second sliced luminance graph.
The pixels matched with each other respectively correspond to the different images formed on the image sensor by imaging light rays entering the lens from different directions. For example, if pixel a in the first sliced luminance map and pixel b in the second sliced luminance map match each other, pixel a may correspond to the image formed at position A in fig. 3 and pixel b may correspond to the image formed at position B in fig. 3.
Since the matched pixels respectively correspond to different images formed by imaging light rays entering the lens from different directions in the image sensor, the phase difference of the matched pixels can be determined according to the position difference of the matched pixels.
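As a further illustration of how the position difference of mutually matched pixels yields a phase difference, the sketch below is only a hedged example with hypothetical names: it slides a 3 x 3 window from the first sliced luminance graph along the same row of the second sliced luminance graph, takes the best SAD match, and returns the column displacement as a horizontal phase difference in pixels:

import numpy as np

def match_phase_difference(first_map, second_map, row, col, win=1, search=10):
    # For the pixel at (row, col) in the first sliced luminance map, search the same row
    # of the second sliced luminance map for the most similar 3 x 3 neighbourhood and
    # return the column displacement, i.e. the positional difference of the matched pixels.
    ref = first_map[row - win:row + win + 1, col - win:col + win + 1].astype(np.int32)
    best_sad, best_col = None, col
    lo = max(win, col - search)
    hi = min(second_map.shape[1] - win - 1, col + search)
    for c in range(lo, hi + 1):
        cand = second_map[row - win:row + win + 1, c - win:c + win + 1].astype(np.int32)
        sad = np.abs(ref - cand).sum()
        if best_sad is None or sad < best_sad:
            best_sad, best_col = sad, c
    return best_col - col

# Usage sketch: first_sliced and second_sliced would come from slicing the target luminance map.
# pd = match_phase_difference(first_sliced, second_sliced, row=20, col=35)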
And step 1008, determining a phase difference corresponding to each of the at least two sub-areas according to the phase differences of the mutually matched pixels.
Specifically, the electronic device determines the phase difference corresponding to each of the at least two sub-areas according to the phase differences of the mutually matched pixels.
In this embodiment, the electronic device may obtain two phase differences corresponding to each sub-region according to the phase differences of the mutually matched pixels, namely a vertical phase difference and a horizontal phase difference. The electronic device can obtain the confidence of the vertical phase difference and the confidence of the horizontal phase difference corresponding to each sub-region, determine which of the two has the higher confidence, and take that phase difference as the phase difference corresponding to the sub-region.
In the image processing method of this embodiment, the target luminance map is obtained according to the luminance values of the pixels included in each pixel group of the image sensor. After the target luminance map is obtained, it is sliced, and a first sliced luminance map and a second sliced luminance map are obtained from the slicing result. The phase difference of mutually matched pixels is then determined according to the position difference of the mutually matched pixels in the first sliced luminance map and the second sliced luminance map, and the phase difference corresponding to each of the at least two sub-regions is determined from the phase differences of the mutually matched pixels. In this way, a target phase difference map can be obtained from the luminance values of the pixels included in each pixel group of the image sensor. Compared with obtaining the phase difference from sparsely arranged phase detection pixels, the phase differences of the mutually matched pixels in this embodiment of the application contain relatively abundant phase difference information, so the accuracy of the obtained phase difference can be improved. When focusing, a highly accurate phase difference for the focusing area can therefore be obtained without searching for a focusing peak, which improves the focusing efficiency and the efficiency of synthesizing the full-focus image.
In one embodiment, the segmenting the target luminance graph, and obtaining a first segmented luminance graph and a second segmented luminance graph according to the result of the segmenting process includes:
and (b1) performing segmentation processing on the target brightness map to obtain a plurality of brightness map regions, wherein each brightness map region comprises a row of pixels in the target brightness map, or each brightness map region comprises a column of pixels in the target brightness map.
Wherein each luminance map region comprises a row of pixels in the target luminance map. Alternatively, each luminance map region includes a column of pixels in the target luminance map.
In this embodiment, the electronic device may perform column-by-column segmentation on the target luminance graph along the row direction to obtain a plurality of pixel columns of the target luminance graph.
In this embodiment, the electronic device may perform line-by-line segmentation on the target luminance graph along the column direction to obtain a plurality of pixel rows of the target luminance graph.
And (b2) acquiring a plurality of first luminance map areas and a plurality of second luminance map areas from the plurality of luminance map areas, wherein the first luminance map areas comprise pixels in even rows in the target luminance map, or the first luminance map areas comprise pixels in even columns in the target luminance map, and the second luminance map areas comprise pixels in odd rows in the target luminance map, or the second luminance map areas comprise pixels in odd columns in the target luminance map.
Wherein the first luminance map region includes pixels of even-numbered lines in the target luminance map. Alternatively, the first luminance map region includes pixels of even-numbered columns in the target luminance map.
The second luminance map region includes pixels of odd-numbered lines in the target luminance map. Alternatively, the second luminance map region includes pixels of odd columns in the target luminance map.
In this embodiment, in the case of performing column-by-column segmentation on the target luminance map, the electronic device may determine even-numbered columns as the first luminance map region and determine odd-numbered columns as the second luminance map region.
In this embodiment, in the case of performing line-by-line segmentation on the target luminance map, the electronic device may determine even lines as the first luminance map region and determine odd lines as the second luminance map region.
And (b3) forming a first segmentation luminance map by using the plurality of first luminance map regions and forming a second segmentation luminance map by using the plurality of second luminance map regions.
In this embodiment, fig. 11 is a schematic diagram illustrating the slicing of a target luminance map in a first direction in one embodiment, and fig. 12 is a schematic diagram illustrating the slicing of a target luminance map in a second direction in one embodiment. As shown in fig. 11, assuming that the target luminance map includes 6 rows and 6 columns of pixels, slicing the target luminance map column by column corresponds to slicing it in the first direction. The electronic device may determine the 1st, 3rd and 5th columns of pixels of the target luminance map as first luminance map regions, and may determine the 2nd, 4th and 6th columns of pixels as second luminance map regions. The electronic device may then stitch the first luminance map regions to obtain a first sliced luminance map T1, where T1 includes the 1st, 3rd and 5th columns of pixels of the target luminance map, and stitch the second luminance map regions to obtain a second sliced luminance map T2, where T2 includes the 2nd, 4th and 6th columns of pixels of the target luminance map.
As shown in fig. 12, assuming that the target luminance map includes 6 rows and 6 columns of pixels, slicing the target luminance map row by row corresponds to slicing it in the second direction. The electronic device may determine the 1st, 3rd and 5th rows of pixels of the target luminance map as first luminance map regions, and may determine the 2nd, 4th and 6th rows of pixels as second luminance map regions. The electronic device may then stitch the first luminance map regions to obtain a first sliced luminance map T3, where T3 includes the 1st, 3rd and 5th rows of pixels of the target luminance map, and stitch the second luminance map regions to obtain a second sliced luminance map T4, where T4 includes the 2nd, 4th and 6th rows of pixels of the target luminance map.
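Purely for illustration (the names are not from the patent), the even/odd splitting shown in figs. 11 and 12 can be written in a few lines of NumPy, following the column/row assignment of the figure example:

import numpy as np

def slice_luminance_map(target, direction="column"):
    # direction="column": 1st/3rd/5th... columns vs 2nd/4th/6th... columns (T1/T2, fig. 11)
    # direction="row":    1st/3rd/5th... rows    vs 2nd/4th/6th... rows    (T3/T4, fig. 12)
    if direction == "column":
        first = target[:, 0::2]
        second = target[:, 1::2]
    else:
        first = target[0::2, :]
        second = target[1::2, :]
    return first, second

target_luminance = np.arange(36).reshape(6, 6)  # a 6 x 6 example as in the figures
t1, t2 = slice_luminance_map(target_luminance, "column")
t3, t4 = slice_luminance_map(target_luminance, "row")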
According to the image processing method in the embodiment of the application, the phase difference is obtained without shielding the pixel points, relatively abundant phase difference information is obtained in a brightness segmentation mode, and the accuracy of the obtained phase difference is improved.
In one embodiment, an electronic device includes an image sensor including a plurality of pixel groups arranged in an array, each pixel group including M × N pixels arranged in an array; each pixel point corresponds to a photosensitive unit, wherein M and N are both natural numbers which are greater than or equal to 2. The phase difference corresponding to each sub-region comprises a horizontal phase difference and a vertical phase difference. Acquiring a phase difference corresponding to each of at least two subregions, including: when the horizontal lines are detected to be contained in the sub-regions, taking the vertical phase difference as the phase difference corresponding to the sub-regions; and when detecting that the horizontal line is not contained in the subarea, taking the horizontal phase difference as the phase difference corresponding to the subarea.
Specifically, due to certain characteristics of the camera, a phase difference exists during imaging; for example, lines may suffer from smearing and similar problems. When it is detected that a sub-region contains horizontal lines, the vertical phase difference is adopted as the phase difference corresponding to the sub-region; when it is detected that a sub-region contains vertical lines, the horizontal phase difference is adopted as the phase difference corresponding to the sub-region. In this way, the accuracy of the acquired phase difference can be improved, and the image definition is improved.
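The patent does not specify how horizontal lines are detected; purely as an assumed illustration, the sketch below uses Sobel gradient energies (a horizontal line produces strong gradients along y) to decide which phase difference to use:

import cv2
import numpy as np

def select_phase_difference(sub_region, horizontal_pd, vertical_pd, ratio=2.0):
    # Assumed detection rule: when vertical-gradient energy clearly dominates,
    # the sub-region is treated as containing horizontal lines, so the vertical
    # phase difference is used; otherwise the horizontal phase difference is used.
    gray = sub_region.astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # gradients along x (responds to vertical edges)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # gradients along y (responds to horizontal edges)
    has_horizontal_lines = np.abs(gy).sum() > ratio * np.abs(gx).sum()
    return vertical_pd if has_horizontal_lines else horizontal_pd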
In one embodiment, focusing according to each target phase difference to obtain an image corresponding to each target phase difference includes: and taking the sub-area corresponding to each target phase difference as a focusing area to obtain an image corresponding to each target phase difference.
Specifically, the electronic device takes a sub-area corresponding to each target phase difference of the at least two target phase differences as a focusing area, and obtains an image corresponding to each target phase difference. For example, the at least two target phase differences include a target phase difference a, a target phase difference B, and a target phase difference C. Then, the sub-area corresponding to the target phase difference a is used as a focusing area for focusing, and an image corresponding to the target phase difference a is obtained. And taking the sub-area corresponding to the target phase difference B as a focusing area, focusing and acquiring the image corresponding to the target phase difference B. And taking the sub-area corresponding to the target phase difference C as a focusing area, focusing and acquiring the image corresponding to the target phase difference C. I.e. a total of three images are obtained.
In the image processing method in this embodiment, the sub-area corresponding to each target phase difference is used as the focusing area, so as to obtain the image corresponding to each target phase difference, and obtain images with different focuses to synthesize, thereby improving the definition of the image.
In one embodiment, synthesizing the images corresponding to each target phase difference to obtain a full-focus image includes: dividing the image corresponding to each target phase difference into the sub-image areas with the same number; acquiring the definition corresponding to each sub-image area; determining the sub-image area with the highest definition in the matched sub-image areas according to the definition corresponding to each sub-image area; and splicing and synthesizing the sub-image areas with the highest definition to obtain a full-focus image.
The sub-image areas matched with each other refer to sub-image areas located at the same position in different images.
Specifically, the electronic device divides the image corresponding to each target phase difference into the same number of sub-image areas. The electronic device acquires the definition corresponding to each sub-image area in the image corresponding to each target phase difference. According to the definition corresponding to each sub-image area, the electronic device determines the sub-image area with the highest definition among the mutually matched sub-image areas. The electronic device then synthesizes all the sub-image areas with the highest definition to obtain a full-focus image. For example, the target phase difference A corresponds to an image A, which is divided into sub-image area 1, sub-image area 2, sub-image area 3, and sub-image area 4. The target phase difference B corresponds to an image B, which is divided into sub-image area α, sub-image area β, sub-image area γ, and sub-image area δ. Since sub-image area 1 is in the upper left corner of image A and sub-image area α is in the upper left corner of image B, sub-image area 1 matches sub-image area α, and so on. If sub-image area 1, sub-image area β, sub-image area γ and sub-image area 4 have the highest definition at their respective positions, the electronic device stitches and synthesizes sub-image area 1, sub-image area β, sub-image area γ and sub-image area 4 to obtain a full-focus image.
According to the image processing method in the embodiment of the application, images corresponding to each target phase difference are divided into the sub-image areas with the same number; acquiring the definition corresponding to each sub-image area; determining the sub-image area with the highest definition in the matched sub-image areas according to the definition corresponding to each sub-image area; and synthesizing the sub-image region with the highest definition to obtain a full-focus image, so that the full-focus image can be quickly obtained, and the image processing efficiency is improved.
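As a hedged illustration of this sharpness-based stitching (the variance-of-Laplacian measure and the 2 x 2 grid are assumptions, not specified by the patent), a minimal Python sketch could look as follows:

import cv2
import numpy as np

def sharpness(tile):
    # Variance of the Laplacian as a simple sharpness (definition) measure.
    return cv2.Laplacian(tile.astype(np.float32), cv2.CV_32F).var()

def stitch_all_in_focus(images, grid=(2, 2)):
    # Split each focused image into grid sub-areas, keep the sharpest tile at each
    # position, and stitch the winners back into one all-in-focus image.
    h, w = images[0].shape[:2]
    rows, cols = grid
    th, tw = h // rows, w // cols
    result = images[0].copy()
    for r in range(rows):
        for c in range(cols):
            ys, xs = slice(r * th, (r + 1) * th), slice(c * tw, (c + 1) * tw)
            best = max(images, key=lambda img: sharpness(img[ys, xs]))
            result[ys, xs] = best[ys, xs]
    return result

# Usage sketch: full_focus = stitch_all_in_focus([image_a, image_b, image_c], grid=(2, 2))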
In one embodiment, fig. 13 shows a schematic flowchart of synthesizing a full-focus image in one embodiment. Synthesizing the images corresponding to each target phase difference to obtain a full-focus image includes the following steps:
step 1302, performing convolution and sampling processing on the image corresponding to each target phase difference, and obtaining a gaussian pyramid of the image corresponding to each target phase difference when a preset iteration condition is met.
The Gaussian pyramid is an image pyramid, and except for the bottom layer image, other layers of images are obtained by performing convolution and sampling on the previous layer of image. A gaussian pyramid can be used to obtain the low frequency image. Wherein the low frequency image may refer to a contour image in the image. The iteration condition may mean that a preset number of times is reached or a preset time is reached, etc., but is not limited thereto. And each image corresponding to the target phase difference has a corresponding Gaussian pyramid. For example, the image a corresponding to the target phase difference corresponds to a gaussian pyramid a, and the image B corresponding to the target phase difference corresponds to a gaussian pyramid B.
Specifically, the electronic device convolves the image corresponding to each target phase difference with a Gaussian kernel and samples the convolved image to obtain each layer of the pyramid. That is, the image corresponding to each target phase difference (denoted as G0) is convolved and sampled to obtain a low-frequency image G1; image G1 is convolved and sampled to obtain an image G2; and so on, until a preset iteration condition is satisfied, for example 5 iterations, resulting in an image G5. A Gaussian pyramid comprising a plurality of low-frequency images is thus obtained for the image corresponding to each target phase difference.
Step 1304, processing is performed according to each layer of image in the gaussian pyramid of the image corresponding to each target phase difference, so as to obtain a laplacian pyramid of the image corresponding to each target phase difference.
In the operation process of the Gaussian pyramid, part of the high-frequency detail information of the image is lost through the convolution and downsampling operations. To describe this high-frequency information, a Laplacian Pyramid (LP) is defined. Each image corresponding to a target phase difference has a corresponding Laplacian pyramid. For example, the image A corresponding to one target phase difference corresponds to the Laplacian pyramid A, and the image B corresponding to another target phase difference corresponds to the Laplacian pyramid B. Each layer of the Laplacian pyramid represents a different scale and level of detail, where the details may be considered as frequencies.
Specifically, the electronic device subtracts the upsampled low-frequency image from the original image to obtain a high-frequency image. The formula can be written as L0 = I0 - upsample(G1), where L0 is the lowest layer of the Laplacian pyramid. L1, L2, L3, L4, ... can be obtained in the same way, giving the Laplacian pyramid of the image corresponding to each target phase difference.
And 1306, fusing the laplacian pyramids of the images corresponding to each target phase difference to obtain fused laplacian pyramids.
Wherein, only one Laplacian pyramid is needed after fusion.
Specifically, the electronic device obtains the weight of the image corresponding to each target phase difference, and performs fusion according to the weight of the image corresponding to each target phase difference and the laplacian pyramid of the image corresponding to each target phase difference to obtain a fused laplacian pyramid.
For example, the fusion equation is as follows:
L5(fusion) = Weight(Fig. 1) x L5(Fig. 1) + Weight(Fig. 2) x L5(Fig. 2)
Here, L5(fusion) is the sixth layer image of the fused Laplacian pyramid, counted from the bottom layer up. Weight(Fig. 1) refers to the weight of Fig. 1, and Weight(Fig. 2) refers to the weight of Fig. 2. L5(Fig. 1) is the sixth layer image of the Laplacian pyramid of Fig. 1, counted from the bottom layer up, and L5(Fig. 2) is the sixth layer image of the Laplacian pyramid of Fig. 2, counted from the bottom layer up.
The weight of each image can be adjusted according to parameters such as depth of field and degree of blur. For example, a region with a low degree of blur is given a large weight, while a region with a high degree of blur is given a small weight; a region with a small depth of field is given a large weight, while a region with a large depth of field is given a small weight.
And step 1308, performing reconstruction processing according to the fused laplacian pyramid to obtain a full-focus image.
Specifically, the electronic device merges from the top layer down to the bottom layer. The electronic device processes, i.e., reconstructs, the fused top-level images of the Laplacian pyramid and the Gaussian pyramid to obtain a full-focus image. For example, R5(fusion) = L5(fusion) + G5.
Here, G5 can be obtained by fusing the G5 layer images of the Gaussian pyramids corresponding to each target phase difference. L5(fusion) is the L5 layer of the fused Laplacian pyramid. R5(fusion) is the layer-5 result after reconstruction (Reconstruction).
R5(fusion) is then upsampled, and R4 = R5(upsampled) + L4(fusion).
Here, L4(fusion) is the L4 layer of the fused Laplacian pyramid. By analogy, the final synthesis result R0, namely the full-focus image, is obtained.
In the image processing method of this embodiment, the image corresponding to each target phase difference is convolved and sampled, and when a preset iteration condition is satisfied, a Gaussian pyramid of the image corresponding to each target phase difference is obtained. Each layer of the Gaussian pyramid is then processed to obtain the Laplacian pyramid of the image corresponding to each target phase difference. The Laplacian pyramids of the images corresponding to each target phase difference are fused to obtain a fused Laplacian pyramid, and reconstruction is performed according to the fused Laplacian pyramid to obtain a full-focus image. Because the image is synthesized from low-frequency contours and high-frequency details, the boundaries between regions appear more natural, and the authenticity and definition of the image are improved.
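For illustration only, the following sketch follows steps 1302 to 1308 for two source images with fixed, equal weights (in the patent the weights would instead be adjusted according to blur degree or depth of field); it assumes OpenCV and NumPy, and the function names are hypothetical:

import cv2
import numpy as np

def gaussian_pyramid(img, levels=5):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))           # convolution + downsampling (step 1302)
    return gp

def laplacian_pyramid(gp):
    lp = []
    for i in range(len(gp) - 1):
        up = cv2.pyrUp(gp[i + 1])
        up = cv2.resize(up, (gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)                    # L_i = G_i - upsample(G_{i+1}) (step 1304)
    lp.append(gp[-1])                            # keep the top Gaussian level for reconstruction
    return lp

def fuse_pyramids(lp_a, lp_b, weight_a=0.5, weight_b=0.5):
    # Step 1306: per-level weighted fusion; real weights could depend on blur/depth maps.
    return [weight_a * a + weight_b * b for a, b in zip(lp_a, lp_b)]

def reconstruct(lp_fused):
    # Step 1308: start from the top level and add the fused detail levels back in.
    img = lp_fused[-1]
    for lvl in reversed(lp_fused[:-1]):
        img = cv2.pyrUp(img)
        img = cv2.resize(img, (lvl.shape[1], lvl.shape[0]))
        img = img + lvl
    return np.clip(img, 0, 255).astype(np.uint8)

# Usage sketch (image_a, image_b are two focused grayscale images of the same size):
# fused = reconstruct(fuse_pyramids(laplacian_pyramid(gaussian_pyramid(image_a)),
#                                   laplacian_pyramid(gaussian_pyramid(image_b))))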
In one embodiment, fig. 14 shows a schematic flowchart of synthesizing a full-focus image in another embodiment. Synthesizing the images corresponding to each target phase difference to obtain a full-focus image includes the following steps:
step 1402, extracting features of the image corresponding to each target phase difference.
Specifically, fig. 15 is a schematic flowchart of synthesizing a full-focus image in yet another embodiment. The electronic device performs convolution processing on the image corresponding to each target phase difference using a convolutional neural network and extracts features. For example, in fig. 15, image 1 goes through convolution → feature extraction → convolution, and image 2 likewise goes through convolution → feature extraction → convolution.
Step 1404, fusing the features of the image corresponding to each target phase difference to obtain a first image feature.
Specifically, the electronic device fuses features of images corresponding to each target phase difference, and obtains first image features through activation function calculation.
Step 1406, the images corresponding to each target phase difference are averaged to obtain an average image.
Specifically, the electronic device performs an averaging process on the luminance value of the image corresponding to each target phase difference to obtain an average image.
Step 1408, extracting features according to the average image and the first image features to obtain second image features.
Specifically, the electronic device performs feature extraction according to the average image and the first image feature to obtain a second image feature.
And step 1410, performing feature reconstruction according to the second image features and the average image to obtain a full-focus image.
Specifically, the electronic device performs feature reconstruction according to the second image features and the average image to obtain a full-focus image.
The image processing method in the embodiment of the application extracts the features of the image corresponding to each target phase difference and fuses these features to obtain a first image feature. The images corresponding to each target phase difference are averaged to obtain an average image, features are extracted from the average image and the first image feature to obtain a second image feature, and feature reconstruction is performed according to the second image feature and the average image to obtain a full-focus image. The images can thus be synthesized by means of a neural network, which improves the accuracy and definition of the image synthesis.
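The patent does not disclose a concrete network architecture; purely as a hypothetical sketch of the data flow described in steps 1402 to 1410 (per-image feature extraction, feature fusion, an average-image branch, and feature reconstruction), a PyTorch layout might look as follows, with all layer sizes being illustrative assumptions:

import torch
import torch.nn as nn

class FusionNet(nn.Module):
    # Hypothetical layout of steps 1402-1410; channel counts are illustrative only.
    def __init__(self, channels=1, feat=16):
        super().__init__()
        self.extract = nn.Sequential(            # step 1402: per-image feature extraction
            nn.Conv2d(channels, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.fuse = nn.Sequential(               # step 1404: fuse features -> first image feature
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.refine = nn.Sequential(             # step 1408: average image + first feature -> second feature
            nn.Conv2d(feat + channels, feat, 3, padding=1), nn.ReLU())
        self.reconstruct = nn.Conv2d(feat + channels, channels, 3, padding=1)  # step 1410

    def forward(self, images):                   # images: list of (N, C, H, W) tensors
        feats = [self.extract(img) for img in images]
        first_feat = self.fuse(torch.stack(feats, dim=0).max(dim=0).values)
        avg_img = torch.stack(images, dim=0).mean(dim=0)   # step 1406: average image
        second_feat = self.refine(torch.cat([first_feat, avg_img], dim=1))
        return self.reconstruct(torch.cat([second_feat, avg_img], dim=1))

# Usage sketch: net = FusionNet(); out = net([img_a, img_b])  # img_* are (1, 1, H, W) tensors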
In one embodiment, acquiring a preview image includes: acquiring a region of interest in a preview image; dividing the preview image into at least two sub-regions, including: the region of interest in the preview image is divided into at least two sub-regions.
The region of interest (ROI) is a region to be processed that is delineated from the image, in image processing, by a box, circle, ellipse, irregular polygon, or the like. The region of interest may contain the background as well as objects. The electronic device receives a trigger instruction on the first preview image and acquires the region of interest selected by the user according to the trigger instruction.
In particular, the electronic device divides the region of interest into at least two sub-regions. The electronic device may divide the region of interest selected by the user into N × N sub-regions. Alternatively, the electronic device may divide the region of interest selected by the user into N × M sub-regions, and the like, without being limited thereto. N and M are both positive integers.
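A short illustrative sketch of dividing a selected region of interest into N x M sub-regions (the helper name and the ROI format are assumptions):

import numpy as np

def divide_roi(image, roi, n=3, m=3):
    # roi = (x, y, width, height); returns an n x m nested list of sub-region arrays.
    x, y, w, h = roi
    region = image[y:y + h, x:x + w]
    rows = np.array_split(np.arange(h), n)
    cols = np.array_split(np.arange(w), m)
    return [[region[r[0]:r[-1] + 1, c[0]:c[-1] + 1] for c in cols] for r in rows]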
According to the image processing method in the embodiment of the application, the region of interest in the preview image is obtained, the region of interest in the preview image is divided into at least two sub-regions, focusing can be performed according to the region of interest, accordingly, the scene in the region of interest is guaranteed to be clear, and the image definition of the region of interest in the full-focus image is improved.
In one embodiment, determining at least two target phase differences from the phase difference corresponding to each sub-region comprises: acquiring a scene mode; at least two target phase differences are determined from the scene mode.
Specifically, each scene mode may correspond to different types of target phase differences. The scene mode may be, for example, a night scene mode or a panorama mode, but is not limited thereto. For example, the target phase differences corresponding to scene mode A are two: a foreground phase difference and a background phase difference. The target phase differences corresponding to scene mode B may include a foreground phase difference, a median phase difference, a background phase difference, and the like, without being limited thereto.
According to the image processing method in the embodiment of the application, the scene mode is obtained, the at least two target phase differences are determined according to the scene mode, the target phase differences can be rapidly determined according to different scene modes, the corresponding effects of different scenes are achieved, and the image processing efficiency and the definition of the image effect are improved.
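Purely as an assumed illustration, the correspondence between a scene mode and its set of target phase differences could be expressed as a simple lookup, for example:

def target_phase_differences(scene_mode, foreground_pd, median_pd, background_pd):
    # Hypothetical mapping; the actual correspondence would be defined per product.
    mapping = {
        "scene_a": [foreground_pd, background_pd],
        "scene_b": [foreground_pd, median_pd, background_pd],
    }
    return mapping.get(scene_mode, [foreground_pd, background_pd])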
It should be understood that although the various steps in the flowcharts of figs. 2, 10, 13 and 14 are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 2, 10, 13 and 14 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 16 is a block diagram showing the configuration of an image processing apparatus according to an embodiment. As shown in fig. 16, an image processing apparatus includes a preview image acquisition module 1602, a dividing module 1604, a phase difference acquisition module 1606, a focusing module 1608, and a synthesizing module 1610, wherein:
a preview image obtaining module 1602, configured to obtain a preview image.
A dividing module 1604 for dividing the preview image into at least two sub-regions.
A phase difference obtaining module 1606, configured to obtain a phase difference corresponding to each of the at least two sub-regions.
The phase difference obtaining module 1606 is further configured to determine at least two target phase differences from the phase difference corresponding to each sub-region, where the at least two target phase differences include a target foreground phase difference and a target background phase difference.
The focusing module 1608 is configured to perform focusing according to each target phase difference to obtain an image corresponding to each target phase difference.
And a synthesizing module 1610, configured to synthesize an image corresponding to each target phase difference to obtain a full-focus image.
The image processing apparatus in this embodiment obtains a preview image, divides the preview image into at least two sub-regions, obtains a phase difference corresponding to each of the at least two sub-regions, and determines at least two target phase differences from the phase difference corresponding to each sub-region, where the at least two target phase differences include a target foreground phase difference and a target background phase difference. Focusing is performed according to each target phase difference to obtain an image corresponding to each target phase difference, so that at least two images with different focuses can be obtained, one with the background in focus and one with the foreground in focus. The images corresponding to each target phase difference are synthesized to obtain a full-focus image, so that an image with fewer out-of-focus regions can be obtained and the definition of the image is improved.
In one embodiment, the phase difference obtaining module 1606 is configured to divide a phase difference corresponding to each of the at least two sub-regions into a foreground phase difference set and a background phase difference set; acquiring a first phase difference mean value corresponding to the foreground phase difference set; acquiring a second phase difference mean value corresponding to the background phase difference set; taking the first phase difference mean value as a target foreground phase difference; and taking the second phase difference mean value as the target background phase difference.
In the image processing apparatus in this embodiment, a phase difference corresponding to each of at least two sub-regions is divided into a foreground phase difference set and a background phase difference set, a first phase difference mean value corresponding to the foreground phase difference set is obtained, a second phase difference mean value corresponding to the background phase difference set is obtained, the first phase difference mean value is used as a target foreground phase difference, the second phase difference mean value is used as a target background phase difference, a foreground quasi-focus image and a background quasi-focus image can be obtained according to the mean values, and the definition of the image is improved.
In an embodiment, the phase difference obtaining module 1606 is configured to exclude a maximum phase difference from phase differences corresponding to the sub-regions, so as to obtain a remaining phase difference set; and dividing the residual phase difference set into a foreground phase difference set and a background phase difference set.
In the image processing apparatus in this embodiment, because the farthest background details are often unimportant, the largest phase difference in the phase differences corresponding to the sub-regions is excluded to obtain the remaining phase difference set, the farthest background can be excluded, the remaining phase difference set is divided into the foreground phase difference set and the background phase difference set, focusing is performed according to the mean value, and the sharpness of the image can be improved.
In one embodiment, the phase difference obtaining module 1606 is configured to obtain a maximum phase difference and a minimum phase difference of the phase differences of the at least two sub-regions; taking the minimum phase difference as a foreground phase difference; the maximum phase difference is taken as the background phase difference.
The image processing apparatus in this embodiment can acquire only two images and combine the two images by acquiring the maximum phase difference and the minimum phase difference among the phase differences of at least two sub-regions, taking the minimum phase difference as a foreground phase difference and the maximum phase difference as a background phase difference, thereby improving the image definition and improving the image processing efficiency.
In one embodiment, an electronic device includes an image sensor including a plurality of pixel groups arranged in an array, each pixel group including M × N pixels arranged in an array; each pixel point corresponds to a photosensitive unit, wherein M and N are both natural numbers which are greater than or equal to 2. The phase difference obtaining module 1606 is configured to obtain a target luminance map according to the luminance values of the pixels included in each pixel group; carrying out segmentation processing on the target brightness graph, and obtaining a first segmentation brightness graph and a second segmentation brightness graph according to the segmentation processing result; determining the phase difference of the matched pixels according to the position difference of the matched pixels in the first segmentation luminance graph and the second segmentation luminance graph; and determining the phase difference corresponding to each subarea in the at least two subareas according to the phase differences of the mutually matched pixels.
In the image processing apparatus of this embodiment, the target luminance map is obtained according to the luminance values of the pixels included in each pixel group of the image sensor. After the target luminance map is obtained, it is sliced, and a first sliced luminance map and a second sliced luminance map are obtained from the slicing result. The phase difference of mutually matched pixels is then determined according to the position difference of the mutually matched pixels in the first sliced luminance map and the second sliced luminance map, and the phase difference corresponding to each of the at least two sub-regions is determined from the phase differences of the mutually matched pixels. A target phase difference map can thus be obtained from the luminance values of the pixels included in each pixel group of the image sensor. Compared with obtaining the phase difference from sparsely arranged phase detection pixels, the phase differences of the mutually matched pixels in this embodiment of the application contain relatively abundant phase difference information, so the accuracy of the obtained phase difference can be improved.
In an embodiment, the phase difference obtaining module 1606 is configured to perform segmentation processing on the target luminance map to obtain a plurality of luminance map regions, where each luminance map region includes a row of pixels in the target luminance map, or each luminance map region includes a column of pixels in the target luminance map; acquiring a plurality of first brightness map areas and a plurality of second brightness map areas from the plurality of brightness map areas, wherein the first brightness map areas comprise pixels in even rows in the target brightness map, or the first brightness map areas comprise pixels in even columns in the target brightness map, and the second brightness map areas comprise pixels in odd rows in the target brightness map, or the second brightness map areas comprise pixels in odd columns in the target brightness map; the first segmentation luminance map is composed of a plurality of first luminance map regions, and the second segmentation luminance map is composed of a plurality of second luminance map regions.
The image processing device in the embodiment of the application does not need to shield pixel points to obtain the phase difference, relatively abundant phase difference information is obtained in a brightness segmentation mode, and the accuracy of the obtained phase difference is improved.
In one embodiment, the phase difference obtaining module 1606 is configured to, when it is detected that a horizontal line is included in the sub-region, take the vertical phase difference as a phase difference corresponding to the sub-region; and when detecting that the horizontal line is not contained in the subarea, taking the horizontal phase difference as the phase difference corresponding to the subarea.
In the image processing apparatus in the embodiment of the application, due to certain characteristics of the camera, a phase difference exists during imaging; for example, lines may suffer from smearing and similar problems. When it is detected that a sub-region contains horizontal lines, the vertical phase difference is adopted as the phase difference corresponding to the sub-region; when it is detected that a sub-region contains vertical lines, the horizontal phase difference is adopted as the phase difference corresponding to the sub-region. In this way, the accuracy of the acquired phase difference can be improved, and the image definition is improved.
In one embodiment, the focusing module 1608 is configured to use the sub-area corresponding to each target phase difference as a focusing area, and obtain an image corresponding to each target phase difference.
The image processing apparatus in this embodiment obtains an image corresponding to each target phase difference by using the sub-area corresponding to each target phase difference as a focusing area, and can obtain images of different focuses to synthesize the images, thereby improving the sharpness of the images.
In one embodiment, the composition module 1610 is configured to divide the image corresponding to each target phase difference into the same number of sub-image regions; acquiring the definition corresponding to each sub-image area; determining the sub-image area with the highest definition in the matched sub-image areas according to the definition corresponding to each sub-image area; and synthesizing the sub-image areas with the highest definition to obtain a full-focus image.
The image processing device in the embodiment of the application divides the image corresponding to each target phase difference into the sub-image areas with the same number; acquiring the definition corresponding to each sub-image area; determining the sub-image area with the highest definition in the matched sub-image areas according to the definition corresponding to each sub-image area; and synthesizing the sub-image region with the highest definition to obtain a full-focus image, so that the full-focus image can be quickly obtained, and the image processing efficiency is improved.
In one embodiment, the synthesis module 1610 is configured to perform convolution and sampling processing on an image corresponding to each target phase difference, and when a preset iteration condition is met, obtain a gaussian pyramid of the image corresponding to each target phase difference; processing each layer of image in the Gaussian pyramid of the image corresponding to each target phase difference to obtain a Laplacian pyramid of the image corresponding to each target phase difference; fusing the Laplacian pyramid of the image corresponding to each target phase difference to obtain a fused Laplacian pyramid; and carrying out reconstruction processing according to the fused Laplacian pyramid to obtain a full-focus image.
The image processing apparatus in this embodiment performs convolution and sampling processing on the image corresponding to each target phase difference and, when a preset iteration condition is satisfied, obtains the Gaussian pyramid of the image corresponding to each target phase difference. Each layer of the Gaussian pyramid is processed to obtain the Laplacian pyramid of the image corresponding to each target phase difference. The Laplacian pyramids of the images corresponding to each target phase difference are fused to obtain a fused Laplacian pyramid, and reconstruction is performed according to the fused Laplacian pyramid to obtain a full-focus image, so that the boundaries between regions appear more natural and the authenticity and definition of the image are improved.
In one embodiment, the synthesis module 1610 is configured to extract features of the image corresponding to each target phase difference; fusing the characteristics of the image corresponding to each target phase difference to obtain first image characteristics; averaging the images corresponding to each target phase difference to obtain an average image; performing feature extraction according to the average image and the first image feature to obtain a second image feature; and performing characteristic reconstruction according to the second image characteristic and the average image to obtain a full-focus image.
The image processing apparatus in the embodiment of the application extracts the features of the image corresponding to each target phase difference and fuses these features to obtain a first image feature. The images corresponding to each target phase difference are averaged to obtain an average image, features are extracted from the average image and the first image feature to obtain a second image feature, and feature reconstruction is performed according to the second image feature and the average image to obtain a full-focus image. The images can thus be synthesized by means of a neural network, which improves the accuracy and definition of the image synthesis.
In one embodiment, the preview image capture module 1602 is configured to capture a region of interest in a preview image. The dividing module 1604 is configured to divide the region of interest in the preview image into at least two sub-regions.
The image processing device in the embodiment of the application acquires the region of interest in the preview image, divides the region of interest in the preview image into at least two sub-regions and can focus according to the region of interest, so that the scene in the region of interest is clear, and the image definition of the region of interest in the full-focus image is improved.
In one embodiment, the phase difference acquisition module 1606 is configured to acquire a scene mode; at least two target phase differences are determined from the scene mode.
The image processing device in the embodiment of the application acquires the scene mode, determines the at least two target phase differences according to the scene mode, can quickly determine the target phase differences according to different scene modes, achieves the corresponding effects of different scenes, and improves the image processing efficiency and the definition of the image effect.
The division of the modules in the image processing apparatus is only for illustration, and in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
For specific limitations of the image processing apparatus, reference may be made to the above limitations of the image processing method, which are not described herein again. The respective modules in the image processing apparatus described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
Fig. 17 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 17, the electronic device includes a processor and a memory connected by a system bus. The processor is used to provide computing and control capabilities and to support the operation of the entire electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image processing method provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
The implementation of each module in the image processing apparatus provided in the embodiment of the present application may be in the form of a computer program. The computer program may be run on a terminal or a server. The program modules constituted by the computer program may be stored on the memory of the terminal or the server. Which when executed by a processor, performs the steps of the method described in the embodiments of the present application.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image processing method.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform an image processing method.
Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The above-mentioned embodiments merely express several embodiments of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. An image processing method applied to an electronic device includes:
acquiring a preview image;
dividing the preview image into at least two sub-regions;
acquiring a phase difference corresponding to each of the at least two sub-regions, including: acquiring a first phase difference and a second phase difference corresponding to the same sub-region, and determining the phase difference corresponding to each sub-region based on the first phase difference and the second phase difference corresponding to the same sub-region, wherein a first direction corresponding to the first phase difference and a second direction corresponding to the second phase difference form a preset included angle;
determining at least two target phase differences from the phase differences corresponding to each sub-region, wherein the at least two target phase differences comprise a target foreground phase difference and a target background phase difference;
focusing according to each target phase difference to obtain an image corresponding to each target phase difference;
and synthesizing the images corresponding to the target phase differences to obtain a full-focus image.
2. The method according to claim 1, wherein the determining at least two target phase differences from the phase differences corresponding to each sub-region, wherein the at least two target phase differences include a target foreground phase difference and a target background phase difference, comprises:
dividing the phase difference corresponding to each sub-area in the at least two sub-areas into a foreground phase difference set and a background phase difference set;
acquiring a first phase difference mean value corresponding to the foreground phase difference set;
acquiring a second phase difference mean value corresponding to the background phase difference set;
taking the first phase difference average value as the target foreground phase difference;
and taking the second phase difference average value as the target background phase difference.
3. The method of claim 2, further comprising:
eliminating the maximum phase difference in the phase differences corresponding to the subareas to obtain a residual phase difference set;
the dividing the phase difference corresponding to each sub-region of the at least two sub-regions into a foreground phase difference set and a background phase difference set includes:
and dividing the residual phase difference set into a foreground phase difference set and a background phase difference set.
4. The method according to claim 1, wherein the determining at least two target phase differences from the phase differences corresponding to each sub-region, wherein the at least two target phase differences include a foreground phase difference and a background phase difference, comprises:
acquiring the maximum phase difference and the minimum phase difference in the phase differences of at least two subregions;
taking the minimum phase difference as a foreground phase difference;
and taking the maximum phase difference as a background phase difference.
5. The method of claim 1, wherein the electronic device comprises an image sensor comprising a plurality of pixel groups arranged in an array, each pixel group comprising M x N pixels arranged in an array; each pixel point corresponds to a photosensitive unit, wherein M and N are both natural numbers greater than or equal to 2;
the acquiring a phase difference corresponding to each of the at least two sub-regions includes:
acquiring a target brightness map according to the brightness values of the pixel points included in each pixel point group;
carrying out segmentation processing on the target brightness graph, and obtaining a first segmentation brightness graph and a second segmentation brightness graph according to the segmentation processing result;
determining the phase difference of the matched pixels according to the position difference of the matched pixels in the first segmentation luminance graph and the second segmentation luminance graph;
and determining the phase difference corresponding to each subarea in the at least two subareas according to the phase difference of the mutually matched pixels.
6. The method according to claim 5, wherein the performing the segmentation process on the target luminance map to obtain a first segmentation luminance map and a second segmentation luminance map according to a result of the segmentation process comprises:
performing segmentation processing on the target brightness map to obtain a plurality of brightness map regions, wherein each brightness map region comprises a row of pixels in the target brightness map, or each brightness map region comprises a column of pixels in the target brightness map;
acquiring a plurality of first luminance map regions and a plurality of second luminance map regions from the plurality of luminance map regions, wherein the first luminance map regions comprise pixels in even rows in the target luminance map, or the first luminance map regions comprise pixels in even columns in the target luminance map, and the second luminance map regions comprise pixels in odd rows in the target luminance map, or the second luminance map regions comprise pixels in odd columns in the target luminance map;
and forming the first segmentation luminance map by using the plurality of first luminance map regions, and forming the second segmentation luminance map by using the plurality of second luminance map regions.
7. The method of claim 1, wherein the electronic device comprises an image sensor comprising a plurality of pixel groups arranged in an array, each pixel group comprising M x N pixels arranged in an array; each pixel point corresponds to a photosensitive unit, wherein M and N are both natural numbers greater than or equal to 2; the phase difference corresponding to each subregion comprises a horizontal phase difference and a vertical phase difference;
the acquiring a phase difference corresponding to each of the at least two sub-regions includes:
when the fact that the sub-area contains the horizontal line is detected, taking the vertical phase difference as the phase difference corresponding to the sub-area;
and when detecting that no horizontal line is contained in the sub-area, taking the horizontal phase difference as the phase difference corresponding to the sub-area.
8. The method according to claim 1, wherein the focusing according to each target phase difference to obtain an image corresponding to each target phase difference comprises:
and taking the sub-area corresponding to each target phase difference as a focusing area to obtain an image corresponding to each target phase difference.
9. The method according to any one of claims 1 to 8, wherein the synthesizing according to the image corresponding to each target phase difference to obtain a full-focus image comprises:
dividing the image corresponding to each target phase difference into sub-image areas with the same quantity;
acquiring the definition corresponding to each sub-image area;
determining the sub-image area with the highest definition in the matched sub-image areas according to the definition corresponding to each sub-image area;
and splicing and synthesizing the sub-image areas with the highest definition to obtain a full-focus image.
10. The method according to any one of claims 1 to 8, wherein the synthesizing according to the image corresponding to each target phase difference to obtain a full-focus image comprises:
performing convolution and sampling processing on the image corresponding to each target phase difference, and obtaining a Gaussian pyramid of the image corresponding to each target phase difference when a preset iteration condition is met;
processing each layer of image in the Gaussian pyramid of the image corresponding to each target phase difference to obtain a Laplacian pyramid of the image corresponding to each target phase difference;
fusing the Laplacian pyramid of the image corresponding to each target phase difference to obtain a fused Laplacian pyramid;
and carrying out reconstruction processing according to the fused Laplacian pyramid to obtain a full-focus image.
11. The method according to any one of claims 1 to 8, wherein the synthesizing according to the image corresponding to each target phase difference to obtain a full-focus image comprises:
extracting the characteristics of the image corresponding to each target phase difference;
fusing the characteristics of the image corresponding to each target phase difference to obtain first image characteristics;
averaging the images corresponding to each target phase difference to obtain an average image;
performing feature extraction according to the average image and the first image feature to obtain a second image feature;
and performing characteristic reconstruction according to the second image characteristic and the average image to obtain a full-focus image.
12. The method according to any one of claims 1 to 8, wherein the obtaining the preview image comprises:
acquiring a region of interest in a preview image;
the dividing the preview image into at least two sub-regions comprises:
the region of interest in the preview image is divided into at least two sub-regions.
13. An image processing apparatus characterized by comprising:
the preview image acquisition module is used for acquiring a preview image;
a dividing module, configured to divide the preview image into at least two sub-regions;
a phase difference obtaining module, configured to obtain a phase difference corresponding to each of the at least two sub-areas, where the phase difference obtaining module includes: acquiring a first phase difference and a second phase difference corresponding to the same sub-region, and determining the phase difference corresponding to each sub-region based on the first phase difference and the second phase difference corresponding to the same sub-region, wherein a first direction corresponding to the first phase difference and a second direction corresponding to the second phase difference form a preset included angle;
the phase difference obtaining module is further configured to determine at least two target phase differences from the phase difference corresponding to each sub-region, where the at least two target phase differences include a target foreground phase difference and a target background phase difference;
the focusing module is used for focusing according to each target phase difference to obtain an image corresponding to each target phase difference;
and the synthesis module is used for synthesizing the images corresponding to the target phase differences to obtain a full-focus image.
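The phase difference obtaining module in claim 13 combines phase differences measured along two directions that form a preset included angle; one plausible realization, sketched below under assumptions not stated in the patent (a perpendicular direction pair, a confidence map per direction, and a keep-the-more-reliable-measurement rule), selects one of the two measurements per sub-region.

import numpy as np

def per_subregion_phase_difference(pd_dir1, pd_dir2, conf_dir1, conf_dir2):
    # All inputs are (rows, cols) arrays, one value per sub-region.
    # Keep the phase difference from whichever direction is measured more reliably.
    return np.where(conf_dir1 >= conf_dir2, pd_dir1, pd_dir2)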
14. An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the image processing method according to any one of claims 1 to 12.
15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 12.
CN201911101432.0A 2019-11-12 2019-11-12 Image processing method and device, electronic equipment and computer readable storage medium Active CN112866549B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911101432.0A CN112866549B (en) 2019-11-12 2019-11-12 Image processing method and device, electronic equipment and computer readable storage medium
PCT/CN2020/126122 WO2021093635A1 (en) 2019-11-12 2020-11-03 Image processing method and apparatus, electronic device, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911101432.0A CN112866549B (en) 2019-11-12 2019-11-12 Image processing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112866549A CN112866549A (en) 2021-05-28
CN112866549B CN112866549B (en) 2022-04-12

Family

ID=75912513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911101432.0A Active CN112866549B (en) 2019-11-12 2019-11-12 Image processing method and device, electronic equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN112866549B (en)
WO (1) WO2021093635A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113259596B (en) * 2021-07-14 2021-10-08 北京小米移动软件有限公司 Image generation method, phase detection focusing method and device
CN113468702B (en) * 2021-07-22 2024-03-22 久瓴(江苏)数字智能科技有限公司 Pipeline arrangement method, pipeline arrangement device and computer readable storage medium
CN113962859B (en) * 2021-10-26 2023-05-09 北京有竹居网络技术有限公司 Panorama generation method, device, equipment and medium
CN114040081A (en) * 2021-11-30 2022-02-11 维沃移动通信有限公司 Image sensor, camera module, electronic device, focusing method and medium
CN115022535B (en) * 2022-05-20 2024-03-08 深圳福鸽科技有限公司 Image processing method and device and electronic equipment
CN115314635B (en) * 2022-08-03 2024-03-26 Oppo广东移动通信有限公司 Model training method and device for defocus determination

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5615756B2 (en) * 2011-03-31 2014-10-29 富士フイルム株式会社 Imaging apparatus and imaging program
KR102149463B1 (en) * 2014-02-19 2020-08-28 삼성전자주식회사 Electronic device and method for processing image
CN105100615B (en) * 2015-07-24 2019-02-26 青岛海信移动通信技术股份有限公司 A kind of method for previewing of image, device and terminal
CN106454289B (en) * 2016-11-29 2018-01-23 广东欧珀移动通信有限公司 Control method, control device and electronic installation
CN110166680B (en) * 2019-06-28 2021-08-24 Oppo广东移动通信有限公司 Device imaging method and device, storage medium and electronic device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1375053A (en) * 1999-07-14 2002-10-16 索威森公司 Method and system for measuring the relief of an object
CN105120154A (en) * 2015-08-20 2015-12-02 深圳市金立通信设备有限公司 Image processing method and terminal
CN106060407A (en) * 2016-07-29 2016-10-26 努比亚技术有限公司 Focusing method and terminal
CN106572305A (en) * 2016-11-03 2017-04-19 乐视控股(北京)有限公司 Image shooting method, image processing method, apparatuses and electronic device

Also Published As

Publication number Publication date
CN112866549A (en) 2021-05-28
WO2021093635A1 (en) 2021-05-20

Similar Documents

Publication Publication Date Title
CN112866549B (en) Image processing method and device, electronic equipment and computer readable storage medium
KR102278776B1 (en) Image processing method, apparatus, and apparatus
CN109089047B (en) Method and device for controlling focusing, storage medium and electronic equipment
KR102279436B1 (en) Image processing methods, devices and devices
CN110536057B (en) Image processing method and device, electronic equipment and computer readable storage medium
KR101214536B1 (en) Method for performing out-focus using depth information and camera using the same
US9581436B2 (en) Image processing device, image capturing apparatus, and image processing method
JP6802372B2 (en) Shooting method and terminal for terminal
KR20200031168A (en) Image processing method and mobile terminal using dual cameras
US9369693B2 (en) Stereoscopic imaging device and shading correction method
KR20120068655A (en) Method and camera device for capturing iris or subject of good quality with one bandpass filter passing both visible ray and near infra red ray
CN110035206B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109951641B (en) Image shooting method and device, electronic equipment and computer readable storage medium
CN112087571A (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN112991245A (en) Double-shot blurring processing method and device, electronic equipment and readable storage medium
JP6544978B2 (en) Image output apparatus, control method therefor, imaging apparatus, program
CN112019734B (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN112866553B (en) Focusing method and device, electronic equipment and computer readable storage medium
KR20170110695A (en) Image processing method, image processing apparatus and image capturing apparatus
CN112866655B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112866675B (en) Depth map generation method and device, electronic equipment and computer-readable storage medium
JP2019016975A (en) Image processing system and image processing method, imaging apparatus, program
CN112866547B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN112866554B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN112866546B (en) Focusing method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant