CN113709365A - Image processing method, image processing device, electronic equipment and storage medium


Info

Publication number: CN113709365A (application CN202110962242.9A; granted as CN113709365B)
Authority: CN (China)
Prior art keywords: pixel point, image, diffusion, pixel, weight
Inventor: 李章宇
Assignee (original and current): Guangdong Oppo Mobile Telecommunications Corp Ltd
Legal status: Active (granted)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951: Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H04N 5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An image processing method, an image processing apparatus, an electronic device, and a storage medium. The method includes: acquiring an aperture shape parameter, and acquiring a blurring strength map and a weight map corresponding to a first image, where the blurring strength map includes at least the blurring radius corresponding to each first pixel point in a background region of the first image, and the weight map includes at least the diffusion weight corresponding to each first pixel point in the background region; performing point diffusion processing on each first pixel point in the background region according to the aperture shape parameter, the blurring strength map, and the weight map to obtain a diffusion result corresponding to each first pixel point; and superimposing the diffusion results corresponding to the first pixel points in the background region, and blurring the background region according to the superposition result. Implementing the embodiments of the present application can provide a natural and realistic image blurring effect.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of imaging technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
At present, most image blurring effects are realized by Gaussian blur and light-spot mapping, where light-spot mapping mainly blurs the brighter light sources in the image. In practice, however, although both Gaussian blur and light-spot mapping can achieve a blurring effect, the two techniques are disjoint, which makes the blurring effect of the whole image unnatural and unrealistic.
Disclosure of Invention
The embodiments of the present application disclose an image processing method and apparatus, an electronic device, and a storage medium, which can provide a natural and realistic image blurring effect.
An embodiment of the present application discloses an image processing method, including: acquiring an aperture shape parameter, and acquiring a blurring strength map and a weight map corresponding to a first image, where the blurring strength map includes at least the blurring radius corresponding to each first pixel point in a background region of the first image, and the weight map includes at least the diffusion weight corresponding to each first pixel point in the background region; performing point diffusion processing on each first pixel point in the background region of the first image according to the aperture shape parameter, the blurring strength map, and the weight map to obtain a diffusion result corresponding to each first pixel point; and superimposing the diffusion results corresponding to the first pixel points in the background region, and blurring the background region according to the superposition result.
An embodiment of the present application discloses an image processing apparatus, including: an acquisition module, configured to acquire an aperture shape parameter and acquire a blurring strength map and a weight map corresponding to a first image, where the blurring strength map includes at least the blurring radius corresponding to each first pixel point in a background region of the first image, and the weight map includes at least the diffusion weight corresponding to each first pixel point in the background region; a diffusion module, configured to perform point diffusion processing on each first pixel point in the background region of the first image according to the aperture shape parameter, the blurring strength map, and the weight map to obtain a diffusion result corresponding to each first pixel point; and a superposition blurring module, configured to superimpose the diffusion results corresponding to the first pixel points in the background region and blur the background region according to the superposition result.
The embodiment of the application discloses an electronic device, which comprises a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor is enabled to execute any image processing method disclosed by the embodiment of the application.
The embodiment of the application discloses a computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, and when the computer program is executed by a processor, the computer program realizes any one of the image processing methods disclosed by the embodiment of the application.
Compared with the related art, the embodiment of the application has the following beneficial effects:
The electronic device can acquire the aperture shape parameter and acquire the blurring strength map and the weight map corresponding to the first image. The blurring strength map includes at least the blurring radius corresponding to each first pixel point in the background region of the first image, and the weight map includes at least the diffusion weight corresponding to each first pixel point in the background region.
The electronic device can further perform point diffusion processing on each first pixel point in the background region of the first image according to the aperture shape parameter, the blurring strength map, and the weight map to obtain a diffusion result corresponding to each first pixel point, and superimpose the diffusion results so as to blur the background region according to the superposition result. In this way, real lens blur can be simulated through the point diffusion processing, and the problem that Gaussian blur and light-spot mapping are disjoint is solved: the overall blurring effect of the image is harmonious, natural, and unified, the natural blurring effect of single-lens-reflex (SLR) shooting can be simulated, and the texture of the blurring effect is greatly improved. In addition, because the electronic device performs point diffusion processing on each first pixel point in the background region, no light-source-point position identification is needed, so the degradation of the blurring effect caused by inaccurate light-source-point identification can be avoided, which helps provide a more real and natural blurring effect.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required by the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of an image processing circuit according to an embodiment;
FIG. 2 is a flowchart of an image processing method according to an embodiment;
FIG. 3 is an example diagram of superimposing the diffusion results corresponding to two adjacent first pixel points according to an embodiment;
FIG. 4 is a flowchart of another image processing method according to an embodiment;
FIG. 5 is a flowchart of a method for generating a background mask according to an embodiment;
FIG. 6 is a flowchart of a method for dimming a background region according to an embodiment;
FIG. 7 is a flowchart of a method for color enhancement of highlight regions according to an embodiment;
FIG. 8 is a flowchart of a method for obtaining a luminance weight map according to an embodiment;
FIG. 9 is a flowchart of a method for obtaining a chrominance weight map according to an embodiment;
FIG. 10 is an example diagram of a first curve, a second curve, and a weight mapping curve according to an embodiment;
FIG. 11 is a flowchart of an image processing method according to an exemplary embodiment;
FIG. 12 is a schematic structural diagram of an image processing apparatus according to an embodiment;
FIG. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and figures of the present application are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the steps or elements listed, but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
In the related art, an image blurring effect may be realized by Gaussian blur and light-spot mapping. Gaussian blur is generally based on gather-style convolution, while light-spot mapping is based on scatter-style pixel accumulation. The two are therefore disjoint, making it difficult to unify the blurring effects they each achieve. Furthermore, Gaussian blur only simulates lens blur and in fact differs from real lens blur. Light-spot mapping depends on light-source-point identification: the positions of light-source points must first be identified in the image and then blurred. If a light-source-point position is identified inaccurately, the blurring effect of the light spots in the image is greatly degraded and differs substantially from real lens blur. In summary, the image blurring effects in the related art are unnatural and unrealistic.
The embodiment of the application discloses an image processing method, an image processing device, electronic equipment and a storage medium, which can provide natural and real image blurring effects. The following are detailed below.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an image processing circuit according to an embodiment. The image processing circuit can be applied to electronic devices such as, but not limited to, smartphones, tablets, and smartwatches. As shown in fig. 1, the image processing circuit may include an imaging device (camera) 110, an attitude sensor 120, an image memory 130, an image signal processing (ISP) processor 140, control logic 150, and a display 160.
The image processing circuitry includes an ISP processor 140 and control logic 150. The image data captured by the imaging device 110 is first processed by the ISP processor 140, and the ISP processor 140 analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the imaging device 110. The imaging device 110 may include one or more lenses 112 and an image sensor 114. Image sensor 114 may include an array of color filters (e.g., Bayer filters), and image sensor 114 may acquire light intensity and wavelength information captured by each imaging pixel and provide a set of RAW image data (RAW image data) that may be processed by ISP processor 140. The attitude sensor 120 (e.g., a three-axis gyroscope, hall sensor, accelerometer, etc.) may provide parameters of the acquired image processing (e.g., anti-shake parameters) to the ISP processor 140 based on the type of interface of the attitude sensor 120. The attitude sensor 120 interface may employ an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination thereof.
In addition, the image sensor 114 may also transmit raw image data to the attitude sensor 120, the attitude sensor 120 may provide the raw image data to the ISP processor 140 based on the type of interface of the attitude sensor 120, or the attitude sensor 120 may store the raw image data in the image memory 130.
The ISP processor 140 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 140 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The ISP processor 140 may also receive image data from the image memory 130. For example, the attitude sensor 120 interface sends raw image data to the image memory 130, and the raw image data in the image memory 130 is then provided to the ISP processor 140 for processing. The image Memory 130 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 114 interface or from the attitude sensor 120 interface or from the image memory 130, the ISP processor 140 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 130 for additional processing before being displayed. ISP processor 140 receives processed data from image memory 130 and performs image data processing on the processed data in the raw domain and in one or more color spaces YUV, RGB, YCbCr, etc. The image data processed by ISP processor 140 may be output to display 160 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the ISP processor 140 may also be sent to the image memory 130, and the display 160 may read image data from the image memory 130. In one embodiment, image memory 130 may be configured to implement one or more frame buffers.
The statistics determined by the ISP processor 140 may be sent to the control logic 150. For example, the statistical data may include image sensor 114 statistics such as gyroscope vibration frequency, auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 112 shading correction, and the like. The control logic 150 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 110 and control parameters of the ISP processor 140 based on the received statistical data. For example, the control parameters of the imaging device 110 may include attitude sensor 120 control parameters (e.g., gain, integration time of exposure control, anti-shake parameters, etc.), camera flash control parameters, camera anti-shake displacement parameters, lens 112 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balancing and color adjustment (e.g., during YUV processing), as well as lens 112 shading correction parameters.
In one embodiment, the imaging device 110 may capture a first image and transmit it to the ISP processor 140 or store it in the image memory 130. The ISP processor 140 may acquire the first image, the aperture shape parameter, and a blurring strength map and a weight map corresponding to the first image, where the blurring strength map may include at least the blurring radius corresponding to each first pixel point in a background region of the first image, and the weight map may include at least the diffusion weight corresponding to each first pixel point in the background region.
The ISP processor 140 may perform point diffusion processing on each first pixel point in the background region of the first image according to the acquired aperture shape parameter, blurring strength map, and weight map, to obtain a diffusion result corresponding to each first pixel point. The ISP processor 140 may further superimpose the diffusion results corresponding to the first pixel points in the background region and blur the background region of the first image according to the superposition result.
In some embodiments, the ISP processor 140 may further send a second image obtained by blurring the background region of the first image to the display 160, so as to display the second image through the display 160.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method of processing an image according to an embodiment of the disclosure, where the method is applicable to the electronic device. As shown in fig. 2, the method may include the steps of:
210. Acquire an aperture shape parameter, and acquire a blurring strength map and a weight map corresponding to a first image.
The aperture shape parameter may be input by a user and may be used to simulate the aperture shapes of different lenses. For example, the aperture shapes that the aperture shape parameter can simulate may include, but are not limited to: circular, heart-shaped, star-shaped, and irregular shapes.
The first image may be an image captured by an imaging device of the electronic device, or an image acquired by the electronic device from another terminal or a service device, which is not limited specifically. The first image may include a foreground region and a background region, the foreground region may refer to a region occupied in the image by a photographic subject such as a human figure, and the background region may refer to a region of the image other than the foreground region in the first image. Taking the foreground region as the portrait region as an example, the electronic device may distinguish the portrait region and the background region of the first image by image recognition methods such as portrait matting or portrait segmentation.
The blurring strength map corresponding to the first image may include at least the blurring radius corresponding to each first pixel point in the background region of the first image. A first pixel point may be any pixel point in the background region of the first image. The blurring radius indicates the degree of blurring: the larger the blurring radius, the more blurred the processed image. In the blurring strength map corresponding to the first image, the blurring radii corresponding to different first pixel points may be the same or different.
The blurring strength map corresponding to the first image may be generated according to the depth information of each first pixel point in the background region of the first image, where the depth information may be used to indicate the physical distance between an object in the background region and the imaging device. For example, the depth information of a first pixel point may be positively correlated with the blurring radius corresponding to that first pixel point. The electronic device may perform depth estimation on the first image through methods such as structured light, Time of Flight (TOF), binocular stereo imaging, monocular phase detection, or monocular depth estimation based on deep learning or machine learning, so as to obtain at least the depth information of each first pixel point in the background region of the first image, but is not limited thereto.
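As a hedged illustration (not from the patent text), the stated positive correlation between depth and blurring radius could be realized as a simple linear mapping; the function name and the max_radius parameter are assumptions introduced for this sketch:

```python
import numpy as np

def depth_to_blur_radius(depth_map: np.ndarray, max_radius: int = 25) -> np.ndarray:
    """Hypothetical linear mapping from depth to blurring radius.

    The embodiment only states a positive correlation: the farther a
    background pixel is from the imaging device, the larger its radius.
    """
    d = depth_map.astype(np.float32)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-6)  # normalize to [0, 1]
    return np.round(d * max_radius).astype(np.int32)
```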
The weight map corresponding to the first image may include at least the diffusion weight corresponding to each first pixel point in the background region of the first image, and can be used to guide the point diffusion processing of the first pixel points. In the weight map, the diffusion weights corresponding to different first pixel points may be the same or different. The weight map may be generated according to the pixel value of each first pixel point in the background region of the first image, where a pixel value may include a luminance component value and a chrominance component value.
Illustratively, the electronic device may perform image data processing of the first image in YUV color space through the ISP processor. The luminance component value of the first pixel may be a Y component value of the first pixel on a Y channel, and the chrominance component value of the first pixel may include a U component value of the first pixel on a U channel and a V component value of the first pixel on a V channel.
220. Perform point diffusion processing on each first pixel point in the background region of the first image according to the aperture shape parameter and the blurring strength map and weight map corresponding to the first image, to obtain a diffusion result corresponding to each first pixel point.
The electronic device may traverse each first pixel point in the background region of the first image. For the currently visited first pixel point, the electronic device may acquire its blurring radius from the blurring strength map corresponding to the first image, acquire its diffusion weight from the weight map corresponding to the first image, and perform point diffusion processing on it according to the aperture shape parameter and its blurring radius and diffusion weight, so as to obtain the diffusion result corresponding to the currently visited first pixel point.
Point diffusion processing may refer to simulating real lens blur by a point spread function (PSF): the pixel value of a first pixel point in the first image is diffused into a specific aperture shape according to the diffusion weight and blurring radius corresponding to that first pixel point.
That is, the diffusion result corresponding to a first pixel point may include the diffusion range corresponding to the first pixel point, the target pixel value of each second pixel point within that diffusion range, and the number of diffusion times of each second pixel point within that diffusion range. The diffusion range is centered on the first pixel point, the radius of its circumscribed circle is the blurring radius corresponding to the first pixel point, and its outer contour may be the aperture shape indicated by the aperture shape parameter. A second pixel point may be any pixel point within the diffusion range corresponding to the first pixel point, so the second pixel points within the diffusion range include the first pixel point located at its center.
230. Superimpose the diffusion results corresponding to the first pixel points in the background region, and blur the background region according to the superposition result.
Superimposing, by the electronic device, the diffusion results corresponding to the first pixel points in the background region may include: superimposing the target pixel values of the second pixel points within the diffusion range corresponding to each first pixel point, and superimposing the diffusion times of the second pixel points within the diffusion range corresponding to each first pixel point.
Therefore, the superposition result may include the superimposed pixel value of each first pixel point in the background region and the superimposed total diffusion count of each first pixel point in the background region.
For example, referring to fig. 3, fig. 3 is an example diagram of superimposing the diffusion results corresponding to two adjacent first pixel points according to an embodiment. As shown in fig. 3, the diffusion range corresponding to the first pixel point 310 may include 9 second pixel points, whose target pixel values may be A1 to A9, respectively. The diffusion range corresponding to the first pixel point 320 may include 9 second pixel points, whose target pixel values may be B1 to B9, respectively.
The superimposed pixel value of the first pixel point 310 may be A1 + B6, and the superimposed pixel value of the first pixel point 320 may be B1 + A2.
The manner in which the electronic device superimposes the diffusion times of the second pixel points within the diffusion range corresponding to each first pixel point is similar to the manner of superimposing their target pixel values, and is not repeated here.
After the electronic device superimposes the diffusion results corresponding to the first pixel points in the background region and obtains the superposition result, it can blur the background region according to the superposition result. Blurring the background region according to the superposition result may include: for each first pixel point in the background region, calculating its blurred pixel value according to its superimposed pixel value and superimposed total diffusion count. For example, the superimposed pixel value of the first pixel point may be divided by its superimposed total diffusion count to obtain the blurred pixel value of the first pixel point as the blurring rendering result.
The electronic device performs point diffusion processing on each first pixel point in the background region according to its corresponding weight. Where the diffusion results overlap one another, bright points of higher brightness in the first image form highlight spots, and dark points of lower brightness form hazy circles of confusion.
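The following single-channel NumPy sketch illustrates the scatter-and-normalize idea described above; it is an illustration under assumptions (function names, a dict of square odd-sized kernels keyed by radius), not the patent's reference implementation:

```python
import numpy as np

def point_spread_blur(img, radius_map, weight_map, kernels, background_mask):
    """Scatter-based blurring sketch for one channel.

    img:             HxW float array (one channel of the first image)
    radius_map:      HxW int array of blurring radii (blurring strength map)
    weight_map:      HxW float array of diffusion weights
    kernels:         dict mapping radius -> (2r+1)x(2r+1) aperture kernel
    background_mask: HxW bool array, True for background (first) pixel points
    """
    h, w = img.shape
    acc = np.zeros((h, w), np.float32)  # accumulated weighted pixel values
    cnt = np.zeros((h, w), np.float32)  # accumulated weights (diffusion counts)
    for y, x in zip(*np.nonzero(background_mask)):
        k = kernels[radius_map[y, x]]   # first aperture kernel for this pixel
        r = k.shape[0] // 2
        y0, y1 = max(y - r, 0), min(y + r + 1, h)
        x0, x1 = max(x - r, 0), min(x + r + 1, w)
        kc = k[r - (y - y0): r + (y1 - y), r - (x - x0): r + (x1 - x)]
        wgt = weight_map[y, x]
        acc[y0:y1, x0:x1] += img[y, x] * wgt * kc  # target pixel values
        cnt[y0:y1, x0:x1] += wgt * kc              # diffusion times
    out = img.copy()
    out[cnt > 0] = acc[cnt > 0] / cnt[cnt > 0]     # divide values by counts
    return out
```

For brevity this sketch clips the diffusion at the image border, whereas the embodiment instead extends the accumulation maps by the maximum blurring radius, as described in the steps below.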
Therefore, in this embodiment, the electronic device can simulate real lens blur through point diffusion processing, solving the problem that Gaussian blur and light-spot mapping are disjoint: the overall blurring effect of the image is harmonious, natural, and unified, the natural blurring and bright, clear light spots of single-lens-reflex (SLR) shooting can be simulated, and the texture of the blurring effect is greatly improved. In addition, because the electronic device performs point diffusion processing on each first pixel point in the background region, no light-source-point position identification is needed, so the degradation of the blurring effect caused by inaccurate light-source-point identification can be avoided, which helps provide a more real and natural blurring effect.
To more clearly illustrate the point diffusion processing, please refer to fig. 4. FIG. 4 is a flowchart of another image processing method according to an embodiment, which can be applied to any of the electronic devices described above. As shown in fig. 4, the method may include the following steps:
410. Acquire an aperture shape parameter, and acquire a blurring strength map and a weight map corresponding to the first image.
In some embodiments, the blurring strength map corresponding to the first image may include, in addition to the blurring radius corresponding to each first pixel point in the background region of the first image, a blurring radius corresponding to each third pixel point in the foreground region of the first image. That is, the blurring strength map corresponding to the first image may include the blurring radius corresponding to each pixel point of the whole first image. When the electronic device generates the blurring strength map corresponding to the first image, the blurring radius corresponding to each pixel point can be generated according to the depth information of each pixel point contained in the first image.
The weight map corresponding to the first image may include, in addition to the diffusion weight corresponding to each first pixel in the background region of the first image, a diffusion weight corresponding to each third pixel in the foreground region of the first image. That is, the weight map corresponding to the first image may include the diffusion weight corresponding to each pixel point of the full map of the first image. When the electronic device generates the weight map corresponding to the first image, the electronic device may generate the diffusion weight corresponding to each pixel according to the pixel value of each pixel included in the first image.
420. For each first pixel point in the background region of the first image, acquire the blurring radius corresponding to the first pixel point from the blurring strength map, and acquire the diffusion weight corresponding to the first pixel point from the weight map.
The electronic device can generate a background mask indicating a background region of the first image to identify first pixels contained in the background region of the first image.
The electronic device can traverse each first pixel point in the background mask, obtain a blurring radius corresponding to the currently visited first pixel point from the blurring strength map, and obtain a diffusion weight corresponding to the currently visited first pixel point from the weight map.
430. Determine a first aperture kernel corresponding to the first pixel point according to the blurring radius of the first pixel point and the aperture shape parameter.
The electronic device may generate a plurality of aperture kernels in advance and store them, or retrieve them from another device such as a server; different aperture kernels may be used to indicate different diffusion ranges. Different aperture kernels may correspond to different blurring radii, each aperture kernel may correspond to one aperture shape, and the same aperture shape may correspond to a plurality of aperture kernels with different blurring radii.
When the electronic device has obtained the aperture shape parameter and the blurring radius of the first pixel point, it can determine, from the plurality of aperture kernels, a first aperture kernel corresponding to the blurring radius of the first pixel point and to the aperture shape indicated by the aperture shape parameter; the first aperture kernel can be used to indicate the diffusion range of the first pixel point.
Optionally, the electronic device may generate the plurality of aperture kernels in advance from aperture images. The electronic device may obtain a plurality of aperture images with different aperture shapes; the aperture shapes may be, but are not limited to, a custom personalized shape, a heart, a star, a circle, or a ring.
The electronic device may scale each of the plurality of aperture images multiple times, and each scaling yields an aperture kernel corresponding to one blurring radius. Thus, after performing multiple scalings on each aperture image, the electronic device may obtain a plurality of aperture kernels corresponding to each aperture shape, each aperture kernel corresponding to one blurring radius. For example, the electronic device may sequentially scale an aperture image to sizes of 3×3, 4×4, ..., N×N.
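A hedged sketch of this kernel pre-generation, using OpenCV resizing; the odd kernel sizes (for a well-defined center) and the binarization threshold are assumptions of this sketch, not details given by the patent:

```python
import cv2
import numpy as np

def build_aperture_kernels(aperture_img: np.ndarray, max_radius: int) -> dict:
    """Scale one aperture-shape image into a kernel per blurring radius.

    aperture_img: grayscale image of the aperture shape (bright inside
    the shape). Returns {radius: (2r+1)x(2r+1) binary kernel}.
    """
    kernels = {}
    for r in range(1, max_radius + 1):
        size = 2 * r + 1                               # 3x3, 5x5, ..., NxN
        k = cv2.resize(aperture_img, (size, size),
                       interpolation=cv2.INTER_AREA)
        kernels[r] = (k > 127).astype(np.float32)      # assumed threshold
    return kernels
```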
440. Calculate the target pixel value of each second pixel point within the diffusion range corresponding to the first pixel point according to the pixel value of the first pixel point in the first image, the diffusion weight corresponding to the first pixel point, and the first aperture kernel.
The electronic device may multiply the pixel value of the first pixel point in the first image by the diffusion weight corresponding to the first pixel point, and then multiply by the first aperture kernel, so as to obtain the target pixel value of each second pixel point within the diffusion range corresponding to the first pixel point. That is, the pixel value of the first pixel point is diffused into the diffusion range centered on the first pixel point.
In some embodiments, the pixel value of the first pixel point may include a luminance component value and a chrominance component value. Correspondingly, the weight map may include a luminance weight map and a chrominance weight map, and the diffusion weight may include a luminance weight and a chrominance weight; the luminance weight map may include at least the luminance weight of each first pixel point in the background region, and the chrominance weight map may include at least the chrominance weight of each first pixel point in the background region.
When performing point diffusion processing on a first pixel point, the electronic device may diffuse its luminance component value and chrominance component value separately according to their corresponding diffusion weights, and then fuse them during superposition.
For example, the luminance component value of the first pixel point may be a Y component value of a Y channel, and the chrominance component values may include a U component value of a U channel and a V component value of a V channel.
For each first pixel point, the electronic device may multiply the Y component value of the first pixel point by the luminance weight of the first pixel point, and then by the first aperture kernel, to obtain the target Y component value, on the Y channel, of each second pixel point within the diffusion range corresponding to the first pixel point.
The electronic device may multiply the U component value of the first pixel point by the chrominance weight of the first pixel point, and then by the first aperture kernel, to obtain the target U component value, on the U channel, of each second pixel point within the diffusion range.
The electronic device may multiply the V component value of the first pixel point by the chrominance weight of the first pixel point, and then by the first aperture kernel, to obtain the target V component value, on the V channel, of each second pixel point within the diffusion range.
450. Calculate the number of diffusion times of each second pixel point within the diffusion range corresponding to the first pixel point according to the diffusion weight corresponding to the first pixel point and the first aperture kernel.
The electronic device may multiply the diffusion weight corresponding to the first pixel point by the first aperture kernel, so that the number of diffusion times of each second pixel point in the diffusion range corresponding to the first pixel point may be obtained.
After performing steps 440 to 450, the electronic device obtains the diffusion result corresponding to each first pixel point, and superimposes the diffusion results corresponding to the first pixel points in the background region so as to blur the background region.
In some embodiments, the electronic device may superimpose the diffusion results corresponding to the first pixel points in the background region by performing the following steps.
460. Superimpose the target pixel values of the second pixel points within the diffusion range corresponding to each first pixel point in the background region onto a pixel-value overlay map, and superimpose the diffusion times of the second pixel points within the diffusion range corresponding to each first pixel point onto a count map.
The pixel-value overlay map may be used to record the point-diffusion superposition result of the pixel values; its size may be greater than or equal to that of the first image, and its initial values may be 0. The number of color channels of the pixel-value overlay map may match that of the first image, and each channel may be used to record the point-diffusion superposition result of the component values of the same channel.
The count map may be used to record the point-diffusion superposition result of the diffusion times; its size may be greater than or equal to that of the first image, and its initial values may be 0.
Optionally, to retain the point diffusion effect of the edge pixel points of the first image, the edges of the pixel-value overlay map and the count map may be extended according to the maximum blurring radius in the blurring strength map, so that their image sizes are larger than that of the first image.
After calculating the diffusion result corresponding to a first pixel point, the electronic device may superimpose the target pixel value of each second pixel point included in the diffusion result onto the pixel-value overlay map, and superimpose the diffusion times of each second pixel point onto the count map. This is repeated until the diffusion results corresponding to all first pixel points in the background region have been superimposed onto the pixel-value overlay map and the count map.
470. Blur each first pixel point in the background region according to the pixel value of that first pixel point in the superimposed pixel-value overlay map and the total diffusion count of that first pixel point in the superimposed count map.
The pixel value of each first pixel point in the superimposed pixel-value overlay map may be obtained by diffusing and superimposing the pixel values of one or more first pixel points, and the total diffusion count of each first pixel point in the superimposed count map may likewise be obtained by superimposing the diffusion times of one or more first pixel points.
Therefore, for each first pixel point, the electronic device may divide its superimposed pixel value by its total diffusion count to obtain the blurring rendering result for that first pixel point.
Illustratively, the blurred pixel value of the first pixel point may be calculated by the following formula:

$$\mathrm{Bokeh} = \frac{\sum\big(\mathrm{kernel} \times \mathrm{weight} \times \mathrm{input}\big)}{\sum\big(\mathrm{kernel} \times \mathrm{weight}\big)} \tag{1}$$

where Bokeh represents the pixel value of the first pixel point after blurring, kernel represents the first aperture kernel, weight represents the diffusion weight corresponding to the first pixel point, input represents the pixel value of the first pixel point in the first image (i.e., the pixel value before blurring), and the sums run over all diffusion results covering the first pixel point.
Therefore, in the foregoing embodiment, the background mask may be used to indicate the background area of the first image, so that the electronic device may perform point diffusion processing on each first pixel point included in the background area, so that the bright points in the background area form light spots, and the dark points form a circle of confusion, thereby simulating the blur effect of a real lens more truly and naturally. In addition, different aperture shapes can be simulated through different aperture kernels, so that the blurred light spots can be in different shapes.
In the foregoing embodiment, the background mask may be used to indicate a background region of the first image, and may be used to indicate a range of pixel points that need to be subjected to the point diffusion processing, so as to avoid falsely blurring a foreground region of the first image.
In some embodiments, the electronic device can generate the background mask using the blurring strength map of the first image.
Referring to fig. 5, fig. 5 is a flowchart illustrating a method for generating a background mask according to an embodiment, which is applicable to the electronic device. As shown in fig. 5, the method may include the steps of:
510. Remove the portrait region from the first image according to the blurring strength map of the first image.
The foreground region of the first image may include a portrait region, and the portrait region is usually not blurred, so the region occupied by pixel points whose blurring radius is 0 in the blurring strength map corresponds to the portrait region. By removing the pixel points whose blurring radius is 0, the electronic device can remove the portrait region from the first image.
The blurring strength map may be generated according to depth information of each pixel point of the first image full map. The pixel points with smaller depth can be considered to belong to the portrait area, so that the blurring radius corresponding to the pixel points with smaller depth can be set to be 0. Therefore, the blurring strength map of the first image can help to identify the portrait area to a certain extent, and the electronic device can remove the pixel point with the blurring radius of 0 in the first image to remove the portrait area.
Optionally, before removing the portrait region from the first image, the electronic device may expand the portrait region in the blurring strength map of the first image by eroding the blurring strength map. The erosion radius corresponding to each pixel point in the blurring strength map may be positively correlated with the blurring radius corresponding to that pixel point; that is, the larger the blurring radius corresponding to a pixel point in the blurring strength map, the larger its erosion radius.
Compared with the blurring strength map before erosion, the eroded blurring strength map may contain more pixel points with a blurring radius of 0. The electronic device may remove the portrait region from the first image according to the blurring strength map with the expanded portrait region; for example, it may remove the pixel points whose blurring radius is 0 in the eroded blurring strength map, so as to remove the portrait region from the first image.
520. Remove the hair region, according to the hair mask, from the first image from which the portrait region has been removed, to obtain the background mask of the first image.
The foreground region of the first image may also include a hair region. The electronic device may perform hair identification on the first image, for example, identify a confidence that each pixel point in the first image belongs to hair by a deep learning method, thereby generating a hair mask.
The hair mask may be used to indicate a hair region in the first image, the hair mask may include a confidence that individual pixel points in the first image are identified as hair, and pixel points with a confidence above a confidence threshold may be used to indicate a hair region in the first image. The electronic device may subtract the hair region indicated by the hair mask from the first image with the portrait region removed, thereby removing the hair region, resulting in a background mask for the first image.
Optionally, although methods such as deep learning can identify the hair region in the first image, some error may remain. To reduce the impact of hair-recognition errors, the electronic device may dilate the hair mask of the first image before removing the hair region. The electronic device may measure the area of the hair outer contour in the hair mask, calculate a dilation radius according to the area proportion of the hair outer contour, and increase the dilation radius according to the blurring radius of the hair-contour pixel points. The area of the hair outer contour indicates the number of pixel points it contains, and the increase of the dilation radius may be positively correlated with the blurring radius; that is, the larger the blurring radius corresponding to a pixel point in the hair mask, the larger its dilation radius.
The electronic device can dilate the hair mask according to the dilation radius of the pixel points to expand the hair region. In addition, the electronic device can raise the confidence of the pixel points in the hair mask to increase the number of pixel points identified as hair, thereby further expanding the hair region.
The electronic device may subtract the dilated hair region of the hair mask from the first image from which the portrait region has been removed, to obtain the background mask of the first image. This reduces the color leakage (color bleeding) problem caused by hair strands being mistakenly blurred and diffused into halos that form color bands.
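A hedged sketch of the background-mask construction described in steps 510 to 520; the fixed structuring-element sizes and the confidence threshold are simplifying assumptions (the embodiment varies the erosion and dilation radii with each pixel's blurring radius):

```python
import cv2
import numpy as np

def build_background_mask(blur_radius_map, hair_confidence,
                          conf_thresh=0.5, erode_px=5, dilate_px=7):
    """Combine the blurring strength map and hair mask into a background mask."""
    # Erosion takes local minima, so the radius-0 (portrait) area grows.
    radius = cv2.erode(blur_radius_map.astype(np.float32),
                       np.ones((erode_px, erode_px), np.uint8))
    not_portrait = radius > 0
    # Dilate the thresholded hair mask to expand the hair region.
    hair = (hair_confidence > conf_thresh).astype(np.uint8)
    hair = cv2.dilate(hair, np.ones((dilate_px, dilate_px), np.uint8))
    return not_portrait & (hair == 0)  # True only for background pixel points
```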
In some embodiments, before generating the blurring strength map of the first image, the electronic device may further perform inpainting (Inpaint) filling on the portrait region and the hair-contour region of the first image. That is, the portrait region and the hair-contour region can be filled with pixel points from the adjacent background region, to remove light spots at the portrait and hair edges. Generating the blurring strength map after Inpaint filling of the first image, and generating the background mask from that map, can mitigate the color-leakage problem while keeping the processing of the first image consistent with the gradient trends, texture features, and color features of the original image, making the blurring effect more real and natural.
Therefore, in the foregoing embodiment, the electronic device may process the portrait area and the hair area of the first image to more accurately identify the background area of the first image, so that the problem of color leakage caused by mistakenly blurring pixel points of portrait edges and hair edges may be reduced, and the blurring effect of the first image is more real and natural.
Further, as described in the foregoing embodiments, the point diffusion processing blurs high-brightness points in the first image into light spots by diffusion. If the overall brightness of the background region of the first image is high, the light spots in the blurring result may be over-diffused, making the blurring effect unnatural and unrealistic.
Thus, in some embodiments, the electronic device may preprocess the first image, including: reducing the luminance component value of each first pixel point in the background region of the first image. The amount by which the luminance component value of a first pixel point is reduced may be positively correlated with the brightness of the scene in which the first image was shot; that is, the brighter the shooting scene of the first image, the more the luminance component values of the first pixel points are reduced.
For example, referring to fig. 6, fig. 6 is a flowchart illustrating a method for dimming a background area according to an embodiment, where the method is applicable to the electronic device. As shown in fig. 6, the following steps may be included:
610. Calculate a Gamma parameter for brightness dimming.
The Gamma parameter can be used for Gamma correction and can be calculated by the following formula:

(Formula (2), preserved only as an image in the source: Gamma_1 is computed from Contrast and iso.)

where Gamma_1 may be the Gamma parameter for brightness dimming, Contrast may be an input contrast parameter, and iso may be the sensitivity parameter of the imaging device that captured the first image.
620. Perform curve-mapping stretching on the luminance component value of each first pixel point based on the calculated Gamma parameter, so as to reduce the luminance component value. The stretched luminance component value of each first pixel point in the background region of the first image may be calculated with a Gamma-correction mapping of the form:

$$y_1 = x_1^{\,\text{Gamma\_1}} \tag{3}$$

where y1 may represent the luminance component value after stretching (with values normalized to [0, 1]), and x1 may represent the luminance component value before stretching, i.e., the luminance component value of the first pixel point in the first image.
Through formulas (2) and (3), the electronic device can reduce the luminance component value of each first pixel point in the background region of the first image, and the brighter the shooting scene, the more the luminance component values are reduced. This darkens the background, highlights the light sources, and improves contrast.
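A minimal sketch of this dimming step, assuming formula (3) is a standard Gamma curve (how Gamma_1 derives from Contrast and iso is given by formula (2), which survives only as an image):

```python
import numpy as np

def dim_background_luma(y_channel: np.ndarray, gamma_1: float) -> np.ndarray:
    """Formula (3)-style Gamma dimming of the Y channel.

    y_channel: uint8 luminance values. A gamma_1 > 1 darkens mid-tones
    while leaving pure black and pure white fixed.
    """
    x = y_channel.astype(np.float32) / 255.0
    y = np.power(x, gamma_1)          # y1 = x1 ** Gamma_1
    return (y * 255.0).round().astype(np.uint8)
```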
In some embodiments, preprocessing the first image by the electronic device may further include: increasing the chrominance component value of each first pixel point in the highlight regions of the background region of the first image, so as to enhance the color of the highlight regions and strengthen the color at the light source centers. The luminance component value of each first pixel point included in a highlight region is above a luminance threshold, which can be set according to actual service requirements and is not specifically limited; that is, a highlight region is a relatively bright image region within the background region. In addition, the amount by which the chrominance components of the first pixel points in the highlight regions are increased may be negatively correlated with the brightness of the shooting scene of the first image; that is, the darker the shooting scene, the more the chrominance component values of the first pixel points in the highlight regions are increased.
For example, referring to fig. 7, fig. 7 is a flowchart illustrating a method for color enhancement of a highlight region according to an embodiment, where the method is applicable to the electronic device. As shown in fig. 7, the following steps may be included:
710. Blur the chrominance component values of the first image.
The chrominance component values of the first image may include a U component value and a V component value, and the electronic device may blur the U component values of the whole first image and the V component values of the whole first image separately. The blurring may include, but is not limited to: N×N block mean blur, median blur, and the like.
Generally, because the energy at the center of a light source is large, it may exceed the load limit of the photosensitive element (sensor) of the imaging device, so the color at the light source center in the captured first image is relatively weak. Blurring the chrominance component values of the first image reinforces the chrominance component values of weakly colored regions with those of their neighborhoods, compensating for this limitation of the sensor.
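A short sketch of step 710; the 9×9 kernel size is an assumed value, since the embodiment only requires some N×N mean blur or a median blur:

```python
import cv2

def blur_chroma(u_channel, v_channel, ksize: int = 9):
    """Box-blur the U and V channels of the whole image separately."""
    u_blur = cv2.blur(u_channel, (ksize, ksize))
    v_blur = cv2.blur(v_channel, (ksize, ksize))
    return u_blur, v_blur
```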
720. Calculate a Gamma parameter for color enhancement according to the saturation parameter, and color-enhance the whole first image using the calculated Gamma parameter to obtain a color-enhanced first image.
The electronic device may perform curve-mapping stretching on the chrominance component values of the whole first image using the Gamma parameter for color enhancement, so as to color-enhance the whole first image.
The Gamma parameter for color enhancement can be calculated by the following formula:

(Formula (4), preserved only as an image in the source: Gamma_2 is computed from Saturation and iso.)

where Gamma_2 may be the Gamma parameter for color enhancement, Saturation may be an input saturation parameter, and iso may be the sensitivity parameter of the imaging device that captured the first image.
The curve-mapping stretching of the chrominance component values of the whole first image can be calculated by the following formula:

(Formula (5), preserved only as an image in the source: LUT(i) gives the stretched value for each chrominance component value i.)

where i may be a chrominance component value before stretching, i.e., the chrominance component value of the first pixel point in the first image, and LUT(i) may be the corresponding stretched chrominance component value.
Further, after curve-mapping stretching of the chrominance component values of the whole first image, the electronic device refills the achromatic regions, whose chrominance component value before stretching is 128, with their original values. That is, for achromatic regions, the chrominance component value of 128 is kept unchanged before and after stretching.
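Since formula (5) survives only as an image, the following sketch assumes a symmetric Gamma stretch around the neutral chrominance value 128; the assumed form satisfies the one constraint the text does give, that 128 maps to itself:

```python
import numpy as np

def chroma_enhance_lut(gamma_2: float) -> np.ndarray:
    """Build a 256-entry chroma LUT (assumed form of formula (5)).

    Stretches chroma symmetrically around the achromatic value 128 with
    a Gamma curve; entry 128 maps to itself, as the embodiment requires.
    """
    i = np.arange(256, dtype=np.float32)
    d = (i - 128.0) / 128.0                          # signed, in [-1, 1)
    out = 128.0 + np.sign(d) * np.abs(d) ** gamma_2 * 128.0
    out[128] = 128.0                                 # keep achromatic pixels
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage sketch on a uint8 chroma channel: u_enh = chroma_enhance_lut(g2)[u]
```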
730. Calculate a highlight mask of the first image.
The highlight mask of the first image may be used to indicate highlight regions in the first image, which may include highlight regions in the background region. The electronic device may perform stretch mapping on the brightness component values of the whole first image to obtain the highlight mask of the first image, where the highlight mask may include the stretched brightness component of each pixel point in the first image. The stretch mapping of the luminance component values may be calculated by the following formula (6):
[Formula (6) appears only as an image in the original and is not reproduced here.]

where y2 may represent the luminance component value after stretching and x2 the luminance component value before stretching; the luminance component value before stretching refers to the luminance component value of a first pixel point in the first image.
As can be seen from formula (6), if the luminance component value before stretching is low, the stretched luminance component value may be 0; the higher the value before stretching, the higher the stretched value. The highlight mask obtained by stretching the luminance component values can therefore indicate the highlight regions in which the luminance component values of pixel points in the first image are higher than the luminance threshold.
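Formula (6) is likewise available only as an image; the sketch below assumes a simple threshold-and-normalize stretch that matches the described behavior (low luminance maps to 0, higher luminance maps higher). The threshold value is illustrative.

import numpy as np

def highlight_mask(y: np.ndarray, thresh: float = 200.0) -> np.ndarray:
    # Step 730 sketch: luminance below `thresh` maps to 0, above it ramps to 1.
    y = y.astype(np.float32)
    return np.clip((y - thresh) / (255.0 - thresh), 0.0, 1.0)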
740. Color-enhance the highlight area of the first image according to the color-enhanced first image and the highlight mask of the first image.
The electronic device may fuse the highlight mask with the color-enhanced first image, thereby increasing at least the chrominance component value of each first pixel point in the highlight area of the background region. In this way the highlight area is color-enhanced and the color at the light source center is strengthened.
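A minimal sketch of the fusion in step 740, assuming a plain mask-guided alpha blend of the original and color-enhanced chroma planes; the disclosure does not spell out the fusion operator, so this is only one plausible reading.

import numpy as np

def enhance_highlights(uv: np.ndarray, uv_enhanced: np.ndarray,
                       mask: np.ndarray) -> np.ndarray:
    # Step 740 sketch: inside highlights the enhanced chroma dominates,
    # elsewhere the original chroma is kept, so only light sources gain color.
    m = mask[..., None] if uv.ndim == 3 else mask
    out = uv.astype(np.float32) * (1.0 - m) + uv_enhanced.astype(np.float32) * m
    return np.clip(out, 0, 255).astype(np.uint8)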
As can be seen, in the foregoing embodiments the electronic device may preprocess the first image: correcting the luminance component values of the first image improves the overall contrast of the image, and correcting the chrominance component values improves the saturation at the light sources. When the first image is subsequently blurred, excessive light spot diffusion can thus be avoided as far as possible and the color at the light source centers improved, making the blurring effect more natural and real.
In the foregoing embodiments, the electronic device may perform point diffusion processing on the background region of the first image according to the aperture shape and to per-pixel diffusion weights and blurring radii, so that the blurring of the first image can simulate real lens blur. The diffusion weight is one of the important factors in the point diffusion processing; the following describes embodiments in which the electronic device obtains the weight map.
In an embodiment, the electronic device may calculate, according to the brightness component value of each first pixel point in the background region of the first image, a brightness weight corresponding to each first pixel point in the background region to obtain a brightness weight map.
The luminance weight map may include luminance weights corresponding to the respective first pixel points in the background region, and the luminance weights corresponding to the respective first pixel points may be in a positive correlation with the luminance component values of the respective first pixel points. That is, the larger the brightness component value of the first pixel point is, the larger the brightness weight corresponding to the first pixel point is.
The electronic device may perform one or more kinds of processing on the luminance component value of each first pixel point in the background region, and use the processed luminance component value as the luminance weight corresponding to each first pixel point.
In one embodiment, the electronic device may calculate the chromaticity weight corresponding to each first pixel point in the background region according to the luminance component value and the chrominance component value of each first pixel point in the background region of the first image, so as to obtain a chromaticity weight map.
The chromaticity weight map may include chromaticity weights corresponding to the first pixel points in the background region, the chromaticity weights corresponding to the first pixel points may be in a positive correlation with the luminance component values of the first pixel points, and the chromaticity weights corresponding to the first pixel points may also be in a positive correlation with the chromaticity component values of the first pixel points. That is, the larger the luminance component value and the chrominance component value of the first pixel point are, the larger the chrominance weight corresponding to the first pixel point is.
In some embodiments, the electronic device may directly superimpose the chrominance component value and the luminance component value of each first pixel point in the background region and use the superimposed component value as the chromaticity weight corresponding to each first pixel point. Alternatively, the electronic device may first perform one or more kinds of processing on the chrominance component value and the luminance component value of each first pixel point, superimpose the processed chrominance and/or luminance component values, and use the resulting superimposed component value as the chromaticity weight corresponding to each first pixel point.
For example, if the first image includes Y, U, V color channels, the electronic device may calculate a luminance weight corresponding to each first pixel point according to a Y component value of each first pixel point in the background region in the Y channel; and the electronic equipment can also calculate the corresponding chromaticity weight of each first pixel point according to the Y component value of each first pixel point in the background area, the U component value of each first pixel point in the U channel and the V component value of each first pixel point in the V channel.
For example, the electronic device may also acquire a second image with the same shooting content as the first image but a lower exposure value; that is, the second image may be a dark frame of the first image. The electronic device may take the pixel value of a first pixel point in the second image as that pixel point's luminance component value. Because the overall energy of the second image is low, it directly reflects the energy distribution of the shooting scene; using its pixel values as the luminance component values when generating the diffusion weights can therefore optimize the results of the point diffusion processing and make the blurring effect more natural and real.
The luminance weight map and the chrominance weight map are described separately below.
Referring to fig. 8, fig. 8 is a flowchart illustrating a method for obtaining a luminance weight graph according to an embodiment, where the method is applicable to the electronic device. As shown in fig. 8, the method may include the steps of:
810. Set to zero the luminance weight corresponding to each first pixel point in the background region of the first image whose luminance component value is smaller than the luminance threshold.
820. Calculate the luminance weight corresponding to each first pixel point included in a highlight region according to the area of that highlight region within the background region of the first image.
The brightness component values of the first pixels included in the highlight area are all greater than or equal to the brightness threshold, and the brightness threshold can be set according to actual business requirements, and is not limited specifically. The area of the highlight region can be represented by the number of first pixels included in the highlight region.
Alternatively, if the area of the highlight area is smaller than the area threshold, the electronic device may set the luminance weight corresponding to each first pixel point included in the highlight area to zero.
Optionally, if the area of the highlight area is greater than or equal to the area threshold, the electronic device may decrease the luminance component value of each first pixel point included in the highlight area according to the area of the highlight area, and use the decreased luminance component value as the luminance weight corresponding to each first pixel point included in the highlight area.
The area threshold may be set according to actual service requirements, and is not particularly limited. The reduction amount of the brightness component value of each first pixel point included in the highlight area has a positive correlation with the area of the highlight area. That is, the larger the area of the highlight region is, the more the luminance component value of the first pixel in the highlight region is truncated (decreased). By reducing the brightness component value of the large-area high-light area, the energy of the dense light source area in the first image can be reduced, so that the problem of overexposure caused by light spot stacking after point diffusion and superposition is avoided.
In some embodiments, the background region of the first image may include two or more highlight regions that are not interconnected. Since the decrease in the luminance component value is tied to the highlight region area, the electronic device may, before performing the aforementioned step 820, dilate each highlight region so that two highlight regions a short distance apart connect into a single highlight region. After dilating the highlight regions, the electronic device may count the area of each highlight region and perform the aforementioned step 820 to calculate the luminance weight corresponding to each first pixel point included in the highlight regions.
Therefore, in the foregoing embodiments, the electronic device can reduce the energy in the dense light source region, and avoid the light spot stack overexposure.
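The sketch below strings steps 810 and 820 together with the dilation of nearby highlight regions folded in; the luminance threshold, area threshold, and the exact attenuation law are illustrative assumptions rather than values from this disclosure.

import cv2
import numpy as np

def luminance_weight_map(y: np.ndarray, thresh: int = 200,
                         area_thresh: int = 500, k: float = 0.1) -> np.ndarray:
    # Step 810: pixels below the luminance threshold keep zero weight.
    weights = np.zeros(y.shape, np.float32)
    highlight = (y >= thresh).astype(np.uint8)
    # Dilate so highlight regions a short distance apart become connected.
    highlight = cv2.dilate(highlight, np.ones((9, 9), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(highlight, connectivity=8)
    for i in range(1, n):                       # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        region = (labels == i) & (y >= thresh)  # weight only true highlight pixels
        if area < area_thresh:
            continue                            # small regions keep zero weight (step 820)
        cut = min(k * area / area_thresh, 1.0) * 64.0   # larger area, larger cut
        weights[region] = np.maximum(y[region].astype(np.float32) - cut, 0.0)
    return weights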
Referring to fig. 9, fig. 9 is a flowchart illustrating a method for obtaining a chromaticity weight diagram according to an embodiment, where the method is applicable to the electronic device. As shown in fig. 9, the method may include the steps of:
910. Calculate the chroma value of each first pixel point according to the chrominance component value of each first pixel point in the background region.
If the chrominance component values include a U component value of the U channel and a V component value of the V channel, the electronic device may calculate the chrominance value according to the U component value and the V component value first when superimposing the luminance component value and the chrominance component value of the first pixel point.
Illustratively, the chroma value may be calculated by the following formula (7):

[Formula (7) appears only as an image in the original and is not reproduced here.]

where chroma may be the chroma value, U the U component value, and V the V component value.
920. Superimpose the luminance component value of each first pixel point in the background region with the chroma value of each first pixel point to obtain a component superposition value of each first pixel point in the background region.
The electronic device may superimpose the chrominance value and the luminance component value of each first pixel point in the background region, thereby obtaining a component superimposed value of each first pixel point.
Optionally, before superimposing the luminance component value of each first pixel point in the background region with its chroma value, the electronic device may first perform chroma stretching on the chroma value of each first pixel point. After the chroma stretching, the electronic device may superimpose the luminance component value of each first pixel point in the background region with the stretched chroma value to obtain the component superposition value of each first pixel point. The chroma stretching variation of the chroma value of any first pixel point may be positively correlated with the luminance component value of that pixel point; that is, the brighter the first pixel point, the more its chroma is stretched.
For example, the electronic device may superimpose the luminance component value of each first pixel point in the background region with the stretched chroma value according to the following formula (8):

[Formula (8) appears only as an image in the original and is not reproduced here.]

where y3 may represent the component superposition value obtained by superimposing the luminance component value with the stretched chroma value, x3 may represent the luminance component value, and the remaining term may represent the stretched chroma value.
930. Determine the chromaticity weight corresponding to each first pixel point in the background region according to the component superposition value of each first pixel point in the background region.
Optionally, the electronic device may directly determine the component superposition value of each first pixel point in the background region as the chromaticity weight corresponding to each first pixel point in the background region.
Optionally, the foregoing step 930 executed by the electronic device may further include the following steps:
the electronic equipment stretches the brightness of the brightness component value of each first pixel point in the background area to obtain the stretched brightness component value of each first pixel point in the background area.
The brightness stretching variation of each first pixel point in the background region is positively correlated with that pixel point's brightness component value before stretching. That is, the brighter a first pixel point in the background region is, the more its brightness is stretched.
For example, the electronic device may perform luminance stretching on the luminance component value of each first pixel point in the background area through the foregoing formula (6). That is to say, after performing luminance stretching on each first pixel, the electronic device may obtain a highlight mask of the first image, where the highlight mask may at least include a luminance component value of each stretched first pixel in the background area, and may be used to indicate the highlight area in the background area.
For each first pixel point in the background region, the electronic device may compare the component superposition value of the first pixel point with the stretched luminance component value of the first pixel point, and determine the maximum value obtained by the comparison as the chromaticity weight corresponding to the first pixel point.
The electronic equipment determines the maximum value obtained by comparison as the corresponding chromaticity weight of the first pixel point, so that the corresponding chromaticity weight of the first pixel point with higher chromaticity and/or higher brightness can be enhanced, and the light spots can spread more colors, thereby enhancing the light spot color in the blurring effect.
As can be seen, in the foregoing embodiment, the chromaticity weight corresponding to the first pixel point with a higher chromaticity component value is higher, and the chromaticity weight corresponding to the first pixel point in the highlight area is also higher, so that the light spot with higher brightness can be diffused into more colors, which is beneficial to enhancing the light spot color in the blurring effect.
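A compact sketch of steps 910 to 930 under two stated assumptions: the chroma value of formula (7) is taken to be the Euclidean magnitude of the centered U/V components, and the chroma stretch feeding formula (8) is taken to grow linearly with luminance. Only the max-comparison against the stretched luminance follows the text directly; thresholds are illustrative.

import numpy as np

def chroma_weight_map(y: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    y = y.astype(np.float32)
    # Step 910 (assumed form of formula (7)): chroma magnitude of centered U/V.
    chroma = np.hypot(u.astype(np.float32) - 128.0, v.astype(np.float32) - 128.0)
    # Chroma stretch: brighter pixels get their chroma stretched more (assumed linear).
    stretched_chroma = chroma * (1.0 + y / 255.0)
    # Step 920 (assumed form of formula (8)): superimpose luminance and stretched chroma.
    superposed = y + stretched_chroma
    # Step 930: compare with the stretched luminance (an illustrative ramp scaled
    # back to [0, 255]) and keep the maximum as the chromaticity weight.
    stretched_y = np.clip((y - 200.0) / 55.0, 0.0, 1.0) * 255.0
    return np.maximum(superposed, stretched_y)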
In some embodiments, after obtaining the diffusion weight corresponding to each first pixel point of the background region from the weight map and before performing the point diffusion processing on the first pixel point by using the corresponding diffusion weight, the electronic device may map the diffusion weight corresponding to the first pixel point according to the weight mapping curve to obtain the diffusion weight after the first pixel point is mapped.
The electronic device can use the mapped diffusion weight of the first pixel point to perform the point diffusion processing on that pixel point. That is to say, the electronic device may calculate the target pixel value of each second pixel point in the diffusion range corresponding to the first pixel point according to the pixel value of the first pixel point in the first image, the mapped diffusion weight of the first pixel point, and the first aperture kernel. The electronic device may also calculate the diffusion times of each second pixel point in the diffusion range corresponding to the first pixel point according to the mapped diffusion weight of the first pixel point and the first aperture kernel.
If the diffusion weights include a luminance weight and a chrominance weight, the electronic device may map the luminance weight and the chrominance weight according to a weight mapping curve, respectively.
The weight mapping curve can be obtained by fusing the first curve and the second curve. The first curve may be a global smooth curve, and the second curve may be a local smooth curve stretched around the first luminance level. The first brightness level may be set according to actual service requirements, and may be any one of 90% to 100% brightness levels, for example.
Illustratively, the first curve may be represented by the following formula (9):

[Formula (9) appears only as an image in the original and is not reproduced here.]

where y4 may represent the diffusion weight after mapping and x4 the diffusion weight before mapping.
As can be seen from equation (9), the first curve is a globally smooth transition curve between 0 and 255.
Illustratively, the second curve may be represented by the following formula (10):

[Formula (10) appears only as an image in the original and is not reproduced here.]

where y5 may represent the diffusion weight after mapping and x5 the diffusion weight before mapping.
As can be seen from equation (10), the second curve is a locally smooth sigmoid curve centered at 95%.
The electronic device may fuse the first curve and the second curve to obtain a weight mapping curve.
For example, the weight mapping curve may be represented by the following formula:
y6 = α·y4 + (1 − α)·y5, α ∈ [0.2, 0.4]; formula (11)
Where y6 may represent a weight mapping curve, y4 may represent a first curve shown in equation (9), and y5 may represent a second curve shown in equation (10).
Referring to fig. 10, fig. 10 is a diagram illustrating an example of a first curve, a second curve and a weight mapping curve according to an embodiment of the disclosure. As shown in fig. 10, the first curve 1010 is globally smooth and the second curve 1020 is locally smooth; the weight mapping curve 1030 merges the characteristics of both, so it conforms to the light source energy distribution and transitions naturally, while staggering into brightness steps at different brightness levels. Diffusion weights corresponding to first pixel points whose luminance component values lie at different brightness levels therefore differ noticeably, which helps optimize the overall layering and clarity of the light spots in the blurring effect.
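Since formulas (9) and (10) are available only as images, the following sketch assumes a smoothstep for the globally smooth first curve and a logistic sigmoid centered at the 95% brightness level for the locally smooth second curve; only the fusion step, formula (11), is taken directly from the text.

import numpy as np

def weight_mapping_curve(x: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    # x holds diffusion weights in [0, 255]; alpha lies in [0.2, 0.4].
    t = x.astype(np.float32) / 255.0
    y4 = (3.0 * t ** 2 - 2.0 * t ** 3) * 255.0        # assumed global smooth curve
    y5 = 255.0 / (1.0 + np.exp(-40.0 * (t - 0.95)))   # assumed local sigmoid at 95%
    return alpha * y4 + (1.0 - alpha) * y5            # formula (11)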
In some embodiments, the blurring effect requires the light spots to be brightened further, which may require further enhancing the diffusion weights corresponding to highlighted pixel points. The electronic device may optimize the weight mapping curve by raising the weights in its latter half.
For example, the optimized weight mapping curve can be represented by the following formulas:

[Formula (12) appears only as an image in the original and is not reproduced here.]

y7 = y7(L − (255 − x)), x ∈ (255 − L, 255); formula (13)

where y7 may represent the optimized weight mapping curve, Brightness may represent an input brightness parameter that can be set according to actual service requirements, and Aperture size may represent the blurring radius of the first pixel point.
Using the optimized weight mapping curve for diffusion weight mapping further brightens the light spots in the blurring effect and improves the overall layering and clarity of the light spots.
Referring to fig. 11, fig. 11 is a flowchart illustrating an image processing method according to an embodiment, where the method is applicable to the electronic device. As shown in fig. 11, the method may include the steps of:
1110. Generate a blurring strength graph corresponding to the first image, and generate a background mask of the first image according to the blurring strength graph.
The electronic device can generate the blurring strength graph corresponding to the first image according to the depth information of the whole first image, and can generate the background mask using the blurring strength graph corresponding to the first image.
1120. The first image is pre-processed.
The electronic device pre-processes the first image, which may include: reducing the brightness component value of each first pixel point in the first image background area; and improving the chromaticity component value of each first pixel point in the highlight area in the first image background area so as to enhance the color of the highlight area.
1130. Generate a weight map corresponding to the first image.
The electronic device can calculate the diffusion weight of each pixel point according to the brightness component value and the chromaticity component value of each pixel point of the first image full map, so that a weight map corresponding to the first image is generated.
The weight map may include a luminance weight map and a chrominance weight map. The luminance weight map can be obtained by calculating the luminance component values of all the pixel points of the first image overall map, and the chrominance weight map can be obtained by calculating the luminance component values and the chrominance component values of all the pixel points of the first image overall map.
When generating the brightness weight map, the electronic device may delete energy of the large-area highlight region, thereby reducing the brightness weight of the first pixel point in the large-area highlight region.
In generating the chromaticity weight map, the electronic device may increase the chromaticity weight corresponding to the first pixel point having the higher luminance component value and/or the higher chromaticity component value.
1140. Traverse each first pixel point in the background region of the first image indicated by the background mask, and acquire the diffusion weight and the blurring radius corresponding to the first pixel point from the weight graph and the blurring strength graph corresponding to the first image.
1150. Map the diffusion weight corresponding to the first pixel point according to the weight mapping curve to obtain the mapped diffusion weight of the first pixel point.
1160. Perform point diffusion processing on each first pixel point in the background region of the first image indicated by the background mask according to the aperture shape parameter and the blurring strength graph and weight graph corresponding to the first image, obtaining the diffusion result corresponding to each first pixel point in the background region.
1170. Superimpose the diffusion results corresponding to the first pixel points in the background region, and blur the background region according to the superposition result.
As can be seen, in the foregoing embodiment the electronic device may generate a background mask to indicate the range of the background region that needs point diffusion processing, so that point diffusion can be performed on each first pixel point in the background region: bright points form bright light spots and dark points form dim circles of confusion. Before the point diffusion processing, the electronic device can preprocess the first image, reducing the brightness of the background region to improve the overall contrast of the first image while color-enhancing the highlight areas to improve the saturation at the light sources. The diffusion weight is an important factor in the point diffusion processing: when generating the luminance weight map, the electronic device cuts the energy of large highlight areas and reduces the luminance weights of the first pixel points within them, which avoids overexposure from stacked light spots; when generating the chromaticity weight map, the electronic device may increase the chromaticity weights of first pixel points with higher luminance and/or chrominance component values, so the light spots diffuse more color. The electronic device can further map the diffusion weights obtained from the weight map through a weight mapping curve that conforms to the light source energy distribution and transitions naturally, optimizing the overall layering and clarity of the light spots in the blurring effect. The electronic device can also simulate the aperture shapes of different lenses with different aperture shape parameters, making the light spot shapes in the blurring effect more varied.
Referring to fig. 12, fig. 12 is a schematic structural diagram of an image processing apparatus according to an embodiment. The image processing apparatus shown in fig. 12 can be applied to any of the electronic devices described above. As shown in fig. 12, the image processing apparatus 1200 may include: an acquisition module 1210, a diffusion module 1220, and a superposition blurring module 1230.
An obtaining module 1210, configured to obtain aperture shape parameters, and obtain a blurring strength map and a weight map corresponding to the first image, where the blurring strength map at least includes blurring radii corresponding to first pixels in a background region of the first image, and the weight map at least includes diffusion weights corresponding to the first pixels in the background region of the first image;
the diffusion module 1220 is configured to perform point diffusion processing on each first pixel point in the background region of the first image according to the aperture shape parameter, the blurring strength map, and the weight map, so as to obtain a diffusion result corresponding to each first pixel point;
and a superposition blurring module 1230, configured to superimpose the diffusion results corresponding to each first pixel point in the background region, and blur the background region according to the superposition result.
In one embodiment, the diffusion result corresponding to each first pixel point in the background region includes: the diffusion range corresponding to the first pixel point, the target pixel value of each second pixel point in the diffusion range, and the diffusion times of each second pixel point in the diffusion range; the diffusion range corresponding to the first pixel point takes the first pixel point as the center, the radius of a circumscribed circle of the diffusion range is the virtual radius corresponding to the first pixel point, and the outer contour of the diffusion range is the aperture shape indicated by the aperture shape parameters.
In one embodiment, the diffusion module 1220 may include: a first acquisition unit, a first diffusion unit, and a second diffusion unit.
The first acquisition unit may be configured to acquire, for each first pixel point in the background region of the first image, the blurring radius corresponding to the first pixel point from the blurring strength map and the diffusion weight corresponding to the first pixel point from the weight map; and to determine a first aperture kernel corresponding to the first pixel point according to the blurring radius corresponding to the first pixel point and the aperture shape parameter, the first aperture kernel being used to indicate the diffusion range of the first pixel point;
the first diffusion unit can be used for calculating the target pixel value of each second pixel point in the diffusion range corresponding to the first pixel point according to the pixel value of the first pixel point in the first image, the diffusion weight corresponding to the first pixel point and the first aperture kernel;
and the second diffusion unit can be used for calculating the diffusion times of each second pixel point in the diffusion range corresponding to the first pixel point according to the diffusion weight corresponding to the first pixel point and the first aperture kernel.
In one embodiment, the superposition blurring module 1230 may include: a superposition unit and a blurring unit.
The superposition unit can be used for superposing the target pixel value of each second pixel point contained in the diffusion range corresponding to each first pixel point in the background region to the pixel value superposition graph and superposing the diffusion times of each second pixel point contained in the diffusion range corresponding to each first pixel point in the background region to the time statistic graph;
and the blurring unit may be configured to blur each first pixel point according to the superimposed pixel value of each first pixel point in the background region in the pixel value superposition graph and the total diffusion count of each first pixel point in the superimposed count statistics graph.
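To make the scatter-and-normalize flow of these units concrete, the following single-channel sketch diffuses each background pixel over its aperture kernel footprint and then divides the pixel value superposition graph by the count statistics graph. The kernel lookup structure and all names are our assumptions; a production implementation would vectorize this loop.

import numpy as np

def point_diffuse(img, bg_mask, radius_map, weight_map, aperture_kernels):
    # aperture_kernels[r] is assumed to be a binary (2r+1) x (2r+1) array whose
    # non-zero cells trace the aperture shape for blurring radius r.
    h, w = img.shape
    value_sum = np.zeros((h, w), np.float64)   # pixel value superposition graph
    count_sum = np.zeros((h, w), np.float64)   # diffusion count statistics graph
    for yy, xx in zip(*np.nonzero(bg_mask)):
        r = int(radius_map[yy, xx])
        wgt = float(weight_map[yy, xx])
        ky, kx = np.nonzero(aperture_kernels[r])        # first aperture kernel
        ty = np.clip(yy + ky - r, 0, h - 1)             # clamp targets to the image
        tx = np.clip(xx + kx - r, 0, w - 1)
        np.add.at(value_sum, (ty, tx), img[yy, xx] * wgt)   # scatter weighted value
        np.add.at(count_sum, (ty, tx), wgt)                 # scatter diffusion count
    return value_sum / np.maximum(count_sum, 1e-6)      # normalize the superposition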
In one embodiment, the image processing apparatus 1200 may further include: a background determination module.
The background determining module is used for removing the portrait area in the first image according to the blurring strength graph of the first image; and removing the hair region in the first image from which the portrait region is removed according to the hair mask to obtain a background mask of the first image.
Optionally, the background determining module may be further configured to, before removing the portrait area in the first image, erode the blurring strength map to expand the range of the portrait area, where the erosion radius corresponding to each pixel point in the blurring strength map may be positively correlated with the blurring radius corresponding to that pixel point; and to remove the portrait area in the first image according to the blurring strength graph with the expanded portrait area.
Optionally, the background determining module may be further configured to, before the hair region is removed, count the area of the hair outer contour in the hair mask, calculate an expansion radius according to the area ratio of the hair outer contour, and increase the expansion radius according to the blurring radius of the hair outer contour pixel points; to dilate the hair mask by the expansion radius so as to expand the hair region; and to increase the confidence of the pixels in the hair mask so that more pixels are identified as hair, further expanding the hair region. The area of the outer contour of the hair region may indicate the number of pixel points contained within the outer contour, and the increase in the expansion radius may be positively correlated with the blurring radius.
The background determining module may also be configured to remove the expanded hair region of the hair mask from the first image with the portrait area removed, so as to obtain the background mask of the first image.
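A rough sketch of this background-mask derivation; the fixed erosion kernel (the text varies the erosion radius per pixel with the blurring radius), the area-ratio scaling of the expansion radius, and all parameter values are simplifying assumptions.

import cv2
import numpy as np

def background_mask(strength_map: np.ndarray, hair_mask: np.ndarray) -> np.ndarray:
    # strength_map: uint8 blurring strengths (0 inside the portrait);
    # hair_mask: uint8 0/1 mask of pixels identified as hair.
    h, w = strength_map.shape
    # Eroding the strength map grows its zero-valued portrait region.
    eroded = cv2.erode(strength_map, np.ones((5, 5), np.uint8))
    # Expansion radius derived from the hair contour's area ratio (assumed scaling).
    radius = max(1, int(20.0 * hair_mask.sum() / float(h * w)))
    k = 2 * radius + 1
    hair = cv2.dilate(hair_mask, np.ones((k, k), np.uint8))
    bg = (eroded > 0).astype(np.uint8)   # portrait (strength 0) drops out
    bg[hair > 0] = 0                     # subtract the expanded hair region
    return bg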
In one embodiment, the image processing apparatus 1200 may further include: and a preprocessing module.
And the preprocessing module can be used for reducing the brightness component value of each first pixel point in the first image background area. The reduction amount of the brightness component value of the first pixel point can be in positive correlation with the brightness of the first image shooting scene.
For example, the preprocessing module may be configured to calculate a Gamma parameter for dimming the brightness, and perform curve mapping stretching on the brightness component value of each first pixel based on the calculated Gamma parameter to reduce the brightness component value of the first pixel.
The preprocessing module can be further configured to increase a chromaticity component value of each first pixel point included in a highlight area in the first image background area, so as to perform color enhancement on the highlight area and increase a color of the light source center. The amount of increase of the chromaticity component of the first pixel point in the highlight area can be in a negative correlation with the brightness of the first image shooting scene.
Illustratively, the preprocessing module may be configured to blur the chrominance component values of the first image; calculate a Gamma parameter for color enhancement according to the saturation parameter, and color-enhance the whole first image using the calculated Gamma parameter to obtain a color-enhanced first image; calculate a highlight mask of the first image; and color-enhance the highlight area of the first image according to the color-enhanced first image and the highlight mask of the first image.
In one embodiment, the obtaining module 1210 may include: a luminance weight generating unit and a chrominance weight generating unit.
And the brightness weight generating unit can be used for calculating the brightness weight corresponding to each first pixel point in the background area according to the brightness component value of each first pixel point in the background area of the first image so as to obtain a brightness weight map. The luminance weight map may include luminance weights corresponding to the first pixel points in the background region, and the luminance weights corresponding to the first pixel points may be in a positive correlation with the luminance component values of the first pixel points.
And the chrominance weight generating unit is used for calculating the chrominance weight corresponding to each first pixel point in the background area according to the luminance component value and the chrominance component value of each first pixel point in the background area of the first image. The chromaticity weight map may include chromaticity weights corresponding to the first pixel points in the background region, the chromaticity weights corresponding to the first pixel points may be in a positive correlation with the luminance component values of the first pixel points, and the chromaticity weights corresponding to the first pixel points may also be in a positive correlation with the chromaticity component values of the first pixel points.
In one embodiment, the luminance weight generating unit is further configured to set a luminance weight corresponding to a first pixel point in the background region of the first image, where the luminance component value is smaller than the luminance threshold, to zero; and calculating a brightness weight corresponding to the first pixel point included in the highlight region according to the highlight region area included in the background region of the first image.
Optionally, the luminance weight generating unit may be further configured to set, when the area of the highlight region is smaller than the area threshold, the luminance weight corresponding to each first pixel point included in the highlight region to be zero; and/or when the area of the highlight area is larger than or equal to the area threshold, reducing the brightness component value of each first pixel point contained in the highlight area according to the area of the highlight area, and taking the reduced brightness component value as the brightness weight corresponding to each first pixel point contained in the highlight area. The reduction amount of the brightness component value of each first pixel point in the highlight area is in positive correlation with the area of the highlight area.
In one embodiment, the luminance weight generating unit may be further configured to, when the background region includes two or more mutually disconnected highlight regions, first dilate each highlight region to connect adjacent highlight regions.
In an embodiment, the chrominance weight generating unit may be further configured to calculate the chroma value of each first pixel point according to the chrominance component value of each first pixel point in the background region; superimpose the luminance component value of each first pixel point in the background region with the chroma value of each first pixel point to obtain a component superposition value of each first pixel point in the background region; and determine the chromaticity weight corresponding to each first pixel point in the background region according to the component superposition value of each first pixel point in the background region.
Optionally, the chrominance weight generating unit may be further configured to perform chrominance stretching on the chrominance value of each first pixel point before superimposing the luminance component value of each first pixel point in the background region and the chrominance value of each first pixel point in the background region; and after the chroma stretching is carried out, superposing the brightness component value of each first pixel point in the background area with the stretched chroma value to obtain the component superposition value of each first pixel point in the background area.
The chroma stretching variation of the chroma value of any one first pixel point and the brightness component value of the first pixel point can be in positive correlation.
Optionally, the chrominance weight generating unit may be further configured to perform luminance stretching on the luminance component value of each first pixel in the background region, so as to obtain a stretched luminance component value of each first pixel in the background region; and comparing the component superposition value of the first pixel point with the stretched brightness component value of the first pixel point, and determining the maximum value obtained by comparison as the chromaticity weight corresponding to the first pixel point.
The brightness stretching variation of each first pixel point in the background area and the brightness component value of each first pixel point in the background area before stretching are in a positive correlation relationship.
In one embodiment, the image processing apparatus 1200 may further include: and a weight mapping module.
And the weight mapping module can be used for mapping the diffusion weight corresponding to the first pixel point according to the weight mapping curve to obtain the diffusion weight after the first pixel point is mapped. The weight mapping curve may be obtained by fusing a first curve and a second curve, the first curve may be a global smooth curve, and the second curve may be a local smooth curve stretched around the first brightness level.
Correspondingly, the first diffusion unit included in the diffusion module 1220 is further configured to calculate the target pixel value of each second pixel point in the diffusion range corresponding to the first pixel point according to the pixel value of the first pixel point in the first image, the mapped diffusion weight of the first pixel point, and the first aperture kernel.
The second diffusion unit included in the diffusion module 1220 is further configured to calculate the diffusion times of each second pixel point in the diffusion range corresponding to the first pixel point according to the mapped diffusion weight of the first pixel point and the first aperture kernel.
As can be seen, by implementing the image processing apparatus of the foregoing embodiments, point diffusion processing can be performed on each first pixel point in the background region, so that bright points form highlight light spots and dark points form dim circles of confusion. The first image is preprocessed before the point diffusion processing: the brightness of the background region is reduced to improve the overall contrast of the first image, while the highlight areas are color-enhanced to improve the saturation at the light sources. The energy of large highlight areas can be cut to reduce the luminance weights of the first pixel points within them, avoiding overexposure from stacked light spots, and the chromaticity weights of first pixel points with higher luminance and/or chrominance component values can be increased so that the light spots diffuse more color. The diffusion weights obtained from the weight map are mapped with a weight mapping curve that conforms to the light source energy distribution and transitions naturally, optimizing the overall layering and clarity of the light spots in the blurring effect. The aperture shapes of different lenses can also be simulated with different aperture shape parameters, making the light spot shapes more varied. In summary, the image processing apparatus can make the blurring effect more real and natural.
Referring to fig. 13, fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 13, the electronic device 1300 may include:
a memory 1310 in which executable program code is stored;
a processor 1320 coupled with the memory 1310;
the processor 1320 calls the executable program code stored in the memory 1310 to execute any one of the image processing methods disclosed in the embodiments of the present application.
It should be noted that the electronic device shown in fig. 13 may further include components that are not shown, such as a power supply, input keys, a camera, a speaker, a screen, an RF circuit, a Wi-Fi module, a Bluetooth module, and sensors, which are not described in detail in this embodiment.
The embodiment of the application discloses a computer readable storage medium which stores a computer program, wherein the computer program realizes any one of the image processing methods disclosed in the embodiment of the application when being executed by a processor.
An embodiment of the present application discloses a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute any one of the image processing methods disclosed in the embodiment of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are all alternative embodiments and that the acts and modules involved are not necessarily required for this application.
In various embodiments of the present application, it should be understood that the size of the serial number of each process described above does not mean that the execution sequence is necessarily sequential, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer accessible memory. Based on such understanding, the technical solution of the present application, which is a part of or contributes to the prior art in essence, or all or part of the technical solution, may be embodied in the form of a software product, stored in a memory, including several requests for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the above-described method of the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical storage, a magnetic disk, a tape memory, or any other computer-readable medium that can be used to carry or store data.
The foregoing detailed description has provided a detailed description of an image processing method, an image processing apparatus, an electronic device, and a storage medium, which are disclosed in the embodiments of the present application, and the principles and implementations of the present application are described herein using specific examples, and the descriptions of the foregoing embodiments are only used to help understand the method and the core idea of the present application. Meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (15)

1. An image processing method, characterized in that the method comprises:
acquiring aperture shape parameters, and acquiring a blurring strength graph and a weight graph corresponding to a first image, wherein the blurring strength graph at least comprises blurring radiuses corresponding to all first pixel points in a background region of the first image, and the weight graph at least comprises diffusion weights corresponding to all first pixel points in the background region of the first image;
performing point diffusion processing on each first pixel point in the background area of the first image according to the aperture shape parameter, the blurring strength graph and the weight graph to obtain a diffusion result corresponding to each first pixel point;
and superposing the diffusion results corresponding to the first pixel points, and blurring the background area according to the superposition results.
2. The method of claim 1, wherein the diffusion result corresponding to each first pixel point in the background region comprises: the diffusion range corresponding to the first pixel point, the target pixel value of each second pixel point in the diffusion range, and the diffusion times of each second pixel point in the diffusion range; the diffusion range corresponding to the first pixel point takes the first pixel point as the center, the radius of a circumscribed circle of the diffusion range is the blurring radius corresponding to the first pixel point, and the outline of the diffusion range is the aperture shape indicated by the aperture shape parameters.
3. The method according to claim 2, wherein the performing a point diffusion process on each first pixel point in the background region of the first image according to the aperture shape parameter, the blurring strength map, and the weight map to obtain a diffusion result corresponding to each first pixel point comprises:
aiming at each first pixel point in the background area of the first image, acquiring a blurring radius corresponding to the first pixel point from the blurring strength graph, and acquiring a diffusion weight corresponding to the first pixel point from the weight graph;
determining a first aperture kernel corresponding to the first pixel point according to the blurring radius corresponding to the first pixel point and the aperture shape parameters, wherein the first aperture kernel is used for indicating the diffusion range of the first pixel point;
calculating a target pixel value of each second pixel point in a diffusion range corresponding to the first pixel point according to the pixel value of the first pixel point in the first image, the diffusion weight corresponding to the first pixel point and the first aperture kernel;
and calculating the diffusion times of each second pixel point in the diffusion range corresponding to the first pixel point according to the diffusion weight corresponding to the first pixel point and the first aperture kernel.
4. The method according to claim 2 or 3, wherein the superimposing the diffusion results corresponding to the first pixel points in the background region and blurring the background region according to the superimposed results comprises:
superposing the target pixel value of each second pixel point contained in the diffusion range corresponding to each first pixel point in the background region to a pixel value superposition graph, and superposing the diffusion times of each second pixel point contained in the diffusion range corresponding to each first pixel point in the background region to a time statistic graph;
and blurring each first pixel point according to the pixel value of each first pixel point in the superposed pixel value superposed image in the background area and the total diffusion number of each first pixel point in the superposed times statistical image.
5. The method of claim 1, wherein the weight map comprises: a luminance weight map and a chrominance weight map; the diffusion weight comprises a luminance weight and a chrominance weight; and acquiring a weight map corresponding to the first image, wherein the weight map comprises the following steps:
calculating the brightness weight corresponding to each first pixel point according to the brightness component value of each first pixel point in the background area of the first image to obtain a brightness weight graph; the brightness component value of each first pixel point in the background region of the first image is in positive correlation with the corresponding brightness weight;
calculating the chromaticity weight corresponding to each first pixel point according to the brightness component value and the chromaticity component value of each first pixel point, so as to obtain a chromaticity weight graph; the chromaticity component value and the brightness component value of each first pixel point in the background area of the first image are in positive correlation with the corresponding chromaticity weight.
6. The method according to claim 5, wherein said calculating the luminance weight corresponding to each first pixel point according to the luminance component value of each first pixel point in the background region of the first image comprises:
calculating a brightness weight corresponding to a first pixel point contained in a highlight area according to the area of the highlight area contained in a background area of the first image; the brightness component values of the first pixel points contained in the highlight area are all larger than or equal to the brightness threshold value.
7. The method according to claim 6, wherein calculating the brightness weight corresponding to a first pixel point included in the highlight region according to an area of the highlight region included in the background region of the first image comprises:
if the area of the highlight area is smaller than an area threshold, setting the brightness weight corresponding to each first pixel point contained in the highlight area to be zero; and/or,
if the area of the highlight area is larger than or equal to the area threshold, reducing the brightness component value of each first pixel point contained in the highlight area according to the area of the highlight area, and taking the reduced brightness component value as the brightness weight corresponding to each first pixel point contained in the highlight area; the reduction amount of the brightness component value of each first pixel point included in the highlight area is in positive correlation with the area of the highlight area.
8. The method according to claim 5, wherein said calculating the chroma weight corresponding to each first pixel point according to the luma component value and the chroma component value of each first pixel point comprises:
calculating the chroma value of each first pixel point according to the chroma component value of each first pixel point in the background area;
superposing the brightness component value of each first pixel point in the background area and the chromatic value of each first pixel point to obtain a component superposition value of each first pixel point in the background area;
and determining the chroma weight corresponding to each first pixel point in the background region according to the component superposition value of each first pixel point in the background region.
9. The method of claim 8, wherein the determining the chroma weight corresponding to each first pixel point in the background region according to the component superposition value of each first pixel point in the background region comprises:
performing brightness stretching on the brightness component value of each first pixel point in the background area to obtain the stretched brightness component value of each first pixel point in the background area; the brightness stretching variation of each first pixel point in the background area and the brightness component value of each first pixel point in the background area before stretching are in positive correlation;
and aiming at each first pixel point in the background region, comparing the component superposition value of the first pixel point with the stretched brightness component value of the first pixel point, and determining the maximum value obtained by comparison as the chromaticity weight corresponding to the first pixel point.
10. The method of claim 8, wherein before said superimposing the luminance component value of each first pixel in the background region with the chrominance value of each first pixel to obtain the component superimposed value of each first pixel in the background region, the method further comprises:
carrying out chroma stretching on the chroma value of each first pixel point in the background area to obtain the stretched chroma value of each first pixel point in the background area; the chroma stretching variation of each first pixel point in the background area is in positive correlation with the brightness component value of that first pixel point;
and the superimposing the brightness component value of each first pixel point in the background region and the chromatic value of each first pixel point to obtain the component superimposed value of each first pixel point in the background region, including:
and superposing the brightness component value of each first pixel point in the background area and the stretched chromatic value of each first pixel point to obtain a component superposition value of each first pixel point in the background area.
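
For illustration, a minimal sketch of claim 10's chroma stretch; the multiplicative form and the factor b are assumptions, chosen only so that the stretch variation rises with the stretched brightness component value as the claim requires. In this reading, the stretched chroma value simply replaces the raw chroma value in claim 8's superposition.

    import numpy as np

    def stretch_chroma(chroma, stretched_y, b=0.5):
        # Stretch variation b * chroma * stretched_y / 255 grows with the
        # stretched brightness component value (assumed form).
        return chroma * (1.0 + b * stretched_y.astype(np.float32) / 255.0)
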
11. The method of claim 3, wherein after obtaining the diffusion weight corresponding to the first pixel point from the weight map, the method further comprises:
mapping the diffusion weight corresponding to the first pixel point according to a weight mapping curve to obtain the mapped diffusion weight of the first pixel point; wherein the weight mapping curve is obtained by fusing a first curve and a second curve, the first curve being a globally smooth curve and the second curve being a locally smooth curve stretched around a first brightness level;
and the calculating the target pixel value of each second pixel point in the diffusion range corresponding to the first pixel point according to the pixel value of the first pixel point in the first image, the diffusion weight corresponding to the first pixel point and the first aperture kernel comprises:
calculating the target pixel value of each second pixel point in the diffusion range corresponding to the first pixel point according to the pixel value of the first pixel point in the first image, the mapped diffusion weight of the first pixel point and the first aperture kernel;
and the calculating the diffusion times of each pixel point in the diffusion range corresponding to the first pixel point according to the diffusion weight corresponding to the first pixel point and the first aperture kernel comprises:
calculating the diffusion times of each pixel point in the diffusion range corresponding to the first pixel point according to the mapped diffusion weight of the first pixel point and the first aperture kernel.
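
The sketch below is one plausible reading of claim 11's weight mapping curve, not the patent's actual curve: a smoothstep stands in for the globally smooth first curve, a tanh term steepened around a brightness level stands in for the locally stretched second curve, and a plain blend stands in for the fusion. The parameters level, sigma and alpha are guesses.

    import numpy as np

    def map_diffusion_weight(w, level=0.7, sigma=0.1, alpha=0.5):
        # w: diffusion weight(s) normalised to [0, 1]
        w = np.asarray(w, dtype=np.float32)
        # Globally smooth first curve (smoothstep).
        global_curve = w * w * (3.0 - 2.0 * w)
        # Locally smooth second curve, stretched (steepened) around `level`.
        local_curve = np.clip(w + sigma * np.tanh((w - level) / sigma), 0.0, 1.0)
        # Fuse the two curves into the final mapping.
        return alpha * global_curve + (1.0 - alpha) * local_curve
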
12. The method of claim 3, further comprising:
acquiring a plurality of aperture images with different aperture shapes;
and scaling each of the plurality of aperture images a plurality of times to obtain a plurality of aperture kernels corresponding to each aperture shape, wherein each aperture kernel corresponds to one blurring radius.
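
As a rough illustration of claim 12, the OpenCV sketch below scales one aperture-shape image to a set of normalised kernels, one per blurring radius. The kernel size (2r+1) x (2r+1), the interpolation mode and the normalisation are assumptions of this sketch.

    import cv2
    import numpy as np

    def build_aperture_kernels(aperture_image, radii):
        # aperture_image: grayscale image of one aperture shape (e.g. a hexagon)
        # radii: integer blurring radii; one kernel per radius
        kernels = {}
        for r in radii:
            size = 2 * r + 1
            k = cv2.resize(aperture_image, (size, size),
                           interpolation=cv2.INTER_AREA).astype(np.float32)
            kernels[r] = k / max(float(k.sum()), 1e-6)  # keep kernel energy at 1
        return kernels
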
13. An image processing apparatus, characterized by comprising:
an acquiring module, configured to acquire an aperture shape parameter, and to acquire a blurring strength map and a weight map corresponding to a first image, wherein the blurring strength map at least comprises the blurring radius corresponding to each first pixel point in a background region of the first image, and the weight map at least comprises the diffusion weight corresponding to each first pixel point in the background region of the first image;
a diffusion module, configured to perform point diffusion processing on each first pixel point in the background region of the first image according to the aperture shape parameter, the blurring strength map and the weight map to obtain a diffusion result corresponding to each first pixel point;
and a superposition blurring module, configured to superimpose the diffusion results corresponding to the first pixel points in the background region and to blur the background region according to the superposition result.
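
For illustration only, a skeletal Python rendering of claim 13's module layout; the three callables are placeholders, since the claim does not define their internals, and all names here are this sketch's own.

    class ImageProcessingApparatus:
        def __init__(self, acquiring_module, diffusion_module, blurring_module):
            self.acquire = acquiring_module   # aperture parameter + both maps
            self.diffuse = diffusion_module   # per-pixel point diffusion
            self.blur = blurring_module       # superposition + background blur

        def process(self, first_image):
            shape_param, strength_map, weight_map = self.acquire(first_image)
            diffusion_results = self.diffuse(first_image, shape_param,
                                             strength_map, weight_map)
            return self.blur(first_image, diffusion_results)
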
14. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program that, when executed by the processor, causes the processor to implement the method of any one of claims 1 to 12.
15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 12.

Priority Applications (1)

Application Number: CN202110962242.9A
Priority Date / Filing Date: 2021-08-20
Title: Image processing method, device, electronic equipment and storage medium
Status: Active; granted as CN113709365B

Publications (2)

CN113709365A (application publication): 2021-11-26
CN113709365B (granted publication): 2023-05-02

Family ID: 78653932

Country Status (1)

CN: CN113709365B

Patent Citations (2)

* Cited by examiner, † Cited by third party

CN108230234A, priority date 2017-05-19, published 2018-06-29, assignee Shenzhen SenseTime Technology Co., Ltd. (深圳市商汤科技有限公司): Image blurring processing method, apparatus, storage medium and electronic device
CN108156378A, priority date 2017-12-27, published 2018-06-12, assignee Nubia Technology Co., Ltd. (努比亚技术有限公司): Photographing method, mobile terminal and computer-readable storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant