WO2024007948A1 - Stroboscopic image processing method and apparatus, electronic device, and readable storage medium - Google Patents
Stroboscopic image processing method and apparatus, electronic device, and readable storage medium
- Publication number
- WO2024007948A1 (PCT/CN2023/103904, CN2023103904W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- mask
- low
- frequency
- frequency image
- Prior art date
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 28
- 230000000875 corresponding effect Effects 0.000 claims abstract description 65
- 238000000034 method Methods 0.000 claims abstract description 55
- 238000000926 separation method Methods 0.000 claims abstract description 16
- 238000001914 filtration Methods 0.000 claims abstract description 12
- 230000002596 correlated effect Effects 0.000 claims abstract description 8
- 238000004891 communication Methods 0.000 claims description 7
- 238000004590 computer program Methods 0.000 claims description 3
- 230000008569 process Effects 0.000 description 30
- 230000008030 elimination Effects 0.000 description 12
- 238000003379 elimination reaction Methods 0.000 description 12
- 230000006870 function Effects 0.000 description 10
- 230000000694 effects Effects 0.000 description 8
- 238000010586 diagram Methods 0.000 description 7
- 238000005516 engineering process Methods 0.000 description 5
- 230000009467 reduction Effects 0.000 description 5
- 230000001360 synchronised effect Effects 0.000 description 5
- 230000003044 adaptive effect Effects 0.000 description 4
- 230000001629 suppression Effects 0.000 description 3
- 238000013473 artificial intelligence Methods 0.000 description 2
- 230000003190 augmentative effect Effects 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 230000003068 static effect Effects 0.000 description 2
- 238000004422 calculation algorithm Methods 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000007599 discharging Methods 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 230000002441 reversible effect Effects 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10141—Special mode during image acquisition
- G06T2207/10144—Varying exposure
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20004—Adaptive image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20224—Image subtraction
Definitions
- the present application belongs to the field of image processing technology, and specifically relates to a stroboscopic image processing method, device, electronic equipment and readable storage medium.
- the flash frequency of the current strobe light can be estimated through hardware, and the shutter time can be set to 1 or 2 times the strobe cycle to avoid banding in the captured pictures.
- a deep learning raw image file (RAW) domain image processing scheme can also be used, which performs banding elimination on the entire image.
- in that scheme, banding elimination is applied to the whole image indiscriminately.
- however, because the banding stripe area is relatively dark, it carries greater noise; if banding is eliminated over the entire image, the noise in the darker areas is easily exposed after banding elimination and the original image quality is lost.
- the purpose of the embodiments of the present application is to provide a stroboscopic image processing method, device, electronic equipment and readable storage medium, which can reduce the noise exposure of darker areas and reduce the loss of original image quality in the process of eliminating banding stripes in an image, thereby achieving a good debanding effect.
- embodiments of the present application provide a stroboscopic image processing method, which method includes:
- Obtaining a first image captured under a stroboscopic light source, where the first image includes a color band and the first image is an original RAW domain image;
- performing high- and low-frequency image separation on the first image to obtain a first high-frequency image and a first low-frequency image;
- a target mask is determined according to the first low-frequency image, wherein the mask value in the target mask is negatively correlated with the brightness and color band degree of the area corresponding to the mask value in the first low-frequency image.
- the color bands in the first low-frequency image are filtered according to the target mask to obtain a second low-frequency image, wherein the RAW domain values located in the first area of the second low-frequency image are larger than the RAW domain values located in the second area of the first low-frequency image.
- the first area includes the area where the color band is located and does not include the area where the dark area is located.
- the dark area is an area where the brightness value is less than the preset brightness threshold.
- the first area corresponds to the second area;
- the second low-frequency image and the first high-frequency image are superimposed to obtain an output image.
- a stroboscopic image processing device which includes:
- An acquisition module configured to acquire a first image captured under a stroboscopic light source, where the first image includes a color band, and the first image is an original RAW domain image;
- a first processing module configured to separate high- and low-frequency images on the first image to obtain a first high-frequency image and a first low-frequency image
- a determining module configured to determine a target mask according to the first low-frequency image, wherein the mask value in the target mask is negatively correlated with the brightness and the degree of color banding of the area corresponding to the mask value in the first low-frequency image;
- a second processing module configured to filter the color bands in the first low-frequency image according to the target mask to obtain a second low-frequency image, wherein the RAW domain values located in the first area of the second low-frequency image are larger than the RAW domain values located in the second area of the first low-frequency image.
- the first area includes the area where the color band is located and does not include the area where the dark area is located.
- the dark area is an area whose brightness value is less than the preset brightness threshold, and the first area corresponds to the second area;
- the third processing module is used to superimpose the second low-frequency image and the first high-frequency image to obtain an output image.
- embodiments of the present application provide an electronic device.
- the electronic device includes a processor and a memory.
- the memory stores programs or instructions that can be run on the processor.
- when the programs or instructions are executed by the processor, the steps of the method described in the first aspect are implemented.
- embodiments of the present application provide a readable storage medium.
- Programs or instructions are stored on the readable storage medium.
- when the programs or instructions are executed by a processor, the steps of the method described in the first aspect are implemented.
- embodiments of the present application provide a chip.
- the chip includes a processor and a communication interface.
- the communication interface is coupled to the processor.
- the processor is used to run programs or instructions to implement the method described in the first aspect.
- embodiments of the present application provide a computer program product, the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the method as described in the first aspect.
- in the embodiments of the present application, when a first image with a color band is captured under a stroboscopic light source, the image can be separated into a high-frequency image and a low-frequency image, and the color band in the first low-frequency image can be filtered based on the target mask.
- this filtering process can be understood as the process of removing the color band/stroboscopic stripes in the banding area (that is, the area where the banding is located) of the first low-frequency image, i.e., a debanding process.
- by adjusting the mask values of the target mask, the debanding intensity applied to the black areas and shadow areas of the original image itself can be reduced, thereby reducing the noise exposure caused to those areas during debanding and reducing the loss of original image quality.
- in addition, because the RAW domain values of the banding area are increased during debanding, the RAW domain values of the banding area (i.e. the first area) in the second low-frequency image are relatively increased, which means the relative contribution of the high-frequency image in the banding area is reduced.
- in this way, when the second low-frequency image with increased RAW domain values in the banding area is superimposed with the high-frequency image whose RAW domain values in the banding area are unchanged, the proportion of the high-frequency image in the banding area is reduced.
- as a result, the final output image can maintain the clarity of the non-banding area through the high-frequency image while maintaining the debanding effect of the banding area through the debanded low-frequency image, so the ratio between high- and low-frequency information is changed according to the banding intensity, achieving adaptive noise reduction.
- Figure 1 is a flow chart of a stroboscopic image processing method provided by an embodiment of the present application
- Figure 2 is a schematic diagram of the data processing process of a stroboscopic image processing method provided by an embodiment of the present application
- FIG. 3 is a schematic diagram of the processing process of the mean RAW image
- Figure 4 is a schematic structural diagram of the preset color band recognition model
- Figure 5 is a flow chart of another stroboscopic image processing method provided by an embodiment of the present application.
- Figure 6 is a schematic structural diagram of a stroboscopic image processing device provided by an embodiment of the present application.
- Figure 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- FIG. 8 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
- the terms "first", "second", etc. in the description and claims of this application are used to distinguish similar objects rather than to describe a specific order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances so that the embodiments of the present application can be practiced in orders other than those illustrated or described herein; moreover, the objects distinguished by "first", "second", etc. are usually of one type, and the number of objects is not limited. For example, the first object can be one or multiple.
- "and/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates that the related objects are in an "or" relationship.
- the execution subject may be an electronic device, such as a mobile phone or a camera; alternatively, the execution subject may be a tablet computer, a notebook computer, a handheld computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), etc., and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine or a self-service machine, etc., which is not specifically limited in the embodiments of this application.
- the stroboscopic image processing method may include the following steps:
- Step 101 Obtain a first image captured under a stroboscopic light source, the first image includes a color band, and the first image is an original RAW domain image.
- the first image may include one image or at least two images. When the first image includes at least two images, at least one of them has a color band that needs to be removed.
- when the first image includes one image, that image is an image with color bands, and the stroboscopic image processing method provided by the embodiment of the present application is used to remove the color bands in it.
- Step 102 Perform high- and low-frequency image separation on the first image to obtain a first high-frequency image and a first low-frequency image.
- the first low-frequency image can be understood as the lower-definition partial image content of the first image
- the first high-frequency image can be the higher-definition partial image content of the first image
- performing high- and low-frequency image separation on the first image to obtain a first high-frequency image and a first low-frequency image includes:
- the first low-frequency image is obtained by performing mean filtering on the first image (a RAW domain image); for example, mean filtering with a kernel size of 5×5 is performed on the first image to obtain the first low-frequency image.
- the first low-frequency image may be a single-channel image in the RAW domain.
- the above-mentioned first high-frequency image is the RAW domain image obtained by subtracting the first low-frequency image from the first image.
- the ratio of the first low-frequency image to the first high-frequency image may be 1:1, 1:2, or 2:1, etc., which is not specifically limited here.
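A minimal sketch of this separation step (not the patent's exact implementation) is shown below: a 5×5 box (mean) filter produces the low-frequency image and the residual is kept as the high-frequency image. The function and array names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def separate_frequencies(raw: np.ndarray, kernel_size: int = 5):
    """Split a single-channel RAW image into low- and high-frequency parts.

    A kernel_size x kernel_size mean filter gives the low-frequency image;
    subtracting it from the input leaves the high-frequency detail.
    """
    low = uniform_filter(raw.astype(np.float32), size=kernel_size)
    high = raw.astype(np.float32) - low
    return low, high

# Usage (raw_frame would be the captured H x W RAW array):
# low, high = separate_frequencies(raw_frame)
```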
- Step 103 Determine a target mask according to the first low-frequency image, wherein the mask value in the target mask is negatively correlated with the brightness and the degree of color banding of the area corresponding to the mask value in the first low-frequency image.
- the dimensions of the target mask and the first low-frequency image may be the same, and each mask value in the target mask may correspond to a pixel in the first low-frequency image.
- in this case, the area in the first low-frequency image corresponding to a mask value in the target mask may be the pixel in the first low-frequency image corresponding to that mask value.
- the dimensions of the target mask and the first low-frequency image may also be different.
- in that case, the area in the first low-frequency image corresponding to a mask value in the target mask may be at least one pixel or part of a pixel in the first low-frequency image, which is not specifically limited here.
- in one implementation, an image without banding can be obtained by shooting at a shutter frequency corresponding to the flash frequency of the stroboscopic light source, and this image can then be compared with the image with banding to obtain the banding area and the banding degree at each pixel position within it.
- in addition, the dark area in the first low-frequency image (i.e., the area whose brightness is less than or equal to the preset brightness threshold) and the brightness at each pixel position in the dark area can be determined from the black level value of the first low-frequency image; the target mask can then be jointly determined based on the dark area and the banding area, so that the mask value in the target mask is negatively correlated with the brightness of the corresponding area and with its degree of color banding.
- of course, in practical applications, the banding area and/or dark area in the image with banding can also be obtained through other methods, for example by using an artificial intelligence (AI) network model to analyze the image with banding, which is not specifically limited here.
- Step 104 Filter the color bands in the first low-frequency image according to the target mask to obtain a second low-frequency image, wherein the RAW domain values located in the first area of the second low-frequency image are larger than the RAW domain values located in the second area of the first low-frequency image; the first area includes the area where the color band is located and does not include the area where the dark area is located, the dark area is an area whose brightness value is less than the preset brightness threshold, and the first area corresponds to the second area.
- the above-mentioned first area may be a banding area in the second low-frequency image
- the above-mentioned second area may be a banding area in the first low-frequency image
- the correspondence between the first area and the second area may be:
- when the first low-frequency image and the second low-frequency image have the same size, the position of the first area in the second low-frequency image is the same as the position of the second area in the first low-frequency image. For example: if the first low-frequency image is overlaid on the second low-frequency image, the first area overlaps with the second area.
- Step 105 Superimpose the second low-frequency image and the first high-frequency image to obtain an output image.
- in implementation, the color band (in non-dark areas) in the first low-frequency image is processed using the target mask.
- after this filtering, the RAW domain values of the area where the color band is located, excluding the dark area, are increased in the second low-frequency image.
- in this way, when the second low-frequency image and the first high-frequency image are superimposed, the proportion contributed by the RAW domain values of the first high-frequency image in the color band area (excluding the dark area) is reduced, so the resulting output image retains more of the debanded color band area from the second low-frequency image and the impact of banding in the first high-frequency image on the output image is reduced.
- as a result, the output image can retain the high-frequency content of the first high-frequency image (for example, when a high shutter frequency is needed to shoot moving objects, the first high-frequency image improves the clarity of the pixel content corresponding to the moving object in the output image), while the increased proportion of the second low-frequency image's RAW domain values in the debanded area achieves the debanding effect.
- the output image obtained above can be a RAW domain image, or an image in other formats such as RGB.
- for example, assuming the device that performs the stroboscopic image processing method provided by the embodiment of the present application is a device with a camera, such as a mobile phone or a camera,
- the device can superimpose the second low-frequency image and the first high-frequency image to obtain a RAW image with the strobe eliminated, and then return this RAW image to the camera pipeline.
- after the RAW image is converted into an RGB image through image signal processing (ISP) simulation, the RGB image is returned to the user by display or other means.
- as an optional implementation, the first image includes a first sub-image (for ease of explanation, labeled input1 in the following embodiments) and a second sub-image (for ease of explanation, labeled input2 in the following embodiments); the first sub-image is captured based on a first shutter frequency and the second sub-image is captured based on a second shutter frequency, where the first shutter frequency is related to the movement speed of the photographed object and the second shutter frequency is related to the flash frequency of the stroboscopic light source.
- the second shutter frequency, which is based on the flash frequency of the stroboscopic light source, can be 1 or 2 times the flash frequency, so that there are no banding stripes in the second sub-image.
- the flash frequency of light is usually 60Hz or 50Hz
- the second shutter frequency can be 1 or 2 times of 60Hz or 50Hz.
- for example, taking a flash frequency of 60 Hz as an example, the shutter speed corresponding to the second shutter frequency can be 1/120 s or 1/60 s.
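As a quick arithmetic check of this relationship (illustration only, not part of the patent text), the shutter period implied by a flash frequency f and a shutter frequency of k·f is 1/(k·f):

```python
def shutter_period(flash_hz: float, multiple: int) -> float:
    """Shutter period in seconds when the shutter frequency is `multiple` x the flash frequency."""
    return 1.0 / (flash_hz * multiple)

print(shutter_period(60, 2))  # 0.00833... s, i.e. 1/120 s
print(shutter_period(60, 1))  # 0.01666... s, i.e. 1/60 s
```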
- the flash frequency of the light can be obtained by detecting or receiving instructions.
- this second shutter frequency usually cannot meet the needs of fast-moving objects to be photographed.
- for example, when the photographed object moves too fast, it will appear blurred, so the second shutter frequency cannot meet the need to photograph fast-moving objects clearly.
- the above-mentioned first shutter frequency may be a shutter frequency that matches the movement speed of the photographed object. Based on the first shutter frequency, the moving photographed object can be clearly photographed, that is, the photographed object in the first sub-image is clear.
- after the first sub-image and the second sub-image are captured, they can each be separated into high- and low-frequency parts, and the two low-frequency images obtained by separating the first sub-image and the second sub-image are used to determine the target mask; the target mask is then used to perform debanding on the low-frequency image separated from the first sub-image, and the high-frequency image separated from the first sub-image (which has a clear subject) is superimposed with the debanded low-frequency image to obtain the output image.
- performing high- and low-frequency image separation on the first image to obtain a first high-frequency image and a first low-frequency image includes:
- the first sub-image (input1) is separated into high- and low-frequency images to obtain the first sub-high-frequency image (for ease of explanation, the first sub-high-frequency image is marked as input1_g in the following embodiments) and the first sub-low-frequency image ( For ease of explanation, the first sub-low-frequency image is marked as input1_d) in the following embodiments;
- the second sub-image (input2) is separated into high- and low-frequency images to obtain the second sub-high-frequency image (for ease of explanation, the second sub-high-frequency image is marked as input2_g in the following embodiments) and the second sub-low-frequency image ( For ease of explanation, the second sub-low-frequency image is marked as input2_d) in the following embodiments;
- the first high-frequency image includes the first sub-high-frequency image and the second sub-high-frequency image
- the first low-frequency image includes the first sub-low-frequency image and the second sub-low-frequency image.
- the banding area can be obtained by comparing the second sub-high-frequency image without banding with the first sub-low-frequency image with banding.
- as shown in Figure 2, the data processing process of the embodiment of the present application may include: determining the banding area mask based on input1_d and input2_d → determining the target mask based on the dark area mask of the darker areas in input1_d and the banding area mask → using the target mask to perform debanding on input1_d → superimposing input1_g and the debanded input1_d to obtain the output image (hereinafter marked as output).
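A compact sketch of this data flow is given below; the helper functions for the banding mask, the dark-area mask and the mask combination are placeholders whose names and signatures are assumptions, since the patent describes them only at the level above.

```python
import numpy as np

def deband_pipeline(input1_d, input1_g, input2_d,
                    banding_mask_fn, dark_mask_fn, adjust_mask_fn):
    """Sketch of the Figure-2 data flow: mask estimation, low-frequency debanding, recombination.

    input1_d / input1_g : low- and high-frequency parts of the short-exposure frame
    input2_d            : low-frequency part of the banding-free reference frame
    """
    banding_mask = banding_mask_fn(input1_d, input2_d)   # "second mask" from the model
    dark_mask = dark_mask_fn(input1_d)                   # "first mask" from dark areas
    mask_z = adjust_mask_fn(banding_mask, dark_mask)     # target mask
    input1_dz = input1_d / np.clip(mask_z, 1e-6, None)   # pixel-wise debanding
    return input1_dz + input1_g                          # output image
```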
- determining the target mask according to the first low-frequency image includes:
- determining a mean RAW image corresponding to the first sub-low-frequency image from the RAW image obtained by stacking the four types of data (Gr, Gb, R, B) in the first sub-low-frequency image input1_d along the channel dimension (for ease of explanation, the mean RAW image is marked as input1_d_avg in the following embodiments);
- the mean RAW image includes average-processed G channel data, average-processed B channel data and average-processed R channel data, where the G channel data is determined from the average of the Gr channel data and Gb channel data in the first sub-low-frequency image;
- normalizing the mean RAW image according to the black level value and the maximum bit-depth value of the mean RAW image to obtain a normalized RAW image (for ease of explanation, the normalized RAW image is marked as x in the following embodiments);
- determining a first mask based on the area in the normalized RAW image whose pixel values are less than or equal to a first preset threshold (marked as s), where a smaller mask value in the first mask indicates a lower brightness of the corresponding area (the first mask is marked as z in the following embodiments);
- concatenating the first sub-low-frequency image and the second sub-low-frequency image along the channel dimension to obtain an intermediate image, and inputting the intermediate image into a preset color band recognition model to obtain a second mask (marked as mask), where a smaller mask value in the second mask indicates a heavier color band in the corresponding area;
- adjusting the second mask according to the first mask to obtain the target mask (for ease of explanation, the target mask is marked as mask_z in the following embodiments).
- specifically, the step of determining the target mask according to the first low-frequency image may include the following process:
- 1. Assume that the RAW images of input1 and input2 have a size of W×H. input1_d is a single-channel image in the RAW domain and has a specific Bayer format, which is assumed to be GBRG in this embodiment. As shown in Figure 3, a pack operation is first performed on the RAW image to stack the four different types of data (Gr, Gb, R, B) together along the channel dimension; the Gr and Gb channels are then averaged to obtain an average G channel; finally, the values of the G, B and R channels are averaged to obtain the mean RAW image (input1_d_avg), whose size is W/2×H/2.
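A short sketch of this pack-and-average step for a GBRG Bayer frame is shown below; the exact mapping of the four planes to Gr/Gb/R/B depends on the sensor convention, so the indexing here is an assumption.

```python
import numpy as np

def mean_raw(input1_d: np.ndarray) -> np.ndarray:
    """Pack a GBRG Bayer image (H x W) into four planes and build the mean RAW image."""
    gb = input1_d[0::2, 0::2].astype(np.float32)  # G on the blue row (assumed position)
    b  = input1_d[0::2, 1::2].astype(np.float32)
    r  = input1_d[1::2, 0::2].astype(np.float32)
    gr = input1_d[1::2, 1::2].astype(np.float32)  # G on the red row (assumed position)
    g = 0.5 * (gr + gb)            # averaged G channel
    return (g + b + r) / 3.0       # mean RAW image input1_d_avg, size (H/2) x (W/2)
```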
- in the dark-area suppression calculation, x is the input normalized image, s is the first preset threshold (that is, the dark-area suppression threshold), and clamp() represents numerical clipping.
- through this calculation, the normalized RAW image can be clipped to the range 0-1.
- when a pixel value in x is greater than s, the brightness at that position does not need to be suppressed and lies in the normal brightness range; when a pixel value in x is lower than s, that position lies in a dark area, and subsequent banding elimination in this area needs to be weakened; the brightness transition is smooth and continuous, and the smaller the value, the darker the point.
- the exponential can be computed multiple times to widen the numerical gradient interval and range, so that the value range of the final first mask z is spread between 0 and 1.
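The dark-area suppression formula itself is not reproduced in this text, so the sketch below is just one plausible reading of the surrounding description (values above s are left untouched, values below s shrink smoothly, clamping keeps everything in 0-1, and repeated exponentiation spreads the final mask z over (0, 1]); the function name and the number of exponentiation passes are assumptions.

```python
import numpy as np

def dark_area_mask(x: np.ndarray, s: float, power_passes: int = 2) -> np.ndarray:
    """Illustrative first-mask (z) computation; not the patent's exact formula.

    x : normalized mean RAW image, expected in [0, 1]
    s : first preset threshold (dark-area suppression threshold)
    """
    x = np.clip(x, 0.0, 1.0)
    # 1 for bright pixels; decreases smoothly toward 0 as pixels get darker than s.
    z = np.where(x >= s, 1.0, x / max(s, 1e-6))
    for _ in range(power_passes):
        z = z ** 2  # repeated exponentiation widens the gradient range over (0, 1]
    return z
```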
- in addition, the RAW domain values of the two G channels in the second mask can also be averaged, so that the relative proportions of Gr and Gb are kept consistent and the grid pattern that subsequent banding elimination could otherwise cause in the RAW image is avoided.
- optionally, determining the target mask according to the first low-frequency image further includes: obtaining a first mask value corresponding to the Gr data channel in the second mask; obtaining a second mask value corresponding to the Gb data channel in the second mask; and updating the mask values corresponding to the Gr data channel and the Gb data channel in the second mask according to the average of the first mask value and the second mask value, where mask[Gr] represents the first mask value and mask[Gb] represents the second mask value.
- correspondingly, adjusting the second mask according to the first mask to obtain the target mask includes: adjusting the updated second mask according to the first mask to obtain the target mask.
- in this way, the RAW domain values of the two G channels in the second mask are averaged, which avoids the grid pattern in the RAW image that subsequent banding elimination could otherwise cause.
- adjusting the second mask according to the first mask to obtain a target mask includes:
- the mask value corresponding to the target area in the second mask is reduced to obtain a target mask, wherein in the first mask, the mask value corresponding to the target area is less than or equal to the second preset threshold, and in the second mask, the mask value corresponding to the target area is less than or equal to the third preset threshold.
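The two mask adjustments described above can be sketched as follows; the packed mask layout, the channel indices, the threshold names and the reduction factor are all assumptions, since the patent fixes only the qualitative behaviour.

```python
import numpy as np

GR, GB = 3, 0  # assumed positions of the Gr and Gb planes in the packed mask

def adjust_second_mask(second_mask: np.ndarray, first_mask: np.ndarray,
                       thr2: float, thr3: float, reduce_factor: float = 0.5) -> np.ndarray:
    """Average the Gr/Gb planes of the second mask, then reduce it in the target area.

    second_mask : packed (4, H/2, W/2) banding mask, smaller value = heavier banding
    first_mask  : (H/2, W/2) dark-area mask, smaller value = darker area
    The target area is where first_mask <= thr2 and second_mask <= thr3.
    """
    mask = second_mask.astype(np.float32).copy()
    g_avg = 0.5 * (mask[GR] + mask[GB])
    mask[GR] = g_avg            # keep the relative proportions of Gr and Gb consistent
    mask[GB] = g_avg
    target = (first_mask <= thr2) & (mask <= thr3)  # broadcasts over the 4 planes
    return np.where(target, mask * reduce_factor, mask)
```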
- after the target mask is obtained, input1_d can be divided by mask_z pixel by pixel to obtain the second low-frequency image with banding eliminated (for ease of explanation, the second low-frequency image is labeled input1_dz in the following embodiments).
- in this way, the pixel values of the banding area in input1_dz are enlarged.
- the low-frequency image input1_d is a single-channel image in the RAW domain, and there will be a specific Bayer format, such as: GBRG, RGGB, BGGR, etc.
- the stroboscopic image processing method provided by the embodiment of the present application is explained by taking the RAW domain image in GBRG format as an example. However, in implementation, the stroboscopic image processing method provided by the embodiment of the present application can also be applied to RGGB, BGGR, etc.
- for RAW domain images in these other formats, the target mask can be obtained using a process similar to that described above for the GBRG-format RAW domain image, which is not specifically limited here.
- in the embodiments of the present application, when a first image with a color band is captured under a stroboscopic light source, the image can be separated into a high-frequency image and a low-frequency image, and the color band in the first low-frequency image can be filtered based on the target mask.
- during this filtering process, adjusting the mask values of the target mask reduces the debanding intensity applied to the black areas and shadow areas of the original image itself, thereby reducing the noise exposure caused to those areas during debanding and reducing the loss of original image quality.
- in addition, the RAW domain values of the banding area in the second low-frequency image are relatively increased, which means the relative contribution of the high-frequency image in the banding area is reduced.
- consequently, when the two images are superimposed, the proportion of the high-frequency image in the banding area is reduced.
- as a result, the final output image can maintain the clarity of the non-banding area through the high-frequency image while maintaining the debanding effect of the banding area through the debanded low-frequency image, thereby changing the ratio between high- and low-frequency information according to the banding intensity and achieving adaptive noise reduction.
- Figure 5 shows another stroboscopic image processing method provided by an embodiment of the present application. As shown in Figure 5, the method may include the following steps:
- Step 501 Obtain the RAW image with strobe taken by the user
- Step 502 Separate the RAW image into high and low frequencies to obtain a low-frequency image and a high-frequency image;
- Step 503 Determine the dark area mask z according to the low-frequency image;
- Step 504 Use the preset color band recognition model to identify the low-frequency image and determine the banding stripe mask;
- Step 505 Adjust the banding stripe mask according to the dark area mask z to obtain the target mask mask_z;
- Step 506 Use mask_z to perform banding elimination on the low-frequency image to obtain input1_dz;
- Step 507 Add the high-frequency image information to input1_dz to obtain the output image output;
- Step 508 Output.
- in this way, the noise in the banding area can be suppressed after strobe removal, and at the same time, because the banding area is determined and debanded based on the low-frequency image, the negative impact of image detail attributes on the strobe-removal algorithm can be avoided, making the predicted strobe area smoother and more uniform.
- secondly, a smooth dark area mask is output based on the dark areas of the original image, and the degree of banding removal in dark areas is automatically controlled together with the mask area predicted by the banding elimination network, which retains the original image quality while reducing noise.
- the entire process can be implemented in the camera pipeline, which can automatically eliminate strobing and take clear photos when the user shoots.
- the execution subject may be a stroboscopic image processing device.
- a stroboscopic image processing device performing a stroboscopic image processing method is used as an example to illustrate the stroboscopic image processing device provided by the embodiment of the present application.
- the stroboscopic image processing device 600 provided by the embodiment of the present application may include the following modules:
- the acquisition module 601 is used to acquire the first image captured under the stroboscopic light source, the first image includes a color band, and the first image is an original RAW domain image;
- the first processing module 602 is used to separate high- and low-frequency images on the first image to obtain a first high-frequency image and a first low-frequency image;
- a determining module 603 configured to determine a target mask according to the first low-frequency image, wherein the mask value in the target mask is negatively correlated with the brightness and the degree of color banding of the area corresponding to the mask value in the first low-frequency image;
- the second processing module 604 is configured to filter the color bands in the first low-frequency image according to the target mask to obtain a second low-frequency image, wherein the RAW domain values located in the first area of the second low-frequency image are greater than the RAW domain values located in the second area of the first low-frequency image.
- the first area includes the area where the color band is located and does not include the area where the dark area is located.
- the dark area is an area with a brightness value less than a preset brightness threshold, and the first area corresponds to the second area;
- the third processing module 605 is used to superimpose the second low-frequency image and the first high-frequency image to obtain an output image.
- the first processing module 602 includes:
- a first processing unit configured to perform mean filtering on the first image to obtain a first low-frequency image
- the second processing unit is configured to perform image removal processing on the first image based on the first low-frequency image to obtain a first high-frequency image.
- the first image includes a first sub-image and a second sub-image
- the first sub-image is captured based on a first shutter frequency
- the second sub-image is captured based on a second shutter frequency
- the first shutter frequency is related to the movement speed of the photographed object
- the second shutter frequency is related to the flash frequency of the stroboscopic light source
- the first processing module 602 includes:
- a third processing unit configured to separate high- and low-frequency images on the first sub-image to obtain a first sub-high-frequency image and a first sub-low-frequency image;
- a fourth processing unit configured to separate high- and low-frequency images on the second sub-image to obtain a second sub-high-frequency image and a second sub-low-frequency image;
- the first high-frequency image includes the first sub-high-frequency image and the second sub-high-frequency image
- the first low-frequency image includes the first sub-low-frequency image and the second sub-low-frequency image.
- determination module 603 includes:
- the first determination unit is configured to determine the mean RAW image corresponding to the first sub-low-frequency image based on the RAW image obtained by stacking the four types of data in the first sub-low-frequency image along the channel dimension.
- the mean RAW image includes the G channel data after average processing, the B channel data after average processing, and the R channel data after average processing.
- the G channel data is determined based on the average of the Gr channel data and the Gb channel data in the first sub-low-frequency image;
- the fifth processing unit is used to normalize the mean RAW image according to the black level value and the maximum bit-depth value of the mean RAW image to obtain a normalized RAW image;
- a second determination unit configured to determine a first mask based on the area in the normalized RAW image where the pixel values are less than or equal to the first preset threshold, wherein a smaller mask value in the first mask indicates a lower brightness of the corresponding area;
- a sixth processing unit configured to splice the first sub-low-frequency image and the second sub-low-frequency image on a channel to obtain an intermediate image
- the seventh processing unit is used to input the intermediate image into a preset color band recognition model to obtain a second mask, wherein the smaller the mask value in the second mask, the heavier the color band in the corresponding area;
- An adjustment unit configured to adjust the second mask according to the first mask to obtain a target mask.
- the adjustment unit is specifically used for:
- the mask value corresponding to the target area in the second mask is reduced to obtain a target mask, wherein in the first mask, the mask value corresponding to the target area is less than or equal to the second preset threshold, and in the second mask, the mask value corresponding to the target area is less than or equal to the third preset threshold.
- determination module 603 includes:
- a first acquisition unit configured to acquire the first mask value corresponding to the Gr data channel in the second mask
- a second acquisition unit configured to acquire a second mask value corresponding to the Gb data channel in the second mask
- an update unit configured to update the mask values in the second mask corresponding to the Gr data channel and the Gb data channel according to the average value of the first mask value and the second mask value.
- the adjustment unit is specifically used for:
- the updated second mask is adjusted according to the first mask to obtain a target mask.
- the stroboscopic image processing device in the embodiment of the present application may be an electronic device or a component in the electronic device, such as an integrated circuit or chip.
- the electronic device may be a terminal, or may be a device other than a terminal.
- for example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a handheld computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), etc., and may also be a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine or a self-service machine, etc., which is not specifically limited in the embodiments of the present application.
- the stroboscopic image processing device in the embodiment of the present application may be a device with an operating system.
- the operating system can be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of this application.
- the stroboscopic image processing device provided by the embodiments of the present application can implement each process implemented by the method embodiments shown in Figures 1 to 5, and can achieve the same beneficial effects. To avoid duplication, they will not be described again here.
- this embodiment of the present application also provides an electronic device 700, including a processor 701 and a memory 702.
- the memory 702 stores programs or instructions that can be run on the processor 701.
- when the programs or instructions are executed by the processor 701, each step of the above stroboscopic image processing method embodiment is implemented and the same technical effect can be achieved; to avoid repetition, the details will not be described here.
- the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
- FIG. 8 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
- the electronic device 800 includes but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810 and other components.
- the electronic device 800 may also include a power supply (such as a battery) that supplies power to various components.
- the power supply may be logically connected to the processor 810 through a power management system, so that functions such as charging, discharging and power consumption management are implemented through the power management system.
- the structure of the electronic device shown in Figure 8 does not constitute a limitation on the electronic device.
- the electronic device may include more or fewer components than shown in the figure, combine certain components, or arrange components differently, which will not be described again here.
- the input unit 804 is used to obtain the first image captured under the stroboscopic light source, the first image includes a color band, and the first image is an original RAW domain image;
- Processor 810 configured to perform high- and low-frequency image separation on the first image to obtain a first high-frequency image and a first low-frequency image;
- the processor 810 is further configured to determine a target mask according to the first low-frequency image, wherein the mask value in the target mask is negatively correlated with the brightness and color banding degree of the area corresponding to the mask value in the first low-frequency image;
- the processor 810 is further configured to filter the color bands in the first low-frequency image according to the target mask to obtain a second low-frequency image, wherein the RAW domain values located in the first area of the second low-frequency image are larger than the RAW domain values located in the second area of the first low-frequency image.
- the first area includes the area where the color band is located and does not include the area where the dark area is located.
- the dark area is an area whose brightness value is less than the preset brightness threshold.
- the first area corresponds to the second area;
- the processor 810 is also configured to superimpose the second low-frequency image and the first high-frequency image to obtain an output image.
- the processor 810 performs high- and low-frequency image separation on the first image to obtain the first high-frequency image and the first low-frequency image, including:
- the first image includes a first sub-image and a second sub-image
- the first sub-image is captured based on a first shutter frequency
- the second sub-image is captured based on a second shutter frequency
- the first shutter frequency is related to the movement speed of the photographed object
- the second shutter frequency is related to the flash frequency of the stroboscopic light source
- the processor 810 performs high- and low-frequency image separation on the first image to obtain the first high-frequency image and the first low-frequency image, including:
- the first high-frequency image includes the first sub-high-frequency image and the second sub-high-frequency image
- the first low-frequency image includes the first sub-low-frequency image and the second sub-low-frequency image.
- the processor 810 determines the target mask according to the first low-frequency image, including:
- optionally, the mean RAW image corresponding to the first sub-low-frequency image is determined, where the mean RAW image includes the G channel data after average processing, the B channel data after average processing and the R channel data after average processing.
- the G channel data is determined based on the average value of the Gr channel data and the Gb channel data in the first sub-low-frequency image;
- the second mask is adjusted according to the first mask to obtain a target mask.
- the processor 810 adjusts the second mask according to the first mask to obtain the target mask, which includes:
- the mask value corresponding to the target area in the second mask is reduced to obtain a target mask, wherein in the first mask, the mask value corresponding to the target area is less than or equal to the second preset threshold, and in the second mask, the mask value corresponding to the target area is less than or equal to the third preset threshold.
- the step of determining the target mask according to the first low-frequency image performed by the processor 810 further includes:
- adjusting the second mask according to the first mask to obtain the target mask includes:
- the updated second mask is adjusted according to the first mask to obtain a target mask.
- the electronic device 800 provided by the embodiment of the present application can implement the process executed by each module in the stroboscopic image processing device as shown in Figure 6, and can achieve the same beneficial effects. To avoid duplication, the details will not be described again.
- the input unit 804 may include a graphics processor (Graphics Processing Unit, GPU) 8041 and a microphone 8042.
- the graphics processor 8041 processes the image data of still pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
- the display unit 806 may include a display panel 8061, which may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
- the user input unit 807 includes a touch panel 8071 and at least one of other input devices 8072 .
- the touch panel 8071 is also known as a touch screen.
- the touch panel 8071 may include two parts: a touch detection device and a touch controller.
- Other input devices 8072 may include but are not limited to physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be described again here.
- Memory 809 can be used to store software programs as well as various data.
- Memory 809 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area can store an operating system, and application programs or instructions required for at least one function (such as a sound playback function, an image playback function), etc.
- memory 809 may include volatile memory or non-volatile memory, or memory 809 may include both volatile and non-volatile memory.
- the non-volatile memory can be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), or an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM).
- Volatile memory can be random access memory (Random Access Memory, RAM), static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDRSDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchronous link dynamic random access memory (Synchlink DRAM, SLDRAM) or direct memory bus random access memory (Direct Rambus RAM, DRRAM).
- the processor 810 may include one or more processing units; optionally, the processor 810 integrates an application processor and a modem processor, where the application processor mainly handles operations related to the operating system, user interface, application programs, etc., Modem processors mainly process wireless communication signals, such as baseband processors. It can be understood that the above modem processor may not be integrated into the processor 810.
- Embodiments of the present application also provide a readable storage medium.
- Programs or instructions are stored on the readable storage medium.
- when the program or instructions are executed by a processor, each process of the above stroboscopic image processing method embodiment is implemented and the same technical effect can be achieved; to avoid repetition, it will not be repeated here.
- the processor is the processor in the electronic device described in the above embodiment.
- the readable storage medium includes computer readable storage media, such as computer read-only memory ROM, random access memory RAM, magnetic disk or optical disk, etc.
- An embodiment of the present application further provides a chip, which includes a processor and a communication interface.
- the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement each process of the above stroboscopic image processing method embodiment, and can achieve the same technical effect. To avoid repetition, the details will not be described here.
- the chip mentioned in the embodiments of this application may also be called a system-on-chip, a system chip, a chip system or a system-on-a-chip, etc.
- Embodiments of the present application provide a computer program product.
- the program product is stored in a storage medium.
- the program product is executed by at least one processor to implement each process of the above stroboscopic image processing method embodiment and can achieve the same technical effect; to avoid repetition, it will not be repeated here.
- the methods of the above embodiments can be implemented by means of software plus a necessary general hardware platform; of course, they can also be implemented by hardware, but in many cases the former is the better implementation.
- the technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a computer software product.
- the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disk) and includes several instructions to cause an electronic device (which can be a mobile phone, a computer, a server, a network device, etc.) to execute the methods described in the various embodiments of this application.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
- Stroboscope Apparatuses (AREA)
Abstract
This application discloses a stroboscopic image processing method and apparatus, an electronic device and a readable storage medium, and belongs to the field of image processing technology. The method includes: obtaining a first image captured under a stroboscopic light source, where the first image includes a color band and is an original RAW domain image; performing high- and low-frequency image separation on the first image to obtain a first high-frequency image and a first low-frequency image; determining a target mask according to the first low-frequency image, where the mask value in the target mask is negatively correlated with the brightness and the degree of color banding of the area corresponding to that mask value in the first low-frequency image; filtering the color bands in the first low-frequency image according to the target mask to obtain a second low-frequency image, where the RAW domain values located in a first area of the second low-frequency image are larger than the RAW domain values located in a second area of the first low-frequency image, the first area corresponds to the second area, and both include the area where the color band is located and exclude the area where the dark area is located; and superimposing the second low-frequency image and the first high-frequency image to obtain an output image.
Description
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 202210796573.4, filed in China on July 6, 2022, the entire contents of which are incorporated herein by reference.
This application belongs to the field of image processing technology, and specifically relates to a stroboscopic image processing method and apparatus, an electronic device and a readable storage medium.
In the related art, in a shooting environment affected by flickering light, strobe pictures are easily captured, that is, black and white stripes appear in the picture.
To overcome this problem, in the related art the flash frequency of the current flickering light can be estimated by hardware, and the shutter time can be set to 1 or 2 times the flicker period, so as to avoid the banding phenomenon in the captured pictures.
However, when shooting a moving object, the shutter speed needs to be increased to improve the clarity of the captured image, and the shutter time used to overcome the banding phenomenon cannot satisfy the shutter time required for shooting moving objects; when the moving object moves too fast, the moving object in the image becomes blurred, and the need to capture fast-moving objects clearly cannot be met.
In addition, the related art can also adopt a deep-learning RAW (raw image file) domain image processing scheme that performs banding elimination on the entire image. However, because the banding stripe area is relatively dark, its noise is relatively large; if banding elimination is performed on the entire image, the noise in the darker areas is easily exposed after banding elimination and the original image quality is lost.
In summary, the strobe-removal methods in the related art have poor performance.
Summary of the Invention
The purpose of the embodiments of this application is to provide a stroboscopic image processing method and apparatus, an electronic device and a readable storage medium, which can reduce the noise exposure of darker areas and reduce the loss of original image quality in the process of eliminating banding stripes in an image, thereby achieving a good debanding effect.
In a first aspect, an embodiment of this application provides a stroboscopic image processing method, which includes:
obtaining a first image captured under a stroboscopic light source, the first image including a color band and being an original RAW domain image;
performing high- and low-frequency image separation on the first image to obtain a first high-frequency image and a first low-frequency image;
determining a target mask according to the first low-frequency image, wherein the mask value in the target mask is negatively correlated with the brightness and the degree of color banding of the area corresponding to the mask value in the first low-frequency image;
filtering the color bands in the first low-frequency image according to the target mask to obtain a second low-frequency image, wherein the RAW domain values located in a first area of the second low-frequency image are larger than the RAW domain values located in a second area of the first low-frequency image, the first area includes the area where the color band is located and does not include the area where the dark area is located, the dark area is an area whose brightness value is less than a preset brightness threshold, and the first area corresponds to the second area;
superimposing the second low-frequency image and the first high-frequency image to obtain an output image.
In a second aspect, an embodiment of this application provides a stroboscopic image processing apparatus, which includes:
an acquisition module, configured to obtain a first image captured under a stroboscopic light source, the first image including a color band and being an original RAW domain image;
a first processing module, configured to perform high- and low-frequency image separation on the first image to obtain a first high-frequency image and a first low-frequency image;
a determining module, configured to determine a target mask according to the first low-frequency image, wherein the mask value in the target mask is negatively correlated with the brightness and the degree of color banding of the area corresponding to the mask value in the first low-frequency image;
a second processing module, configured to filter the color bands in the first low-frequency image according to the target mask to obtain a second low-frequency image, wherein the RAW domain values located in a first area of the second low-frequency image are larger than the RAW domain values located in a second area of the first low-frequency image, the first area includes the area where the color band is located and does not include the area where the dark area is located, the dark area is an area whose brightness value is less than a preset brightness threshold, and the first area corresponds to the second area;
a third processing module, configured to superimpose the second low-frequency image and the first high-frequency image to obtain an output image.
In a third aspect, an embodiment of this application provides an electronic device, which includes a processor and a memory, the memory storing programs or instructions that can be run on the processor, and when the programs or instructions are executed by the processor, the steps of the method according to the first aspect are implemented.
In a fourth aspect, an embodiment of this application provides a readable storage medium on which programs or instructions are stored, and when the programs or instructions are executed by a processor, the steps of the method according to the first aspect are implemented.
In a fifth aspect, an embodiment of this application provides a chip, which includes a processor and a communication interface, the communication interface being coupled to the processor, and the processor being used to run programs or instructions to implement the method according to the first aspect.
In a sixth aspect, an embodiment of this application provides a computer program product, which is stored in a storage medium and is executed by at least one processor to implement the method according to the first aspect.
In the embodiments of this application, when a first image with a color band is captured under a stroboscopic light source, the image can be separated into a high-frequency image and a low-frequency image, and the color band in the first low-frequency image can be filtered based on the target mask. This filtering process can be understood as the process of removing the color band/stroboscopic stripes in the banding area (that is, the area where the banding is located) of the first low-frequency image, i.e., a debanding process. During this debanding process, adjusting the mask values of the target mask can reduce the debanding intensity applied to the black areas and shadow areas of the original image itself, thereby reducing the noise exposure caused to the black areas and shadow areas of the original image during debanding and reducing the loss of original image quality. In addition, because the RAW domain values of the banding area are increased during debanding, the RAW domain values of the banding area (i.e., the first area) in the second low-frequency image are relatively increased, which means the RAW domain values of the high-frequency image in the banding area are relatively reduced. In this way, when the second low-frequency image with increased RAW domain values in the banding area is superimposed with the high-frequency image whose RAW domain values in the banding area are unchanged, the proportion of the high-frequency image in the banding area is reduced, so that the final output image can maintain the clarity of the non-banding area through the high-frequency image while maintaining the debanding effect of the banding area through the debanded low-frequency image, thereby changing the ratio between high- and low-frequency information according to the banding intensity and achieving adaptive noise reduction.
Figure 1 is a flow chart of a stroboscopic image processing method provided by an embodiment of this application;
Figure 2 is a schematic diagram of the data processing procedure of a stroboscopic image processing method provided by an embodiment of this application;
Figure 3 is a schematic diagram of the processing procedure of the mean RAW image;
Figure 4 is a schematic structural diagram of the preset color band recognition model;
Figure 5 is a flow chart of another stroboscopic image processing method provided by an embodiment of this application;
Figure 6 is a schematic structural diagram of a stroboscopic image processing apparatus provided by an embodiment of this application;
Figure 7 is a schematic structural diagram of an electronic device provided by an embodiment of this application;
Figure 8 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application.
The technical solutions in the embodiments of this application will be described clearly below with reference to the accompanying drawings in the embodiments of this application. Obviously, the described embodiments are only some, rather than all, of the embodiments of this application. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of this application fall within the protection scope of this application.
The terms "first", "second", etc. in the description and claims of this application are used to distinguish similar objects and are not used to describe a specific order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of this application can be implemented in orders other than those illustrated or described here; moreover, the objects distinguished by "first", "second", etc. are usually of one type, and the number of objects is not limited; for example, there may be one or more first objects. In addition, "and/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The stroboscopic image processing method and apparatus, electronic device and readable storage medium provided by the embodiments of this application are described in detail below through specific embodiments and their application scenarios with reference to the accompanying drawings.
Referring to Figure 1, the execution subject of the stroboscopic image processing method provided by the embodiments of this application may be an electronic device, such as a mobile phone or a camera; alternatively, the execution subject may be a tablet computer, a notebook computer, a handheld computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), etc., and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine or a self-service machine, etc., which is not specifically limited in the embodiments of this application.
As shown in Figure 1, the stroboscopic image processing method provided by the embodiments of this application may include the following steps:
Step 101: Obtain a first image captured under a stroboscopic light source, where the first image includes a color band and the first image is an original RAW domain image.
In this embodiment, the first image may include one image or at least two images. When the first image includes at least two images, at least one of them has a color band that needs to be removed. When the first image includes one image, that image is an image with a color band, and the stroboscopic image processing method provided by the embodiments of this application is used to remove the color band in that image.
Step 102: Perform high- and low-frequency image separation on the first image to obtain a first high-frequency image and a first low-frequency image.
The first low-frequency image can be understood as the lower-definition part of the image content of the first image, and the first high-frequency image can be the higher-definition part of the image content of the first image.
Optionally, performing high- and low-frequency image separation on the first image to obtain the first high-frequency image and the first low-frequency image includes:
performing mean filtering on the first image to obtain the first low-frequency image;
performing image removal processing on the first image based on the first low-frequency image to obtain the first high-frequency image.
In this implementation, the first low-frequency image is obtained by performing mean filtering on the first image (a RAW domain image); for example, mean filtering with a kernel size of 5×5 is performed on the first image to obtain the first low-frequency image, and this first low-frequency image may be a single-channel image in the RAW domain. The first high-frequency image is the RAW domain image obtained by subtracting the first low-frequency image from the first image.
In implementation, the ratio between the first low-frequency image and the first high-frequency image may be 1:1, 1:2, 2:1, etc., which is not specifically limited here.
Step 103: Determine a target mask according to the first low-frequency image, where the mask value in the target mask is negatively correlated with the brightness and the degree of color banding of the area corresponding to the mask value in the first low-frequency image.
In implementation, the dimensions of the target mask and the first low-frequency image may be the same, and each mask value in the target mask may correspond to one pixel in the first low-frequency image; in this case, the area in the first low-frequency image corresponding to a mask value in the target mask may be the pixel in the first low-frequency image corresponding to that mask value.
Of course, the dimensions of the target mask and the first low-frequency image may also be different; in this case, the area in the first low-frequency image corresponding to a mask value in the target mask may be at least one pixel or part of a pixel in the first low-frequency image, which is not specifically limited here.
In one implementation, an image without banding can be obtained by shooting at a shutter frequency corresponding to the flash frequency of the stroboscopic light source, and this image can then be compared with the image with banding to obtain the banding area and the banding degree corresponding to each pixel position in the banding area. In addition, the dark area in the first low-frequency image (that is, the area whose brightness is less than or equal to the preset brightness threshold) and the brightness corresponding to each pixel position in the dark area can be determined according to the black level value of the first low-frequency image. In this way, the target mask can be jointly determined according to the dark area and the banding area, so that the mask value in the target mask is negatively correlated with the brightness of the corresponding area and with the degree of color banding of the corresponding area. Thus, in the process of strobe removal using the target mask, adaptive noise reduction can be achieved and the degree of strobe removal in dark areas can be reduced, which improves the processed image quality and avoids over-aggressive strobe elimination.
Of course, in practical applications, the banding area and/or dark area of the image with banding can also be obtained in other ways, for example by using an artificial intelligence (AI) network model to analyze the image with banding and obtain its banding area and/or dark area, which is not specifically limited here.
步骤104、根据所述目标掩码对所述第一低频图像中的色带进行过滤处理,得到第二低频图像,其中,所述第二低频图像中位于第一区域内的RAW域数值比所述第一低频图像中位于第二区域内的RAW域数值大,所述第一区域包括色带所在区域且不包括暗区所在区域,所述暗区为亮度值小于预设亮度阈值的区域,所述第一区域与所述第二区域对应。
在实施中,上述第一区域可以是第二低频图像中的banding区域,上述第二区域可以是第一低频图像中的banding区域,所述第一区域与所述第二区域对应可以是:在第一低频图像与第二低频图像尺寸相同的情况下,所述第一区域在第二低频图像中的位置与所述第二区域在第一低频图像中的位置相同,例如:若将第一低频图像与第二低频图像叠加,则第一区域与第二区域重叠。
步骤105、将所述第二低频图像和所述第一高频图像进行叠加处理,得到输出图像。
在实施中,在利用目标掩码对所述第一低频图像中的色带(非暗区)进
行过滤处理后，会使得到的第二低频图像中除了暗区外的色带所在区域的RAW域数值增大，这样，在将第二低频图像和所述第一高频图像进行叠加处理时，能够降低第一高频图像在该除了暗区外的色带所在区域的RAW域数值的占比，从而使得到的输出图像中，更多地保留第二低频图像中进行了debanding后的色带区域，减小第一高频图像中的banding对输出图像的影响，这样，能够使输出图像既能够保留第一高频图像中的高频内容（例如：需要采用高频对运动物体进行拍摄时，可以通过第一高频图像来提升输出图像中运动物体对应的像素内容的清晰程度），又能够通过提升第二低频图像在debanding区域的RAW域数值的占比，实现debanding效果。
值得提出的是,上述得到的输出图像可以是RAW域图像,也可以是RGB等其他格式的图像,例如:假设执行本申请实施例提供的频闪图像处理方法的设备为手机、相机等具有摄像功能的设备,则该设备在将所述第二低频图像和所述第一高频图像进行叠加处理,得到消除频闪后的RAW图后,可以将该消除频闪后的RAW图返回相机链路,经过图像信号处理(Image Signal Processing,ISP)仿真将该RAW图转化为RGB图像后,通过显示等方式将RGB图返回给用户。
作为一种可选的实施方式,所述第一图像包括第一子图像(为了便于说明,以下实施例中将第一子图像标记为input1)和第二子图像(为了便于说明,以下实施例中将第二子图像标记为input2),所述第一子图像基于第一快门频率拍摄得到,所述第二子图像基于第二快门频率拍摄得到,所述第一快门频率与拍摄对象的运动速度相关,所述第二快门频率与所述频闪光源的闪光频率相关。
其中,基于与所述频闪光源的闪光频率相关的第二快门频率,可以是闪光频率的1倍或2倍,这样,可以使第二子图像中不具有banding条纹。
在实际应用中,灯光的闪光频率通常为60Hz或50Hz,则第二快门频率可以是60Hz或50Hz的1倍或2倍,例如:以闪光频率为60Hz为例,第二
快门频率对应的快门速度可以是1/120(s)或1/60(s)。
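作为一个简单的示意，下面的Python函数根据闪光频率计算上述第二快门频率对应的快门时间，其中函数名与参数均为示例性假设，并非对本实施方式的限定：

```python
def flicker_matched_shutter(flash_hz: float, multiple: int = 2) -> float:
    """返回与闪光频率匹配的快门时间（单位：秒）。
    multiple取1或2，对应文中闪光频率的1倍或2倍，例如60Hz、multiple=2时返回1/120s。"""
    return 1.0 / (flash_hz * multiple)
```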
在实施中,可以通过检测或者接收指示等方式,获取灯光的闪光频率。
但是,该第二快门频率通常无法满足所要拍摄的快速运动物体的需求,例如:当运动物体的运动速度过快时,会出现运动物体模糊的现象,无法满足快速运动物体的拍摄清晰需求。基于此,上述第一快门频率可以是与拍摄对象的运动速度相匹配的快门频率,基于该第一快门频率可以清晰的拍摄运动的拍摄对象,即第一子图像中的拍摄对象清晰。
在拍摄得到第一子图像和第二子图像之后,可以分别对第一子图像和第二子图像进行高低频分离,并使用第一子图像和第二子图像分离得到的两个低频图像来确定目标掩码,然后,利用目标掩码对从第一子图像中分离得到的低频图像进行debanding处理,并将从第一子图像中分离得到的高频图像(具有清晰的拍摄对象)与debanding后的低频图像进行叠加,得到输出图像。
作为一种可选的实施方式,所述对所述第一图像进行高低频图像分离,得到第一高频图像和第一低频图像,包括:
对所述第一子图像(input1)进行高低频图像分离,得到第一子高频图像(为了便于说明,以下实施例中将第一子高频图像标记为input1_g)和第一子低频图像(为了便于说明,以下实施例中将第一子低频图像标记为input1_d);
对所述第二子图像(input2)进行高低频图像分离,得到第二子高频图像(为了便于说明,以下实施例中将第二子高频图像标记为input2_g)和第二子低频图像(为了便于说明,以下实施例中将第二子低频图像标记为input2_d);
其中,所述第一高频图像包括所述第一子高频图像和所述第二子高频图像,所述第一低频图像包括所述第一子低频图像和所述第二子低频图像。
本实施方式中,通过将不具有banding的第二子高频图像与具有banding的第一子低频图像进行对比,便可以得到banding区域。例如:如图2所示,本申请实施例的数据处理过程可以包括:根据input1_d和input2_d确定banding区域掩码→根据input1_d中的亮度较暗区域的暗区掩码和banding区
域掩码来确定目标掩码→采用目标掩码对input1_d进行debanding处理→将input1_g与debanding处理后的input1_d进行叠加处理,得到输出图像(以下标记为output)。
可选地,在所述第一低频图像的存储格式为GBRG的情况下,所述根据所述第一低频图像,确定目标掩码,包括:
根据对所述第一子低频图像input1_d中的四种类型的数据(如：Gr、Gb、R、B)在通道上进行堆积处理后的RAW图，确定第一子低频图像对应的均值RAW图(为了便于说明，以下实施例中将均值RAW图标记为input1_d_avg)，所述均值RAW图包括平均值处理后的G通道数据、平均值处理后的B通道数据和平均值处理后的R通道数据，所述G通道数据根据所述第一子低频图像中的Gr通道数据和Gb通道数据的平均值确定；
根据所述均值RAW图的黑电平数值和最大位数,对所述均值RAW图进行归一化处理,得到归一化RAW图(为了便于说明,以下实施例中将归一化RAW图标记为RAW图(x));
根据所述归一化RAW图中的像素值小于或者等于第一预设阈值(为了便于说明,以下实施例中将第一预设阈值标记为s)的区域,确定第一掩码(为了便于说明,以下实施例中将第一掩码标记为z),其中,所述第一掩码中的掩码值越小,表示对应区域的亮度越低;
将所述第一子低频图像和所述第二子低频图像在通道上进行拼接后,得到中间图像;
将所述中间图像输入预设色带识别模型,得到第二掩码(为了便于说明,以下实施例中将第二掩码标记为mask),其中,所述第二掩码中的掩码值越小,表示对应区域的色带越重;
根据所述第一掩码调整所述第二掩码,得到目标掩码(为了便于说明,以下实施例中将目标掩码标记为maskz)。
其中,所述根据所述第一低频图像,确定目标掩码的步骤,可以包括以
下过程:
1、假设input1和input2的RAW图的大小为W×H。input1_d为RAW域的单通道图像,会存在特定的Bayer格式,本实施例中假设为GBRG格式。如图3所示,首先对RAW图进行打包(pack)操作,将4个不同类型的数据(Gr、Gb、R、B)在通道上堆叠在一起,然后对其中的Gr和Gb两个通道求均值得到平均的G通道,最后将GBR三通道的数值求平均得到均值RAW图(input1_d_avg),input1_d_avg的大小为W/2×H/2。
2、将均值RAW图进行归一化处理，减去RAW图黑电平数值，然后除以该均值RAW图的最大位数所对应的数值（即2^bit，其中bit为RAW图的位数），得到取值为0~1之间的归一化后的RAW图(x)。
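结合上述过程1和过程2，下面给出一个示意性的Python（NumPy）实现片段；其中GBRG各通道在像素网格中的具体位置、黑电平black_level以及位数bit均为示例性假设，并非对本实施方式的限定：

```python
import numpy as np

def mean_raw_normalized(raw: np.ndarray, black_level: float, bit: int) -> np.ndarray:
    """由GBRG格式的RAW域低频图像计算均值RAW图input1_d_avg并归一化到0~1。
    raw大小为H×W，返回大小为H/2×W/2的归一化RAW图x。"""
    raw = raw.astype(np.float32)
    gb = raw[0::2, 0::2]   # 第一行的G（Gb），位置为假设
    b  = raw[0::2, 1::2]   # B
    r  = raw[1::2, 0::2]   # R
    gr = raw[1::2, 1::2]   # 第二行的G（Gr）
    g = (gr + gb) / 2.0            # Gr、Gb两通道求均值得到G通道
    avg = (g + b + r) / 3.0        # G、B、R三通道求平均得到均值RAW图
    x = (avg - black_level) / float(2 ** bit)  # 减去黑电平并按位数归一化
    return np.clip(x, 0.0, 1.0)
```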
3、根据需要或拍摄场景设定暗区抑制的第一预设阈值s,对x进行以下计算:
其中，x为输入的归一化后的图像，s为第一预设阈值(即暗区抑制阈值)，clamp()表示数值截取，通过上述公式计算，可以将归一化后的RAW图中的数值截取至0~1之间。其中，当x中的像素值大于s时，表示当前该位置的亮度无需抑制，是正常的亮度范围；当x中的像素值低于s时，表示该位置处于暗区，后续该区域的banding消除需要降低消除程度，并且亮度变化是平滑连续的，数值越小，表示该点越暗。
由于指数函数的负数区间均匀的过渡范围很小,可以采用多次求指数值扩大数值渐变区间和范围,使得最终的第一掩码z的取值范围遍布0~1之间。例如:可以采用以下公式进行n次重复的求指数数值,得到第一掩码z:
z=repeat(2^y, n)
4、将input1_d和input2_d通过pack操作后,在通道上进行拼接,归一化并且调整尺寸(resize)到W×H大小后,输入预设色带识别模型,便可以获取该预设色带识别模型输出的第二掩码(banding条纹mask)。该预设色带
识别模型的模型结构可以如图4所示,该预设色带识别模型输出的掩码的大小可以是W×H×4,该第二掩码中的掩码值越小表示该位置的banding程度越重,反之掩码值越大表示该位置的banding程度越轻,若掩码值等于1则表示该位置不存在banding,在此之后,可以将该banding条纹mask resize到W/2×H/2×4大小。
5、根据z对预设色带识别模型得出的banding条纹mask进行暗区调整操作,得到目标掩码maskz。其中,z中的掩码值越小(接近0)表示该点越暗,越大(接近1)表示该点越亮,而mask中的掩码值越小(接近0)表示该位置banding越重,反之越大(接近1)表示程度越轻。
可选地,在上述过程5之前,还可以对第二掩码中的两个G通道的RAW域数值进行求平均处理,这样,可以使Gr和Gb的相对比例保持一致,避免后续banding消除后导致RAW图出现网格现象。
所述根据所述第一低频图像,确定目标掩码,还包括:
获取所述第二掩码中与Gr数据通道对应的第一掩码值;
获取所述第二掩码中与Gb数据通道对应的第二掩码值;
根据所述第一掩码值和所述第二掩码值的平均值，更新所述第二掩码中与所述Gr数据通道和所述Gb数据通道对应的掩码值；
所述根据所述第一掩码调整所述第二掩码,得到目标掩码,包括:
根据所述第一掩码调整更新后的所述第二掩码,得到目标掩码。
例如：通过以下公式来更新所述第二掩码中与所述Gr数据通道和所述Gb数据通道对应的掩码值：
mask[Gr]=mask[Gb]=(mask[Gr]+mask[Gb])/2
其中,mask[Gr]表示第一掩码值,mask[Gb]表示第二掩码值。
本实施方式中，通过对第二掩码中的两个G通道的RAW域数值进行求平均处理，可以避免后续banding消除后导致RAW图出现网格现象。
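上述求平均处理可以用如下示意性的Python片段表示，其中第二掩码mask的通道顺序（即Gr、Gb所对应的通道索引gr_idx、gb_idx）为示例性假设：

```python
import numpy as np

def average_green_mask(mask: np.ndarray, gr_idx: int, gb_idx: int) -> np.ndarray:
    """将第二掩码中Gr、Gb两个通道的掩码值替换为二者的平均值，
    使两个G通道的相对比例保持一致，避免banding消除后RAW图出现网格现象。"""
    out = mask.copy()
    g_avg = (out[..., gr_idx] + out[..., gb_idx]) / 2.0
    out[..., gr_idx] = g_avg
    out[..., gb_idx] = g_avg
    return out
```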
可选地,所述根据所述第一掩码调整所述第二掩码,得到目标掩码,包括:
根据所述第一掩码减小所述第二掩码中与目标区域对应的掩码值,得到目标掩码,其中,所述第一掩码中,与所述目标区域对应的掩码值小于或者等于第二预设阈值,且所述第二掩码中,与所述目标区域对应的掩码值小于或者等于第三预设阈值。
例如:采用以下公式来确定目标掩码maskz:
maskz=1-(1-mask)×z
上述公式中,通过将mask取反之后与z逐像素相乘再取反,相当于是banding区域并且位于暗区时,该位置的掩码值就会被缩小,而非暗区则与banding条纹mask中的掩码值保持不变。
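上述maskz的计算可以用如下示意性的Python片段表示（假设mask与z已调整为相同大小，取值均在0~1之间）：

```python
import numpy as np

def fuse_masks(mask: np.ndarray, z: np.ndarray) -> np.ndarray:
    """按maskz = 1 - (1 - mask) × z融合banding条纹mask与暗区掩码z：
    非暗区（z接近1）处maskz与mask保持一致；暗区（z接近0）处对banding的消除力度被降低。"""
    return 1.0 - (1.0 - mask) * z
```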
在得到上述maskz后,可以将input1_d逐像素除以maskz,得到消除banding后的第二低频图像(为了便于说明,以下实施例中,将第二低频图像标记为input1dz)。例如:可以通过以下公式来确定input1dz:
input1dz=input1_d/maskz
由于maskz中的掩码值在0~1之间,基于上述公式,可以放大input1dz中banding区域的像素值。
此时,上述将所述第二低频图像和所述第一高频图像进行叠加处理,得到输出图像,可以表示为以下公式:
output=input1dz+input1_g
值得提出的是,由于input1dz中banding区域的像素值有所放大,而input1_g中的像素值与banding消除前保持一致,这样,在两者叠加后,相当于减小了output中的高频细节的占比,从而达到banding区域降噪的目的。此外,该高低频降噪的方法仅作用于非暗区的banding区域,不会影响图像中非banding区域的清晰度,从而能够实现自适应对banding区域进行噪声抑制。
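上述banding消除与高低频叠加的过程可以用如下示意性的Python片段表示（假设maskz已调整为与input1_d相同的大小，eps为避免除零而引入的示例性保护项）：

```python
import numpy as np

def deband_and_merge(input1_d: np.ndarray, input1_g: np.ndarray,
                     maskz: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """input1dz = input1_d / maskz：maskz取值在0~1之间，除法会放大banding区域的像素值；
    再叠加未改变的高频图像input1_g，得到输出图像output。"""
    input1dz = input1_d / np.maximum(maskz, eps)  # 逐像素放大banding区域
    return input1dz + input1_g                    # 叠加高频信息，得到output
```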
需要说明的是,在实际应用中,低频图像input1_d为RAW域的单通道图像,会存在特定的Bayer格式,例如:GBRG、RGGB、BGGR等。本申请实施例提供的频闪图像处理方法以GBRG格式的RAW域图像为例进行举例说明,但是,在实施中,本申请实施例提供的频闪图像处理方法,也可以应用于RGGB、BGGR等其他格式的RAW域图像,且可以采用与上述GBRG格式的RAW域图像相似的处理过程得到目标掩码,在此不作具体限定。
在本申请实施例中,在频闪光源下拍摄到带有色带的第一图像时,可以将该图像分为高频图像和低频图像,并基于目标掩码对所述第一低频图像中的色带进行过滤处理,该过滤过程中,通过调节目标掩码的掩码值可以降低对原图本身的黑色区域和阴影区域的debanding力度,从而减少debanding过程中对原图本身的黑色区域和阴影区域造成的噪声暴露,以及减少原图画质的丢失。此外,由于debanding过程中,增大了banding区域的RAW域数值,使得第二低频图像中banding区域的RAW域数值相对增大,即高频图像中banding区域的RAW域数值相对减小,这样,在将banding区域的RAW域数值增大后的第二低频图像与banding区域的RAW域数值未改变的高频图像进行叠加处理时,就会使得高频图像在banding区域的占比降低,进而使得最终得到的输出图像既能够通过高频图像保持非banding区域的清晰度,又能够通过debanding后的低频图像保持banding区域的debanding效果,从而实现了根据banding强度改变高低频信息之间的比例,达到自适应降噪的目的。
请参阅图5,是本申请实施例提供的另一种频闪图像处理方法,如图5所示,该方法可以包括以下步骤:
步骤501、获取用户拍摄的带频闪的RAW图;
步骤502、对RAW图进行高低频分离,得到低频图像和高频图像;
步骤503、根据低频图像确定暗区掩码z;
步骤504、采用预设色带识别模型对低频图像进行识别,确定banding条纹mask;
步骤505、根据暗区掩码z调整banding条纹mask,得到目标掩码maskz;
步骤506、采用maskz对低频图像进行banding消除,得到input1dz;
步骤507、添加高频图像信息到input1dz中,得到输出图像output;
步骤508、输出output。
本申请实施例中，通过高低频图像分离，可以在去频闪后抑制banding区域的噪声，并且同时在基于低频图像确定暗区和banding区域的过程中，可以避免图像细节属性对于去频闪算法的负面影响，使得预测的频闪区域更加平滑均匀；其次是根据原始图像的暗区，输出一个平滑的暗区掩码，并结合banding消除网络预测的mask区域自动控制暗区的去除程度，保留原始画质的同时降低噪声。整个过程可以在相机链路中实现，可以在用户拍摄时自动消除频闪并拍摄出清晰的照片。
本申请实施例提供的频闪图像处理方法,执行主体可以为频闪图像处理装置。本申请实施例中以频闪图像处理装置执行频闪图像处理方法为例,说明本申请实施例提供的频闪图像处理装置。
请参阅图6,本申请实施例提供的频闪图像处理装置600可以包括以下模块:
获取模块601,用于获取频闪光源下拍摄得到的第一图像,所述第一图像包括色带,所述第一图像为原始RAW域图像;
第一处理模块602,用于对所述第一图像进行高低频图像分离,得到第一高频图像和第一低频图像;
确定模块603,用于根据所述第一低频图像,确定目标掩码,其中,所述目标掩码中的掩码值与所述掩码值在所述第一低频图像中对应的区域的亮度以及色带程度负相关;
第二处理模块604,用于根据所述目标掩码对所述第一低频图像中的色带进行过滤处理,得到第二低频图像,其中,所述第二低频图像中位于第一区域内的RAW域数值比所述第一低频图像中位于第二区域内的RAW域数值
大,所述第一区域包括色带所在区域且不包括暗区所在区域,所述暗区为亮度值小于预设亮度阈值的区域,所述第一区域与所述第二区域对应;
第三处理模块605,用于将所述第二低频图像和所述第一高频图像进行叠加处理,得到输出图像。
可选的,第一处理模块602,包括:
第一处理单元,用于对所述第一图像进行均值滤波处理,得到第一低频图像;
第二处理单元,用于基于所述第一低频图像对所述第一图像进行图像去除处理,得到第一高频图像。
可选的,所述第一图像包括第一子图像和第二子图像,所述第一子图像基于第一快门频率拍摄得到,所述第二子图像基于第二快门频率拍摄得到,所述第一快门频率与拍摄对象的运动速度相关,所述第二快门频率与所述频闪光源的闪光频率相关;
第一处理模块602,包括:
第三处理单元,用于对所述第一子图像进行高低频图像分离,得到第一子高频图像和第一子低频图像;
第四处理单元,用于对所述第二子图像进行高低频图像分离,得到第二子高频图像和第二子低频图像;
其中,所述第一高频图像包括所述第一子高频图像和所述第二子高频图像,所述第一低频图像包括所述第一子低频图像和所述第二子低频图像。
可选的,确定模块603,包括:
第一确定单元,用于根据对所述第一子低频图像中的四种类型的数据在通道上进行堆积处理后的RAW图,确定第一子低频图像对应的均值RAW图,所述均值RAW图包括平均值处理后的G通道数据、平均值处理后的B通道数据和平均值处理后的R通道数据,所述G通道数据根据所述第一子低频图像中的Gr通道数据和Gb通道数据的平均值确定;
第五处理单元,用于根据所述均值RAW图的黑电平数值和最大位数,对所述均值RAW图进行归一化处理,得到归一化RAW图;
第二确定单元,用于根据所述归一化RAW图中的像素值小于或者等于第一预设阈值的区域,确定第一掩码,其中,所述第一掩码中的掩码值越小,表示对应区域的亮度越低;
第六处理单元,用于将所述第一子低频图像和所述第二子低频图像在通道上进行拼接后,得到中间图像;
第七处理单元,用于将所述中间图像输入预设色带识别模型,得到第二掩码,其中,所述第二掩码中的掩码值越小,表示对应区域的色带越重;
调整单元,用于根据所述第一掩码调整所述第二掩码,得到目标掩码。
可选的,所述调整单元,具体用于:
根据所述第一掩码减小所述第二掩码中与目标区域对应的掩码值,得到目标掩码,其中,所述第一掩码中,与所述目标区域对应的掩码值小于或者等于第二预设阈值,且所述第二掩码中,与所述目标区域对应的掩码值小于或者等于第三预设阈值。
可选的,确定模块603,包括:
第一获取单元,用于获取所述第二掩码中与Gr数据通道对应的第一掩码值;
第二获取单元,用于获取所述第二掩码中与Gb数据通道对应的第二掩码值;
更新单元，用于根据所述第一掩码值和所述第二掩码值的平均值，更新所述第二掩码中与所述Gr数据通道和所述Gb数据通道对应的掩码值；
所述调整单元,具体用于:
根据所述第一掩码调整更新后的所述第二掩码,得到目标掩码。
本申请实施例中的频闪图像处理装置可以是电子设备,也可以是电子设备中的部件,例如集成电路或芯片。该电子设备可以是终端,也可以为除终
端之外的其他设备。示例性的,电子设备可以为手机、平板电脑、笔记本电脑、掌上电脑、车载电子设备、移动上网装置(Mobile Internet Device,MID)、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、机器人、可穿戴设备、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本或者个人数字助理(personal digital assistant,PDA)等,还可以为服务器、网络附属存储器(Network Attached Storage,NAS)、个人计算机(personal computer,PC)、电视机(television,TV)、柜员机或者自助机等,本申请实施例不作具体限定。
本申请实施例中的频闪图像处理装置可以为具有操作系统的装置。该操作系统可以为安卓(Android)操作系统,可以为ios操作系统,还可以为其他可能的操作系统,本申请实施例不作具体限定。
本申请实施例提供的频闪图像处理装置能够实现图1至图5所示方法实施例实现的各个过程,且能够取得相同的有益效果,为避免重复,这里不再赘述。
可选地,如图7所示,本申请实施例还提供一种电子设备700,包括处理器701和存储器702,存储器702上存储有可在所述处理器701上运行的程序或指令,该程序或指令被处理器701执行时实现上述频闪图像处理方法实施例的各个步骤,且能达到相同的技术效果,为避免重复,这里不再赘述。
需要说明的是,本申请实施例中的电子设备包括上述所述的移动电子设备和非移动电子设备。
图8为实现本申请实施例的一种电子设备的硬件结构示意图。
该电子设备800包括但不限于:射频单元801、网络模块802、音频输出单元803、输入单元804、传感器805、显示单元806、用户输入单元807、接口单元808、存储器809、以及处理器810等部件。
本领域技术人员可以理解,电子设备800还可以包括给各个部件供电的电源(比如电池),电源可以通过电源管理系统与处理器810逻辑相连,从而
通过电源管理系统实现管理充电、放电、以及功耗管理等功能。图8中示出的电子设备结构并不构成对电子设备的限定,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置,在此不再赘述。
其中,输入单元804,用于获取频闪光源下拍摄得到的第一图像,所述第一图像包括色带,所述第一图像为原始RAW域图像;
处理器810,用于对所述第一图像进行高低频图像分离,得到第一高频图像和第一低频图像;
处理器810,还用于根据所述第一低频图像,确定目标掩码,其中,所述目标掩码中的掩码值与所述掩码值在所述第一低频图像中对应的区域的亮度以及色带程度负相关;
处理器810,还用于根据所述目标掩码对所述第一低频图像中的色带进行过滤处理,得到第二低频图像,其中,所述第二低频图像中位于第一区域内的RAW域数值比所述第一低频图像中位于第二区域内的RAW域数值大,所述第一区域包括色带所在区域且不包括暗区所在区域,所述暗区为亮度值小于预设亮度阈值的区域,所述第一区域与所述第二区域对应;
处理器810,还用于将所述第二低频图像和所述第一高频图像进行叠加处理,得到输出图像。
可选地,处理器810执行的所述对所述第一图像进行高低频图像分离,得到第一高频图像和第一低频图像,包括:
对所述第一图像进行均值滤波处理,得到第一低频图像;
基于所述第一低频图像对所述第一图像进行图像去除处理,得到第一高频图像。
可选地,所述第一图像包括第一子图像和第二子图像,所述第一子图像基于第一快门频率拍摄得到,所述第二子图像基于第二快门频率拍摄得到,所述第一快门频率与拍摄对象的运动速度相关,所述第二快门频率与所述频闪光源的闪光频率相关;
处理器810执行的所述对所述第一图像进行高低频图像分离,得到第一高频图像和第一低频图像,包括:
对所述第一子图像进行高低频图像分离,得到第一子高频图像和第一子低频图像;
对所述第二子图像进行高低频图像分离,得到第二子高频图像和第二子低频图像;
其中,所述第一高频图像包括所述第一子高频图像和所述第二子高频图像,所述第一低频图像包括所述第一子低频图像和所述第二子低频图像。
可选地,处理器810执行的所述根据所述第一低频图像,确定目标掩码,包括:
根据对所述第一子低频图像中的四种类型的数据在通道上进行堆积处理后的RAW图,确定第一子低频图像对应的均值RAW图,所述均值RAW图包括平均值处理后的G通道数据、平均值处理后的B通道数据和平均值处理后的R通道数据,所述G通道数据根据所述第一子低频图像中的Gr通道数据和Gb通道数据的平均值确定;
根据所述均值RAW图的黑电平数值和最大位数,对所述均值RAW图进行归一化处理,得到归一化RAW图;
根据所述归一化RAW图中的像素值小于或者等于第一预设阈值的区域,确定第一掩码,其中,所述第一掩码中的掩码值越小,表示对应区域的亮度越低;
将所述第一子低频图像和所述第二子低频图像在通道上进行拼接后,得到中间图像;
将所述中间图像输入预设色带识别模型,得到第二掩码,其中,所述第二掩码中的掩码值越小,表示对应区域的色带越重;
根据所述第一掩码调整所述第二掩码,得到目标掩码。
可选地,处理器810执行的所述根据所述第一掩码调整所述第二掩码,
得到目标掩码,包括:
根据所述第一掩码减小所述第二掩码中与目标区域对应的掩码值,得到目标掩码,其中,所述第一掩码中,与所述目标区域对应的掩码值小于或者等于第二预设阈值,且所述第二掩码中,与所述目标区域对应的掩码值小于或者等于第三预设阈值。
可选地,处理器810执行的所述根据所述第一低频图像,确定目标掩码,还包括:
获取所述第二掩码中与Gr数据通道对应的第一掩码值;
获取所述第二掩码中与Gb数据通道对应的第二掩码值;
根据所述第一掩码值和所述第二掩码值的平均值，更新所述第二掩码中与所述Gr数据通道和所述Gb数据通道对应的掩码值；
所述根据所述第一掩码调整所述第二掩码,得到目标掩码,包括:
根据所述第一掩码调整更新后的所述第二掩码,得到目标掩码。
本申请实施例提供的电子设备800能够实现如图6所示频闪图像处理装置中各模块执行的过程,且能够取得相同的有益效果,为避免重复,在此不再赘述。
应理解的是,本申请实施例中,输入单元804可以包括图形处理器(Graphics Processing Unit,GPU)8041和麦克风8042,图形处理器8041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。显示单元806可包括显示面板8061,可以采用液晶显示器、有机发光二极管等形式来配置显示面板8061。用户输入单元807包括触控面板8071以及其他输入设备8072中的至少一种。触控面板8071,也称为触摸屏。触控面板8071可包括触摸检测装置和触摸控制器两个部分。其他输入设备8072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
存储器809可用于存储软件程序以及各种数据。存储器809可主要包括
存储程序或指令的第一存储区和存储数据的第二存储区,其中,第一存储区可存储操作系统、至少一个功能所需的应用程序或指令(比如声音播放功能、图像播放功能等)等。此外,存储器809可以包括易失性存储器或非易失性存储器,或者,存储器809可以包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synch link DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本申请实施例中的存储器809包括但不限于这些和任意其它适合类型的存储器。
处理器810可包括一个或多个处理单元;可选的,处理器810集成应用处理器和调制解调处理器,其中,应用处理器主要处理涉及操作系统、用户界面和应用程序等的操作,调制解调处理器主要处理无线通信信号,如基带处理器。可以理解的是,上述调制解调处理器也可以不集成到处理器810中。
本申请实施例还提供一种可读存储介质,所述可读存储介质上存储有程序或指令,该程序或指令被处理器执行时实现上述频闪图像处理方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
其中,所述处理器为上述实施例中所述的电子设备中的处理器。所述可读存储介质,包括计算机可读存储介质,如计算机只读存储器ROM、随机存取存储器RAM、磁碟或者光盘等。
本申请实施例另提供了一种芯片,所述芯片包括处理器和通信接口,所
述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现上述频闪图像处理方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
应理解,本申请实施例提到的芯片还可以称为系统级芯片、系统芯片、芯片系统或片上系统芯片等。
本申请实施例提供一种计算机程序产品,该程序产品被存储在存储介质中,该程序产品被至少一个处理器执行以实现如上述频闪图像处理方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。此外,需要指出的是,本申请实施方式中的方法和装置的范围不限按示出或讨论的顺序来执行功能,还可包括根据所涉及的功能按基本同时的方式或按相反的顺序来执行功能,例如,可以按不同于所描述的次序来执行所描述的方法,并且还可以添加、省去、或组合各种步骤。另外,参照某些示例所描述的特征可在其他示例中被组合。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以计算机软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台电子设备(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。
Claims (17)
- 一种频闪图像处理方法,包括:获取频闪光源下拍摄得到的第一图像,所述第一图像包括色带,所述第一图像为原始RAW域图像;对所述第一图像进行高低频图像分离,得到第一高频图像和第一低频图像;根据所述第一低频图像,确定目标掩码,其中,所述目标掩码中的掩码值与所述掩码值在所述第一低频图像中对应的区域的亮度以及色带程度负相关;根据所述目标掩码对所述第一低频图像中的色带进行过滤处理,得到第二低频图像,其中,所述第二低频图像中位于第一区域内的RAW域数值比所述第一低频图像中位于第二区域内的RAW域数值大,所述第一区域包括色带所在区域且不包括暗区所在区域,所述暗区为亮度值小于预设亮度阈值的区域,所述第一区域与所述第二区域对应;将所述第二低频图像和所述第一高频图像进行叠加处理,得到输出图像。
- 根据权利要求1所述的方法,其中,所述对所述第一图像进行高低频图像分离,得到第一高频图像和第一低频图像,包括:对所述第一图像进行均值滤波处理,得到第一低频图像;基于所述第一低频图像对所述第一图像进行图像去除处理,得到第一高频图像。
- 根据权利要求1或2所述的方法,其中,所述第一图像包括第一子图像和第二子图像,所述第一子图像基于第一快门频率拍摄得到,所述第二子图像基于第二快门频率拍摄得到,所述第一快门频率与拍摄对象的运动速度相关,所述第二快门频率与所述频闪光源的闪光频率相关;所述对所述第一图像进行高低频图像分离,得到第一高频图像和第一低 频图像,包括:对所述第一子图像进行高低频图像分离,得到第一子高频图像和第一子低频图像;对所述第二子图像进行高低频图像分离,得到第二子高频图像和第二子低频图像;其中,所述第一高频图像包括所述第一子高频图像和所述第二子高频图像,所述第一低频图像包括所述第一子低频图像和所述第二子低频图像。
- 根据权利要求3所述的方法,其中,所述根据所述第一低频图像,确定目标掩码,包括:根据对所述第一子低频图像中的四种类型的数据在通道上进行堆积处理后的RAW图,确定第一子低频图像对应的均值RAW图,所述均值RAW图包括平均值处理后的G通道数据、平均值处理后的B通道数据和平均值处理后的R通道数据,所述G通道数据根据所述第一子低频图像中的Gr通道数据和Gb通道数据的平均值确定;根据所述均值RAW图的黑电平数值和最大位数,对所述均值RAW图进行归一化处理,得到归一化RAW图;根据所述归一化RAW图中的像素值小于或者等于第一预设阈值的区域,确定第一掩码,其中,所述第一掩码中的掩码值越小,表示对应区域的亮度越低;将所述第一子低频图像和所述第二子低频图像在通道上进行拼接后,得到中间图像;将所述中间图像输入预设色带识别模型,得到第二掩码,其中,所述第二掩码中的掩码值越小,表示对应区域的色带越重;根据所述第一掩码调整所述第二掩码,得到目标掩码。
- 根据权利要求4所述的方法,其中,所述根据所述第一掩码调整所述第二掩码,得到目标掩码,包括:根据所述第一掩码减小所述第二掩码中与目标区域对应的掩码值,得到 目标掩码,其中,所述第一掩码中,与所述目标区域对应的掩码值小于或者等于第二预设阈值,且所述第二掩码中,与所述目标区域对应的掩码值小于或者等于第三预设阈值。
- 根据权利要求4所述的方法，其中，所述根据所述第一低频图像，确定目标掩码，还包括：获取所述第二掩码中与Gr数据通道对应的第一掩码值；获取所述第二掩码中与Gb数据通道对应的第二掩码值；根据所述第一掩码值和所述第二掩码值的平均值，更新所述第二掩码中与所述Gr数据通道和所述Gb数据通道对应的掩码值；所述根据所述第一掩码调整所述第二掩码，得到目标掩码，包括：根据所述第一掩码调整更新后的所述第二掩码，得到目标掩码。
- 一种频闪图像处理装置,包括:获取模块,用于获取频闪光源下拍摄得到的第一图像,所述第一图像包括色带,所述第一图像为原始RAW域图像;第一处理模块,用于对所述第一图像进行高低频图像分离,得到第一高频图像和第一低频图像;确定模块,用于根据所述第一低频图像,确定目标掩码,其中,所述目标掩码中的掩码值与所述掩码值在所述第一低频图像中对应的区域的亮度以及色带程度负相关;第二处理模块,用于根据所述目标掩码对所述第一低频图像中的色带进行过滤处理,得到第二低频图像,其中,所述第二低频图像中位于第一区域内的RAW域数值比所述第一低频图像中位于第二区域内的RAW域数值大,所述第一区域包括色带所在区域且不包括暗区所在区域,所述暗区为亮度值小于预设亮度阈值的区域,所述第一区域与所述第二区域对应;第三处理模块,用于将所述第二低频图像和所述第一高频图像进行叠加处理,得到输出图像。
- 根据权利要求7所述的装置,其中,所述第一处理模块,包括:第一处理单元,用于对所述第一图像进行均值滤波处理,得到第一低频图像;第二处理单元,用于基于所述第一低频图像对所述第一图像进行图像去除处理,得到第一高频图像。
- 根据权利要求7或8所述的装置,其中,所述第一图像包括第一子图像和第二子图像,所述第一子图像基于第一快门频率拍摄得到,所述第二子图像基于第二快门频率拍摄得到,所述第一快门频率与拍摄对象的运动速度相关,所述第二快门频率与所述频闪光源的闪光频率相关;所述第一处理模块,包括:第三处理单元,用于对所述第一子图像进行高低频图像分离,得到第一子高频图像和第一子低频图像;第四处理单元,用于对所述第二子图像进行高低频图像分离,得到第二子高频图像和第二子低频图像;其中,所述第一高频图像包括所述第一子高频图像和所述第二子高频图像,所述第一低频图像包括所述第一子低频图像和所述第二子低频图像。
- 根据权利要求9所述的装置,其中,所述确定模块,包括:第一确定单元,用于根据对所述第一子低频图像中的四种类型的数据在通道上进行堆积处理后的RAW图,确定第一子低频图像对应的均值RAW图,所述均值RAW图包括平均值处理后的G通道数据、平均值处理后的B通道数据和平均值处理后的R通道数据,所述G通道数据根据所述第一子低频图像中的Gr通道数据和Gb通道数据的平均值确定;第五处理单元,用于根据所述均值RAW图的黑电平数值和最大位数,对所述均值RAW图进行归一化处理,得到归一化RAW图;第二确定单元,用于根据所述归一化RAW图中的像素值小于或者等于第一预设阈值的区域,确定第一掩码,其中,所述第一掩码中的掩码值越小,表示对应区域的亮度越低;第六处理单元,用于将所述第一子低频图像和所述第二子低频图像在通 道上进行拼接后,得到中间图像;第七处理单元,用于将所述中间图像输入预设色带识别模型,得到第二掩码,其中,所述第二掩码中的掩码值越小,表示对应区域的色带越重;调整单元,用于根据所述第一掩码调整所述第二掩码,得到目标掩码。
- 根据权利要求10所述的装置,其中,所述调整单元,具体用于:根据所述第一掩码减小所述第二掩码中与目标区域对应的掩码值,得到目标掩码,其中,所述第一掩码中,与所述目标区域对应的掩码值小于或者等于第二预设阈值,且所述第二掩码中,与所述目标区域对应的掩码值小于或者等于第三预设阈值。
- 根据权利要求10所述的装置，其中，所述确定模块，包括：第一获取单元，用于获取所述第二掩码中与Gr数据通道对应的第一掩码值；第二获取单元，用于获取所述第二掩码中与Gb数据通道对应的第二掩码值；更新单元，用于根据所述第一掩码值和所述第二掩码值的平均值，更新所述第二掩码中与所述Gr数据通道和所述Gb数据通道对应的掩码值；所述调整单元，具体用于：根据所述第一掩码调整更新后的所述第二掩码，得到目标掩码。
- 一种电子设备,包括处理器和存储器,所述存储器存储可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如权利要求1至6中任一项所述的频闪图像处理方法的步骤。
- 一种可读存储介质,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如权利要求1至6中任一项所述的频闪图像处理方法的步骤。
- 一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现如权利要求1至6中任一项所述的频闪图像处理方法的步骤。
- 一种计算机程序产品,所述程序产品被存储在非瞬态存储介质中,所述程序产品被至少一个处理器执行以实现如权利要求1至6中任一项所述的频闪图像处理方法的步骤。
- 一种电子设备,所述电子设备用于执行如权利要求1-6中任一项所述的频闪图像处理方法的步骤。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210796573.4A（公开号CN115082350A） | 2022-07-06 | 2022-07-06 | 频闪图像处理方法、装置、电子设备和可读存储介质 |
CN202210796573.4 | | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024007948A1 (zh) | 2024-01-11 |
Family
ID=83257586
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/103904（WO2024007948A1） | 频闪图像处理方法、装置、电子设备和可读存储介质 | 2022-07-06 | 2023-06-29 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115082350A (zh) |
WO (1) | WO2024007948A1 (zh) |