WO2020140986A1 - Image noise reduction method and apparatus, storage medium and terminal - Google Patents


Publication number: WO2020140986A1
Authority: WIPO (PCT)
Prior art keywords: color, target, noise reduction, area, brightness
Application number: PCT/CN2020/070337
Other languages: English (en), Chinese (zh)
Inventor: 张弓
Original Assignee: Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • G06T5/70
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation

Definitions

  • Embodiments of the present application relate to the technical field of terminals, and in particular, to an image noise reduction method, device, storage medium, and terminal.
  • Embodiments of the present application provide an image noise reduction method, device, storage medium, and terminal, which can optimize a noise reduction scheme in related technologies.
  • an embodiment of the present application provides an image noise reduction method, including:
  • an embodiment of the present application further provides an image noise reduction device, which includes:
  • the information determination module is used to determine the brightness information of the skin color area in the target image
  • a noise reduction intensity determination module configured to determine a target sub-region with a brightness lower than a preset brightness threshold in the skin-color region, and determine a target noise reduction intensity according to the brightness of the target sub-region and the brightness information;
  • the noise reduction processing module is configured to perform noise reduction processing on the target sub-region based on the target noise reduction intensity to obtain a target image after noise reduction processing.
  • embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the image noise reduction method provided by any embodiment of the present application.
  • an embodiment of the present application provides a terminal, including a memory, a processor, and a computer program stored on the memory and executable by the processor; when the processor executes the computer program, it implements the image noise reduction method provided by any embodiment of the present application.
  • An embodiment of the present application provides an image noise reduction solution: determine the brightness information of the skin color area in the target image; determine the target sub-region in the skin color area whose brightness is lower than a preset brightness threshold, and determine the target noise reduction intensity according to the brightness of the target sub-region and the brightness information; then perform noise reduction on the target sub-region based on the target noise reduction intensity to obtain the noise-reduced target image.
  • In this solution, the target sub-region to be subjected to noise reduction processing is determined based on brightness, the noise reduction intensity is determined according to the brightness of each pixel in the target sub-region and the brightness information of the skin color area, and the corresponding pixels are then subjected to noise reduction based on that intensity.
  • FIG. 2 is a flowchart of another image noise reduction method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an image noise distribution provided by an embodiment of the present application.
  • FIG. 5 is a structural block diagram of an image noise reduction device according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 7 is a structural block diagram of a smartphone provided by an embodiment of the present application.
  • the noise reduction scheme in the related art regards the human face as a whole and applies a uniform noise reduction intensity to the entire face. Since the actual distribution of noise in a face image is not uniform, this overall noise reduction scheme results in uneven residual noise across different areas, and the noise reduction effect is not ideal.
  • FIG. 1 is a flowchart of an image noise reduction method provided by an embodiment of the present application.
  • the method may be suitable for photographing scenes, including but not limited to shooting videos or photos.
  • the method may be performed by an image noise reduction device, which may be implemented in software and/or hardware and can generally be integrated in a terminal. As shown in Figure 1, the method includes:
  • Step 110 Determine the brightness information of the skin color region in the target image.
  • the terminal in the embodiment of the present application may include mobile phones, tablet computers, notebook computers, computers, and other electronic devices that display images.
  • An operating system is integrated in the terminal in the embodiment of the present application, and the type of the operating system is not limited in the embodiment of the present application; for example, it may include an Android operating system, a Windows operating system, an Apple iOS operating system, and so on.
  • the brightness information may be brightness related information of each pixel in the skin color area of the target image.
  • the brightness information may be the average brightness value of the skin area, a brightness weighted value of the skin area, the maximum brightness of the skin area, or the minimum brightness of the skin area, and so on.
  • the target image may be an image obtained by shooting a target scene through a terminal with a shooting function, an image acquired from an album of the terminal, or an image acquired from an Internet platform, and so on.
  • the target image may be an image of RGB color mode, YUV color mode, HSV color mode, or Lab color mode.
  • color is usually described by three relatively independent attributes. The combined effect of these three independent variables naturally forms a spatial coordinate system; this is the color mode.
  • color modes can be divided into primary color modes and color separation modes.
  • primary color modes include but are not limited to the RGB color mode
  • color separation modes include but are not limited to the YUV color mode, the Lab color mode, and the HSV color mode.
  • a color separation mode is a color mode in which brightness and color are represented separately.
  • the Y component represents brightness
  • the U component represents color
  • the V component also represents color (chrominance); the U component and V component together represent the color of the image.
  • the L component represents brightness
  • a and b jointly represent color.
  • the H component represents hue
  • the S component represents saturation
  • the V component represents lightness. Hue is the basic attribute of color, saturation refers to the purity of a color, and lightness is equivalent to brightness.
  • the brightness component and the color component can be extracted separately, and the image can be processed independently in terms of brightness or color. For example, when the brightness is processed, the color of the image is not affected.
  • the brightness of each pixel in the skin color area is obtained, so that the average brightness mean_lux of the skin color area can be calculated.
  • the maximum brightness max_lux and the minimum brightness min_lux can be determined by comparing the brightness of each pixel in the skin color area.
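As an illustrative sketch (not the patent's implementation), the brightness statistics above can be computed from the Y channel of a color-separated image, assuming the skin color area is given as a boolean mask:

```python
import numpy as np

def skin_brightness_stats(y_channel, skin_mask):
    """Compute mean_lux, max_lux, and min_lux over the skin color area.

    y_channel: 2-D array of per-pixel brightness (the Y component).
    skin_mask: boolean array of the same shape marking skin pixels.
    """
    skin_lux = y_channel[skin_mask]      # brightness of skin pixels only
    mean_lux = float(skin_lux.mean())    # average brightness of the area
    max_lux = float(skin_lux.max())      # brightest skin pixel
    min_lux = float(skin_lux.min())      # darkest skin pixel
    return mean_lux, max_lux, min_lux
```

The function names and the mask representation are assumptions for illustration only.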
  • Step 120 Determine a target sub-region whose brightness in the skin-color region is lower than a preset brightness threshold, and determine a target noise reduction intensity according to the brightness of the target sub-region and the brightness information.
  • each pixel in the skin color area of the target image in the color separation mode is obtained; the brightness of each pixel in the skin color area is compared with a preset brightness threshold; target pixels whose brightness is lower than the preset brightness threshold are marked and clustered into at least one target sub-region. According to the brightness of the target pixels in each target sub-region, the average brightness value of the skin color area, the maximum brightness value of the skin color area, and the minimum brightness value of the skin color area, the brightness-based target noise reduction intensity is determined.
  • the preset brightness threshold is a threshold used to select the target sub-regions in the skin color area that need local noise reduction. It includes, but is not limited to: the average brightness of the skin color area; a brightness weighted value of the skin color area (where the weight of each brightness is the ratio of the number of pixels with that brightness to the total number of pixels in the skin color area); the brightness value, among those below the average brightness, with the most corresponding pixels in the skin color area; or the brightness value, among those below the brightness weighted value, with the most corresponding pixels.
  • the brightness of each pixel in the skin color area is compared with the average brightness to determine the target sub-area with brightness lower than the average brightness. For the area composed of pixels whose brightness is higher than the average brightness value, no additional local noise reduction process is performed.
  • the brightness of each pixel in the skin color area is compared with the brightness weighted value to determine the target sub-area whose brightness is lower than the brightness weighted value. For the area composed of pixels whose brightness is higher than the brightness weighted value, no additional local noise reduction process is performed.
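The mark-and-cluster step could be sketched as follows; the 4-connected flood-fill clustering is an assumption, since the text does not specify which clustering algorithm is used:

```python
import numpy as np
from collections import deque

def target_subregions(y_channel, skin_mask, threshold):
    """Mark skin pixels darker than `threshold` and cluster them into
    4-connected sub-regions; returns a list of pixel-coordinate lists."""
    target = skin_mask & (y_channel < threshold)   # target pixels
    visited = np.zeros_like(target, dtype=bool)
    regions = []
    h, w = target.shape
    for sy in range(h):
        for sx in range(w):
            if target[sy, sx] and not visited[sy, sx]:
                # breadth-first flood fill gathers one sub-region
                region, queue = [], deque([(sy, sx)])
                visited[sy, sx] = True
                while queue:
                    cy, cx = queue.popleft()
                    region.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and target[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                regions.append(region)
    return regions
```

Any connected-component or clustering routine with the same input and output shape would serve the same role.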
  • Step 130 Perform noise reduction processing on the target sub-region based on the target noise reduction intensity to obtain a target image after noise reduction processing.
  • any pixel in the target sub-region is adjusted according to the target noise reduction intensity, and the remaining pixels in the target sub-region are adjusted in the same way, so that local noise reduction of the target image is realized and face noise under unfavorable light such as backlight, side light, direct light, and dark light is effectively suppressed.
  • the image noise reduction method provided in the embodiment of the present application can give the face area a natural and clear picture effect, avoiding the problem of traditional face noise reduction schemes, which treat the face as a whole and apply overall noise reduction to the entire face area, leaving uneven noise in different areas of the processed image and affecting the consistency of the photo.
  • the embodiments of the present application perform noise reduction on the face area on the basis of a single frame image, which is fast to process and avoids the problem of multi-frame noise reduction schemes taking a long time and affecting the success rate of photographing and the speed of outputting photos.
  • The target noise reduction intensity is determined according to the brightness of the target sub-region and the brightness information, and noise reduction is performed on the target sub-region based on that intensity to obtain the noise-reduced target image. In this way, the target sub-region to be denoised is determined based on brightness, the noise reduction intensity is determined according to the brightness of each pixel in the target sub-region and the brightness information of the skin color area, and the corresponding pixels are denoised accordingly.
  • FIG. 2 is a flowchart of another image noise reduction method provided by an embodiment of the present application. The method includes:
  • Step 201 Acquire a target image in a color-separated color mode, perform face recognition on the target image, and determine face information contained in the target image.
  • a face frame can be used to identify the face area.
  • the target image in the color separation mode can be an image captured by the camera according to a shooting instruction, or it can be preview image information collected by the camera and displayed on the electronic device screen for the user before the shooting instruction is executed.
  • a setting algorithm may be used to convert the image to a color separation mode.
  • a method of generating an image in the YUV color mode includes: converting raw data acquired by the image sensor into an RGB color mode image, and generating the YUV color mode image from that RGB color mode image.
  • the image acquisition device may be, for example, a camera, and the camera may include a charge-coupled device (CCD) image sensor or a complementary metal oxide semiconductor (CMOS) image sensor. The CCD or CMOS image sensor converts the captured light signal into RAW raw data as a digital signal, which is converted into RGB color image data and further converted into YUV color mode image data.
  • the JPG format image can be formed by the YUV color mode image.
  • the color in the RGB color mode image data converted from RAW raw data is not yet the true color of the image, and the RGB color mode image data formed at this stage is not processed; the color in the YUV color mode image data is the true color of the image, and the YUV color mode image data can be processed.
  • when RGB data is processed, the raw data collected by the image sensor passes through the following color modes: RAW raw data, then an RGB color mode image, then a YUV color mode image. The RGB color mode image is processed, the processed RGB color mode image is converted to a YUV color mode image, and a JPG format image can then be output.
  • to process images in other color modes, they need to be obtained by converting the YUV color mode image, and the processed images are converted back to the YUV color mode to obtain the JPG format image.
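The RGB-to-YUV conversion in the pipeline above is named but not specified in this text; as an illustration, the standard BT.601 conversion (an assumption, not taken from the patent) looks like:

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert RGB values (float, 0..1) to YUV using the standard
    BT.601 coefficients; the patent names the RGB-to-YUV conversion
    but not the coefficients, so BT.601 is an assumption here."""
    m = np.array([[ 0.299,  0.587,  0.114],   # Y: brightness
                  [-0.147, -0.289,  0.436],   # U: chrominance
                  [ 0.615, -0.515, -0.100]])  # V: chrominance
    return rgb @ m.T
```

Note that a pure gray pixel maps to U = V = 0, which is exactly the brightness/color separation the description relies on.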
  • Step 202 Determine the facial skin area of the target image according to the contour information in the human face information.
  • the facial skin area of the target image may be determined according to the contour information in the human face information.
  • the facial skin area on the human face is determined according to the position coordinates of the facial features.
  • face recognition technology and key points are used to identify the number, size, and pose of the faces in the image, and also to recognize the face regions and the positions of the facial features, so as to segment the facial skin area in the face image.
  • a face recognition technology may also be used to identify the position of the nose tip; a face frame of fixed length and width is placed based on the nose tip coordinates, and the facial skin area is determined from the area selected by the face frame.
  • Step 203 Obtain the brightness and color of each pixel in the facial skin area separately, and determine the investigation area from the target image according to the brightness and color.
  • the average brightness value of the face skin area is calculated according to the brightness of pixels included in the face skin area.
  • the color average value of the facial skin area is calculated according to the color of the pixels included in the facial skin area.
  • the search is then expanded beyond the face to obtain areas whose brightness and color are similar to facial skin, such as the neck, ears, and shoulders.
  • Using the face frame as a reference, the area of the face frame is increased according to a set ratio. The brightness and color of each pixel newly added between the old and new face frames are obtained. The brightness deviation of each newly added pixel from the above average brightness value, and the color deviation of its color from the above average color value, are determined. When the brightness deviation is less than a set brightness threshold and the color deviation is less than a set color threshold, the newly added pixel is determined to belong to the investigation area.
  • a new face frame can be obtained by extending 10% along the long and short sides of the face frame away from the centroid.
  • the extension ratio along a given direction (for example, 5%) may be determined according to the difference between the pixels' brightness and the average brightness of the facial skin area and the difference between their color and the average color.
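The grow-and-test procedure above might be sketched as follows; the 10% growth ratio comes from the text, while the function names, the scalar color representation, and the threshold parameters are assumptions:

```python
def grow_face_frame(x, y, w, h, ratio=0.10):
    """Enlarge the face frame by `ratio` along each side, away from its
    centroid (the 10% figure is taken from the description)."""
    dx, dy = w * ratio / 2, h * ratio / 2
    return x - dx, y - dy, w + 2 * dx, h + 2 * dy

def in_investigation_area(lux, color, mean_lux, mean_color,
                          lux_threshold, color_threshold):
    """A newly added pixel joins the investigation area when both its
    brightness and its color are close enough to the facial skin means."""
    lux_dev = abs(lux - mean_lux)
    color_dev = abs(color - mean_color)
    return lux_dev < lux_threshold and color_dev < color_threshold
```

In practice the color deviation would be computed on the U and V components jointly; a single scalar is used here only to keep the sketch short.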
  • Step 204 A skin color area is formed from the investigation area together with the facial skin area.
  • Step 205 Perform noise statistics on the target image, and determine the noise levels of the skin color area and the background area based on the statistical results.
  • the remaining area in the target image except the skin color area is recorded as the background area, including hair, clothes, and accessories.
  • Perform noise statistics on the target image and determine the noise levels of the skin color area and the background area based on the statistical results. For example, an image noise estimation algorithm is used to estimate the noise of the target image.
  • the target image is divided into blocks. Intra-block neighborhood correlation, computed from the differences between each pixel in an image block and its neighbors to reflect the correlation among the pixels in the block, is used to determine the smoothness of each image block and to filter out the smooth image blocks. The K-SVD (K-singular value decomposition) algorithm is then used to perform noise estimation on the filtered smooth image blocks, and finally the noise estimates of the smooth image blocks are compared to determine the maximum noise estimation value and the minimum noise estimation value.
  • the noise interval formed by the maximum noise estimate value and the minimum noise estimate value is divided into N noise levels, and N is a positive integer, which is the system default value. The higher the noise level, the greater the noise.
  • a similar method is used to estimate the noise of the skin color area and the background area respectively to obtain the skin color noise value and the background noise value.
  • the skin color noise value and the background noise value are respectively matched with the noise level, and the noise levels of the skin color area and the background area are determined respectively. Based on the above noise estimation result, the overall noise reduction intensity of the overall noise reduction processing of the target image is determined, and the overall noise reduction processing of the target image is performed according to the overall noise reduction intensity.
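The division of the noise interval into N levels and the matching of a noise value to a level could look like the sketch below; a linear, equal-width split is an assumption, since the text only says the interval is divided into N levels:

```python
def noise_level(noise_value, min_noise, max_noise, n_levels):
    """Map a noise estimate onto one of `n_levels` equal-width levels
    spanning [min_noise, max_noise]; a higher level means more noise.
    The equal-width split is an assumption for illustration."""
    if max_noise <= min_noise:
        return 1
    width = (max_noise - min_noise) / n_levels
    level = int((noise_value - min_noise) / width) + 1
    return min(max(level, 1), n_levels)   # clamp into [1, n_levels]
```

The skin color noise value and background noise value would each be passed through this mapping and the resulting levels compared.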
  • FIG. 3 is a schematic diagram of an image noise distribution provided by an embodiment of the present application. As shown in FIG. 3, black areas indicate non-noise areas and gray areas indicate noise areas; noise is concentrated in areas such as the hair edge 310, eyebrow edge 320, facial contour 330, and neck shadow 340.
  • if the noise level of the skin color area is higher than the noise level of the background area (that is, the skin color area is noisier than the background area), a local noise reduction event is triggered to perform additional noise reduction on the skin color area. If the noise of the skin color area is smaller than the noise of the background area, the details of the face area are protected instead.
  • Step 206 Determine an overall noise reduction intensity for performing overall noise reduction processing on the target image according to the noise level, and perform overall noise reduction processing on the target image based on the overall noise reduction intensity.
  • facial features such as the eyes, eyebrows, and mouth contain more image detail information, so it is not appropriate to apply strong noise reduction to them.
  • Step 207 Determine whether the noise level of the skin color area is higher than the noise level of the background area. If yes, perform step 208; otherwise, perform step 219.
  • Step 208 Trigger the local noise reduction event.
  • a local noise reduction event is triggered.
  • the local noise reduction event is used to instruct to perform the operation of determining the brightness information of the skin color region in the target image.
  • Step 209 It is detected that a local noise reduction event is triggered.
  • Step 210 Determine the average brightness value, the maximum brightness value, and the minimum brightness value of the skin color region in the target image.
  • Step 211 Compare the brightness of each pixel in the skin color area with the average brightness, mark target pixels whose brightness is lower than the average brightness, and cluster the target pixels into at least one target sub-area.
  • the brightness of each pixel in the skin color area is compared with the average brightness of the skin color area, the target pixels with brightness lower than the average brightness are marked, and the target pixels are clustered into at least one target sub-area.
  • a target pixel in each target sub-region is sequentially obtained, and the target noise reduction intensity based on the brightness is calculated according to the brightness in_lux of the target pixel, the average brightness of the skin area, the maximum brightness, and the minimum brightness.
  • the following formula can be used to calculate the target noise reduction intensity L_nr based on brightness:
  • max_lux represents the maximum brightness of the skin area
  • min_lux represents the minimum brightness of the skin area
  • mean_lux represents the average brightness of the skin area
  • in_lux represents the brightness of a target pixel in the target sub-region.
  • the corresponding target pixel may be subjected to noise reduction processing according to the brightness-based target noise reduction intensity.
  • the target noise reduction intensity corresponding to the target pixel in each target sub-region can be calculated in the above manner.
  • the target sub-region may be a region with low brightness such as a shadow on the neck or a contour of the face.
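The formula for L_nr is not reproduced legibly in this text. Purely as a labeled assumption, one normalized form consistent with the listed variables (darker-than-average pixels receive stronger noise reduction, scaled by the skin area's brightness range) would be:

```python
def brightness_nr_intensity(in_lux, mean_lux, max_lux, min_lux,
                            base_strength=1.0):
    """Hypothetical brightness-based intensity: the further a target
    pixel's brightness in_lux falls below mean_lux, relative to the
    skin area's brightness range, the stronger the noise reduction.
    This is NOT the patent's formula, which is not reproduced here."""
    lux_range = max(max_lux - min_lux, 1e-6)   # avoid division by zero
    deficit = max(mean_lux - in_lux, 0.0)      # only darker-than-mean pixels
    return base_strength * deficit / lux_range
```

Pixels at or above the mean get zero extra intensity under this sketch, matching the description that only sub-threshold regions receive additional local noise reduction.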
  • Step 212 Determine a target noise reduction intensity based on the brightness according to the brightness of the target pixel in each target sub-region, the average brightness, the maximum brightness, and the minimum brightness.
  • Step 213 Determine whether the number of faces in the target image is greater than 1, if yes, perform step 214, otherwise perform step 218.
  • Step 214 When the target image contains at least two human faces, obtain the color of the skin color area, and determine the first color mean, maximum color value, and minimum color value of the skin color area in the target image according to the color of the skin color area.
  • the color components of the pixels included in each face are respectively obtained, and the colors of the pixels in the skin color area are calculated based on the weighted summation.
  • the color C of each pixel can be expressed as:
  • (m,n) denotes a pixel in the facial skin color area of each face, with coordinates ranging from (0,0) to (x,y); the weight is a set value, which may be the system default; U mn and V mn respectively denote the color components of each pixel in the facial skin color area of each face.
  • the first color mean, maximum color value and minimum color value of the skin color area in the target image are determined according to the color of each pixel in the facial skin color area of each face.
  • Step 215 Calculate the second color average of the skin color area corresponding to each face separately.
  • the color average value of the skin color area of each face is determined based on the color of each pixel in the facial skin color area of each face, and is recorded as the second color average.
  • Step 216 For the target skin color area whose second color average value is less than the first color average value, determine a color-based target noise reduction intensity according to the color of the target skin color area, the first color average value, the maximum color value, and the minimum color value.
  • the second color mean is compared with the first color mean to determine the target skin color area where the second color mean is less than the first color mean.
  • the following formula can be used to calculate the color-based target noise reduction intensity C_nr:
  • the color-based target noise reduction intensity can also be calculated from the second color mean mean2_col of the skin color area corresponding to each face in the target skin color area, the first color mean mean1_col, the maximum color value max_col, and the minimum color value min_col.
  • the following formula can be used to calculate the color-based target noise reduction intensity C_nr:
  • Step 217 Determine a weighted operation result of the target noise reduction intensity based on brightness and the target noise reduction intensity based on color, and use the weighted operation result as the target noise reduction intensity.
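Neither C_nr formula is reproduced legibly in this text. As an illustration only, a color-based intensity analogous to the hypothetical brightness-based one, together with the weighted operation of Step 217, could be sketched as follows (the form of C_nr and the 0.5/0.5 weights are assumptions, not the patent's values):

```python
def color_nr_intensity(mean2_col, mean1_col, max_col, min_col,
                       base_strength=1.0):
    """Hypothetical color-based intensity for a face whose average
    color mean2_col is darker than the overall first color mean
    mean1_col, scaled by the color range. NOT the patent's formula."""
    col_range = max(max_col - min_col, 1e-6)   # avoid division by zero
    deficit = max(mean1_col - mean2_col, 0.0)  # only darker-than-mean faces
    return base_strength * deficit / col_range

def combined_nr_intensity(l_nr, c_nr, w_lux=0.5, w_col=0.5):
    """Weighted combination of the brightness-based and color-based
    intensities; the equal weights are assumed defaults."""
    return w_lux * l_nr + w_col * c_nr
```

The combined value plays the role of the target noise reduction intensity Nr referred to in the steps that follow.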
  • Step 218 Perform noise reduction processing on the target sub-region based on the target noise reduction intensity to obtain a target image after noise reduction processing.
  • the target noise reduction intensity L_nr based on the brightness is used to perform noise reduction processing on the pixels in the target sub-region to obtain the noise-reduced target image.
  • the target noise reduction intensity Nr after the weighting operation is used to perform noise reduction processing on the pixels in the target sub-region to obtain the noise-reduced target image.
  • Step 219 Output the target image.
  • the color range of each face is counted separately to calculate the average face color over the entire target image, the average face color of each face, and the maximum and minimum color values, thereby determining a color-based target noise reduction intensity from the face color mean, color maximum, and color minimum; the weighted result of the brightness-based target noise reduction intensity and the color-based target noise reduction intensity is then used as the target noise reduction intensity.
  • the skin color region can be divided into different sub-regions according to the skin tone and skin brightness of each face, and different target noise reduction intensities can be determined for different sub-regions.
  • the darker the skin color or the darker the area, the larger the target noise reduction intensity used; the lighter the skin color or the brighter the area, the smaller the target noise reduction intensity used. This effectively reduces the noise of dark skin areas, neck shadows, facial contours, and other dark areas.
  • the method further includes: performing noise statistics on the noise-reduced target image, and determining the noise level of the skin color region based on the statistical results; determining whether the noise level falls within a preset noise interval; if it does, outputting the noise-reduced target image; otherwise, determining a mixing weight according to the noise level, mixing the original target image and the noise-reduced target image based on the mixing weight, and outputting the mixed target image.
  • FIG. 4 is a flowchart of another image noise reduction method provided by the present application.
  • a noise reduction area is selected.
  • the original target image origin_pic is subjected to face detection, key point detection, edge marking, and skin color area selection to determine the facial skin area, as well as the investigation area (including the ears, shoulders, and neck) that is similar to the facial skin color.
  • the facial skin area and the investigation area together are marked as the skin color area, and the skin color area is the noise reduction area.
  • Perform noise estimation on the target image to determine the overall noise reduction intensity. Noise reduction is performed on the entire face based on the overall noise reduction intensity.
  • the target image NR_pic after noise reduction is obtained by performing local noise reduction on the skin color area based on the brightness of the skin and the color of the skin. Determine the noise level of the skin color region in NR_pic, and when the noise level does not belong to the preset noise interval, acquire the original target image origin_pic and the target image NR_pic after noise reduction processing.
  • the mixing weight blend_percent (0 ≤ blend_percent ≤ 100) is determined based on the noise level of the skin color region of NR_pic, and each pixel in the final output target image is a mixture of origin_pic and NR_pic based on blend_percent (i.e. (blend_percent/100)*NR_pic + (1 - blend_percent/100)*origin_pic).
  • the advantage of this design is that when the noise level after local noise reduction of the skin color area does not fall within the preset noise interval (possibly because excessive noise reduction has lost some details), the mixing weight is determined according to the noise level of the skin color area after noise reduction, and the original target image and the noise-reduced target image are mixed based on that weight to dynamically adjust the noise distribution of the target image, so that the noise distribution in the final target image is more uniform and the image is more natural and clear.
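The per-pixel mixture described above is a straightforward linear blend; a minimal sketch, treating blend_percent as a percentage as the text does, is:

```python
import numpy as np

def blend_output(origin_pic, nr_pic, blend_percent):
    """Mix the original image and the noise-reduced image per pixel:
    blend_percent (0..100) is the share taken from the noise-reduced
    image NR_pic, and the remainder comes from origin_pic."""
    w = blend_percent / 100.0
    return w * nr_pic + (1.0 - w) * origin_pic
```

With blend_percent = 100 the output is the fully noise-reduced image; with blend_percent = 0 it is the original, so the weight directly tunes how much of the noise reduction survives.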
  • the technical solutions of the embodiments of the present application may be added to the intermediate or final process of ISP (Image Signal Processing) to optimize the photo shooting effect.
  • the technical solutions of the embodiments of the present application can also be used in combination with a multi-frame noise reduction technology to achieve better noise reduction effect in random noise and dark noise reduction scenes.
  • FIG. 5 is a structural block diagram of an image noise reduction device provided by an embodiment of the present application.
  • the device can be implemented by software and/or hardware, and is generally integrated in a terminal. By performing the image noise reduction method, it can effectively suppress face noise under unfavorable lighting conditions such as backlight, side light, direct light, and dark light, presenting a clearer and more natural face image.
  • the device includes:
  • the information determination module 510 is used to determine the brightness information of the skin color area in the target image
  • the noise reduction intensity determination module 520 is configured to determine a target sub-region whose brightness is lower than a preset brightness threshold in the skin-color region, and determine a target noise reduction intensity according to the brightness of the target sub-region and the brightness information;
  • the noise reduction processing module 530 is configured to perform noise reduction processing on the target sub-region based on the target noise reduction intensity to obtain a target image after noise reduction processing.
  • An embodiment of the present application provides an image noise reduction device, which determines a target sub-region in the skin color region whose brightness is lower than a preset brightness threshold, determines a target noise reduction intensity according to the brightness of the target sub-region and the brightness information, and performs noise reduction processing on the target sub-region based on the target noise reduction intensity to obtain the noise-reduced target image.
  • In this way, the target sub-region to be subjected to noise reduction processing is determined based on brightness, the noise reduction intensity is determined according to the brightness of each pixel in the target sub-region and the brightness information of the skin color area, and the corresponding pixels are processed based on that noise reduction intensity.
  • a skin color area determination module is also included, which is used to:
  • before the brightness information of the skin color region in the target image is determined, obtain the target image in the color separation mode, perform face recognition on the target image, and determine the face information contained in the target image, where the face information includes the number of faces and the outline information of the face, eyebrows, eyes, nose, and mouth;
  • the skin area is composed of the investigation area and the facial skin area.
  • an event trigger module is also included.
  • the event trigger module is used to:
  • after the skin color area is composed of the investigation area and the facial skin area, perform noise statistics on the target image, and determine the noise levels of the skin color area and the background area based on the statistical results;
  • a local noise reduction event is triggered, where the local noise reduction event is used to indicate the execution of the operation to determine the brightness information of the skin color area in the target image.
  • an overall noise reduction module is also included.
  • the overall noise reduction module is used for:
  • the overall noise reduction intensity for overall noise reduction processing on the target image is determined according to the noise level, and overall noise reduction processing is performed on the target image based on that intensity.
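The noise-statistics and overall-intensity steps might be sketched as follows. The difference-based noise estimator and the linear level-to-strength mapping are assumed heuristics; the patent specifies only that a noise level is measured and mapped to an overall noise reduction intensity, not how.

```python
import numpy as np

def estimate_noise_level(gray):
    """Rough noise-level estimate for a grayscale image.

    Uses the median absolute value of the horizontal and vertical
    first differences, which in smooth regions are dominated by
    noise. The choice of estimator is an illustrative assumption.
    """
    g = gray.astype(np.float32)
    diffs = np.concatenate([np.abs(np.diff(g, axis=0)).ravel(),
                            np.abs(np.diff(g, axis=1)).ravel()])
    return float(np.median(diffs))

def overall_strength(noise_level, max_strength=1.0, scale=20.0):
    """Map a noise level to an overall noise-reduction strength in
    [0, max_strength]. The linear mapping and the scale constant are
    assumed heuristics."""
    return min(max_strength, noise_level / scale)
```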
  • the noise reduction intensity determination module 520 is specifically used to:
  • the brightness-based target noise reduction intensity is determined according to the brightness of the target pixel in each target sub-region, the average brightness, the maximum brightness, and the minimum brightness.
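The patent does not give the formula relating these statistics to the strength; one plausible sketch is below, where pixels darker than the skin-region average receive extra noise reduction, scaled by the region's brightness range. The linear form and the base/gain constants are illustrative assumptions.

```python
def brightness_strength(pixel_luma, avg, lo, hi, base=0.5, gain=0.5):
    """Per-pixel noise-reduction strength from brightness statistics.

    avg, lo, hi are the average, minimum, and maximum brightness of
    the skin color area. Darker pixels (further below avg, relative
    to the hi - lo range) get a stronger strength, capped at 1.0.
    This mapping is an assumption; the patent states only that the
    strength depends on these statistics.
    """
    if hi <= lo:
        return base  # degenerate range: fall back to the base strength
    deficit = max(0.0, (avg - pixel_luma) / (hi - lo))
    return min(1.0, base + gain * deficit)
```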
  • a color information determination module is also included, which is used to:
  • acquire the color of the skin color region, and determine the first color mean, the maximum color value, and the minimum color value of the skin color region in the target image according to the color;
  • a color-based target noise reduction intensity is determined according to the color of the target skin color area, the first color mean, the maximum color value, and the minimum color value.
  • the device further includes:
  • the weighting operation module is used to: after the color-based target noise reduction intensity is determined according to the color of the target skin color area, the first color mean, the maximum color value, and the minimum color value, determine the weighted calculation result of the brightness-based target noise reduction intensity and the color-based target noise reduction intensity, and use the weighted calculation result as the target noise reduction intensity.
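The weighting operation itself reduces to a convex combination of the two strengths. The 50/50 default weight below is an assumption; the patent specifies only that a weighted result of the brightness-based and color-based intensities is used.

```python
def combined_strength(brightness_s, color_s, w_brightness=0.5):
    """Weighted combination of the brightness-based and color-based
    target noise-reduction strengths. w_brightness is the weight of
    the brightness-based term; its default value is an assumption."""
    w = min(max(w_brightness, 0.0), 1.0)
    return w * brightness_s + (1.0 - w) * color_s
```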
  • an image mixing module is also included.
  • the image mixing module is used to:
  • a mixing weight is determined according to the noise level, and the target image and the noise-reduced target image are mixed based on the mixing weight, and the mixed-processed target image is output.
  • Embodiments of the present application also provide a storage medium containing computer-executable instructions, which when executed by a computer processor are used to perform an image noise reduction method, the method includes:
  • upon detecting that a local noise reduction event is triggered, the brightness information of the skin color area in the target image is determined;
  • the brightness information includes the average brightness, the maximum brightness, and the minimum brightness.
  • The storage medium may be any of various types of memory devices or storage devices.
  • The term storage media is intended to include: installation media such as CD-ROMs, floppy disks, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory or magnetic media (e.g., hard disks or optical storage); and registers or other similar types of memory elements.
  • the storage medium may also include other types of memory or a combination thereof.
  • the storage medium may be located in the first computer system in which the program is executed, or may be located in a different second computer system that is connected to the first computer system through a network such as the Internet.
  • the second computer system may provide program instructions to the first computer for execution.
  • the storage medium may include two or more storage media that may reside in different locations (e.g., in different computer systems connected through a network).
  • the storage medium may store program instructions executable by one or more processors (eg, embodied as a computer program).
  • a storage medium containing computer-executable instructions provided by the embodiments of the present application is not limited to performing the image noise reduction operations described above, and can also perform related operations in the image noise reduction method provided by any embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • the terminal includes a memory 610 and a processor 620.
  • the memory 610 is used to store a computer program; the processor 620 reads and executes the computer program stored in the memory 610.
  • the processor 620 implements the following steps when executing the computer program: determining the brightness information of the skin color area in the target image; determining a target sub-area in the skin color area whose brightness is lower than a preset brightness threshold, and determining a target noise reduction intensity according to the brightness of the target sub-area and the brightness information; and performing noise reduction processing on the target sub-area based on the target noise reduction intensity to obtain a noise-reduced target image.
  • the memory and processor listed in the above examples are all components of the terminal, and the terminal may also include other components.
  • The following takes a smart phone as an example to illustrate the possible structure of the above terminal.
  • the smart phone may include: a memory 701, a central processing unit (CPU) 702 (also called a processor, hereinafter referred to as CPU), a peripheral interface 703, an RF (Radio Frequency) circuit 705, an audio circuit 706, a speaker 711, a touch screen 712, a power management chip 708, an input/output (I/O) subsystem 709, other input/control devices 710, and an external port 704; these components communicate through one or more communication buses or signal lines 707.
  • the illustrated smartphone 700 is only an example of a terminal; the smartphone 700 may have more or fewer parts than shown in the figure, may combine two or more parts, or may have a different configuration of parts.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the smart phone integrated with the image noise reduction device provided in this embodiment will be described in detail below.
  • Memory 701 which can be accessed by CPU 702, peripheral interface 703, etc.
  • the memory 701 can include high-speed random access memory, and can also include non-volatile memory, such as one or more disk storage devices, flash memory devices , Or other volatile solid-state storage devices.
  • Peripheral interface 703, which can connect input and output peripherals of the device to CPU 702 and memory 701.
  • the I/O subsystem 709 which can connect input and output peripherals on the device, such as touch screen 712 and other input/control devices 710, to peripheral interface 703.
  • the I/O subsystem 709 may include a display controller 7091 and one or more input controllers 7092 for controlling other input/control devices 710.
  • one or more input controllers 7092 receive electrical signals from, or send electrical signals to, other input/control devices 710, which may include physical buttons (push buttons, rocker buttons, etc.), dial pads, slide switches, joysticks, and click wheels.
  • the input controller 7092 can be connected to any of the following: a keyboard, an infrared port, a USB interface, and a pointing device such as a mouse.
  • a touch screen 712 which is an input interface and an output interface between the user terminal and the user, and displays visual output to the user, and the visual output may include graphics, text, icons, video, and the like.
  • the display controller 7091 in the I/O subsystem 709 receives electrical signals from the touch screen 712 or sends electrical signals to the touch screen 712.
  • the touch screen 712 detects contact on the touch screen, and the display controller 7091 converts the detected contact into interaction with the user interface objects displayed on the touch screen 712, thereby realizing human-computer interaction; the user interface objects displayed on the touch screen 712 may be icons of running games, icons for connecting to the corresponding network, and the like.
  • the device may also include a light mouse, which is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by a touch screen.
  • the RF circuit 705 is mainly used to establish communication between the mobile phone and the wireless network (that is, the network side) and to send and receive data between them, for example short messages and e-mail. Specifically, the RF circuit 705 receives and transmits RF signals, also called electromagnetic signals: it converts electrical signals into electromagnetic signals or electromagnetic signals into electrical signals, and communicates with the communication network and other devices through these electromagnetic signals.
  • the RF circuit 705 may include known circuits for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC (COder-DECoder) chipset, a subscriber identity module (SIM), and so on.
  • the audio circuit 706 is mainly used to receive audio data from the peripheral interface 703, convert the audio data into electrical signals, and send the electrical signals to the speaker 711.
  • the speaker 711 is used to restore the voice signal received by the mobile phone from the wireless network through the RF circuit 705 to a sound and play the sound to the user.
  • the power management chip 708 is used for power supply and power management for the hardware connected to the CPU 702, the I/O subsystem, and the peripheral interface.
  • the terminal provided in the embodiment of the present application determines the target sub-region to be subjected to noise reduction based on brightness, determines the noise reduction intensity according to the brightness of each pixel in the target sub-region and the maximum and minimum brightness values of the skin color area, and performs noise reduction on the corresponding pixels based on that intensity, achieving brightness-based local noise reduction of the skin color area so that its noise distribution is more uniform.
  • the image noise reduction device, storage medium, and terminal provided in the above embodiments can execute the image noise reduction method provided in any embodiment of the present application, and have corresponding function modules and beneficial effects for performing the method.
  • For technical details not exhaustively described in the above embodiments, reference may be made to the image noise reduction method provided in any embodiment of the present application.

Abstract

The invention relates to an image noise reduction method and apparatus, a storage medium, and a terminal. The method comprises: determining brightness information of a skin color area in a target image; determining, in the skin color area, a target sub-area whose brightness is lower than a preset brightness threshold, and determining a target noise reduction intensity according to the brightness of the target sub-area and the brightness information; and performing, based on the target noise reduction intensity, noise reduction processing on the target sub-area to obtain a noise-reduced target image. With this technical solution, by determining, based on brightness, the target sub-area to be subjected to noise reduction processing, determining the noise reduction intensity according to the brightness of each pixel in the target sub-area and the brightness information of the skin color area, and performing noise reduction processing on the corresponding pixels based on that intensity, a brightness-based local noise reduction effect on the skin color area is achieved, so that noise in the skin color area is distributed more uniformly.
PCT/CN2020/070337 2019-01-04 2020-01-03 Procédé et appareil de débruitage d'image, support d'enregistrement et terminal WO2020140986A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910008658.X 2019-01-04
CN201910008658.XA CN109639982B (zh) 2019-01-04 2019-01-04 一种图像降噪方法、装置、存储介质及终端

Publications (1)

Publication Number Publication Date
WO2020140986A1 true WO2020140986A1 (fr) 2020-07-09

Family

ID=66057927

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/070337 WO2020140986A1 (fr) 2019-01-04 2020-01-03 Procédé et appareil de débruitage d'image, support d'enregistrement et terminal

Country Status (2)

Country Link
CN (1) CN109639982B (fr)
WO (1) WO2020140986A1 (fr)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111881789A (zh) * 2020-07-14 2020-11-03 深圳数联天下智能科技有限公司 肤色识别方法、装置、计算设备及计算机存储介质
CN111950390A (zh) * 2020-07-22 2020-11-17 深圳数联天下智能科技有限公司 皮肤敏感度的确定方法及装置、存储介质及设备
CN112686800A (zh) * 2020-12-29 2021-04-20 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质
CN112861781A (zh) * 2021-03-06 2021-05-28 同辉电子科技股份有限公司 一种智慧照明的子像素排列方式
CN113781330A (zh) * 2021-08-23 2021-12-10 北京旷视科技有限公司 图像处理方法、装置及电子系统
CN114936981A (zh) * 2022-06-10 2022-08-23 重庆尚优科技有限公司 一种基于云平台的场所扫码登记系统
CN116757966A (zh) * 2023-08-17 2023-09-15 中科方寸知微(南京)科技有限公司 基于多层级曲率监督的图像增强方法及系统
CN111950390B (zh) * 2020-07-22 2024-04-26 深圳数联天下智能科技有限公司 皮肤敏感度的确定方法及装置、存储介质及设备

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109639982B (zh) * 2019-01-04 2020-06-30 Oppo广东移动通信有限公司 一种图像降噪方法、装置、存储介质及终端
CN110399802B (zh) * 2019-06-28 2022-03-11 北京字节跳动网络技术有限公司 处理面部图像眼睛亮度的方法、装置、介质和电子设备
CN112417930B (zh) * 2019-08-23 2023-10-13 深圳市优必选科技股份有限公司 一种处理图像的方法及机器人
CN110689496B (zh) * 2019-09-25 2022-10-14 北京迈格威科技有限公司 降噪模型的确定方法、装置、电子设备和计算机存储介质
CN112785533B (zh) * 2019-11-07 2023-06-16 RealMe重庆移动通信有限公司 图像融合方法、图像融合装置、电子设备与存储介质
CN111274952B (zh) * 2020-01-20 2021-02-05 新疆爱华盈通信息技术有限公司 背光人脸图像处理方法、人脸识别方法
CN111507358B (zh) * 2020-04-01 2023-05-16 浙江大华技术股份有限公司 一种人脸图像的处理方法、装置、设备和介质
CN111507923B (zh) * 2020-04-21 2023-09-12 浙江大华技术股份有限公司 一种视频图像的噪声处理方法、装置、设备和介质
CN111476741B (zh) * 2020-04-28 2024-02-02 北京金山云网络技术有限公司 图像的去噪方法、装置、电子设备和计算机可读介质
CN111928947B (zh) * 2020-07-22 2021-08-31 广州朗国电子科技有限公司 基于低精度人脸测温仪的额头温测量方法、装置及测温仪
CN111861942A (zh) * 2020-07-31 2020-10-30 深圳市慧鲤科技有限公司 一种降噪方法及装置、电子设备和存储介质
CN112562034B (zh) * 2020-12-25 2022-07-01 咪咕文化科技有限公司 一种图像生成方法、装置、电子设备和存储介质
CN113610723B (zh) * 2021-08-03 2022-09-13 展讯通信(上海)有限公司 图像处理方法及相关装置

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006018465A (ja) * 2004-06-30 2006-01-19 Canon Inc 画像処理方法および画像処理装置およびコンピュータプログラムおよび記憶媒体
US20100026831A1 (en) * 2008-07-30 2010-02-04 Fotonation Ireland Limited Automatic face and skin beautification using face detection
US20120128376A1 (en) * 2010-11-23 2012-05-24 Han Henry Sun PMD-insensitive method of chromatic dispersion estimation for a coherent receiver
CN105447827A (zh) * 2015-11-18 2016-03-30 广东欧珀移动通信有限公司 图像降噪方法和系统
CN107424125A (zh) * 2017-04-14 2017-12-01 深圳市金立通信设备有限公司 一种图像虚化方法及终端
CN108230270A (zh) * 2017-12-28 2018-06-29 努比亚技术有限公司 一种降噪方法、终端及计算机可读存储介质
CN108391111A (zh) * 2018-02-27 2018-08-10 深圳Tcl新技术有限公司 图像清晰度调节方法、显示装置及计算机可读存储介质
CN109639982A (zh) * 2019-01-04 2019-04-16 Oppo广东移动通信有限公司 一种图像降噪方法、装置、存储介质及终端

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4653059B2 (ja) * 2006-11-10 2011-03-16 オリンパス株式会社 撮像システム、画像処理プログラム
CN103428409B (zh) * 2012-05-15 2017-08-04 深圳中兴力维技术有限公司 一种基于固定场景的视频降噪处理方法及装置
CN105005973B (zh) * 2015-06-30 2018-04-03 广东欧珀移动通信有限公司 一种图像快速去噪的方法及装置
CN106303157B (zh) * 2016-08-31 2020-07-14 广州市百果园网络科技有限公司 一种视频降噪处理方法及视频降噪处理装置
CN106600556A (zh) * 2016-12-16 2017-04-26 合网络技术(北京)有限公司 图像处理方法及装置
CN107808404A (zh) * 2017-09-08 2018-03-16 广州视源电子科技股份有限公司 图像处理方法、系统、可读存储介质及移动摄像设备
CN108989678B (zh) * 2018-07-27 2021-03-23 维沃移动通信有限公司 一种图像处理方法、移动终端

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006018465A (ja) * 2004-06-30 2006-01-19 Canon Inc 画像処理方法および画像処理装置およびコンピュータプログラムおよび記憶媒体
US20100026831A1 (en) * 2008-07-30 2010-02-04 Fotonation Ireland Limited Automatic face and skin beautification using face detection
US20120128376A1 (en) * 2010-11-23 2012-05-24 Han Henry Sun PMD-insensitive method of chromatic dispersion estimation for a coherent receiver
CN105447827A (zh) * 2015-11-18 2016-03-30 广东欧珀移动通信有限公司 图像降噪方法和系统
CN107424125A (zh) * 2017-04-14 2017-12-01 深圳市金立通信设备有限公司 一种图像虚化方法及终端
CN108230270A (zh) * 2017-12-28 2018-06-29 努比亚技术有限公司 一种降噪方法、终端及计算机可读存储介质
CN108391111A (zh) * 2018-02-27 2018-08-10 深圳Tcl新技术有限公司 图像清晰度调节方法、显示装置及计算机可读存储介质
CN109639982A (zh) * 2019-01-04 2019-04-16 Oppo广东移动通信有限公司 一种图像降噪方法、装置、存储介质及终端

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111881789A (zh) * 2020-07-14 2020-11-03 深圳数联天下智能科技有限公司 肤色识别方法、装置、计算设备及计算机存储介质
CN111950390A (zh) * 2020-07-22 2020-11-17 深圳数联天下智能科技有限公司 皮肤敏感度的确定方法及装置、存储介质及设备
CN111950390B (zh) * 2020-07-22 2024-04-26 深圳数联天下智能科技有限公司 皮肤敏感度的确定方法及装置、存储介质及设备
CN112686800A (zh) * 2020-12-29 2021-04-20 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质
CN112686800B (zh) * 2020-12-29 2023-07-07 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质
CN112861781A (zh) * 2021-03-06 2021-05-28 同辉电子科技股份有限公司 一种智慧照明的子像素排列方式
CN113781330A (zh) * 2021-08-23 2021-12-10 北京旷视科技有限公司 图像处理方法、装置及电子系统
CN114936981A (zh) * 2022-06-10 2022-08-23 重庆尚优科技有限公司 一种基于云平台的场所扫码登记系统
CN114936981B (zh) * 2022-06-10 2023-07-07 重庆尚优科技有限公司 一种基于云平台的场所扫码登记系统
CN116757966A (zh) * 2023-08-17 2023-09-15 中科方寸知微(南京)科技有限公司 基于多层级曲率监督的图像增强方法及系统

Also Published As

Publication number Publication date
CN109639982B (zh) 2020-06-30
CN109639982A (zh) 2019-04-16

Similar Documents

Publication Publication Date Title
WO2020140986A1 (fr) Procédé et appareil de débruitage d'image, support d'enregistrement et terminal
CN109272459B (zh) 图像处理方法、装置、存储介质及电子设备
US11443462B2 (en) Method and apparatus for generating cartoon face image, and computer storage medium
CN111418201B (zh) 一种拍摄方法及设备
CN109741280B (zh) 图像处理方法、装置、存储介质及电子设备
CN109146814B (zh) 图像处理方法、装置、存储介质及电子设备
JP7226851B2 (ja) 画像処理の方法および装置並びにデバイス
CN109961453B (zh) 一种图像处理方法、装置与设备
US9621741B2 (en) Techniques for context and performance adaptive processing in ultra low-power computer vision systems
CN109618098B (zh) 一种人像面部调整方法、装置、存储介质及终端
US11138695B2 (en) Method and device for video processing, electronic device, and storage medium
CN110100251B (zh) 用于处理文档的设备、方法和计算机可读存储介质
WO2021057277A1 (fr) Procédé de photographie dans une lumière sombre et dispositif électronique
CN109714582B (zh) 白平衡调整方法、装置、存储介质及终端
CN112887582A (zh) 一种图像色彩处理方法、装置及相关设备
EP4156082A1 (fr) Procédé et appareil de transformation d'image
WO2022121893A1 (fr) Procédé et appareil de traitement d'image, ainsi que dispositif informatique et support de stockage
WO2023056950A1 (fr) Procédé de traitement d'image et dispositif électronique
WO2022052862A1 (fr) Procédé de traitement d'accentuation des contours d'image, et son application
CN112950499B (zh) 图像处理方法、装置、电子设备及存储介质
CN109672829B (zh) 图像亮度的调整方法、装置、存储介质及终端
WO2022111269A1 (fr) Procédé et dispositif pour améliorer des détails vidéo, terminal mobile et support de stockage
CN113486714B (zh) 一种图像的处理方法及电子设备
RU2794062C2 (ru) Устройство и способ обработки изображения и оборудование
RU2791810C2 (ru) Способ, аппаратура и устройство для обработки и изображения

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20736041

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20736041

Country of ref document: EP

Kind code of ref document: A1