WO2023226584A1 - Image noise reduction and filtering data processing method, apparatus, and computer device - Google Patents

Image noise reduction and filtering data processing method, apparatus, and computer device

Info

Publication number
WO2023226584A1
WO2023226584A1 · PCT/CN2023/084674 · CN2023084674W
Authority
WO
WIPO (PCT)
Prior art keywords
image
processed
filter coefficient
pixel
target
Prior art date
Application number
PCT/CN2023/084674
Other languages
English (en)
French (fr)
Inventor
彭程威
易阳
周易
余晓铭
徐怡廷
李峰
左小祥
Original Assignee
腾讯科技(深圳)有限公司
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2023226584A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/90 Determination of colour characteristics

Definitions

  • the present application relates to the technical field of image processing, and in particular to an image noise reduction method, device, computer equipment, storage medium and computer program product, and a filtering data processing method, device, computer equipment, storage medium and computer program product.
  • Image noise reduction technology can reduce image noise caused by hardware errors and unstable light sources.
  • When images are collected, various kinds of noise are mixed into the collected image sequence, so image noise reduction is required.
  • To improve the visual effect of video images, improve their compression efficiency, or save bandwidth, it is necessary to reduce noise in video images.
  • An image noise reduction method, device, computer device, computer-readable storage medium and computer program product are provided, as well as a filtering data processing method, device, computer device, computer-readable storage medium and computer program product.
  • The present application provides an image noise reduction method, executed by a computer device, including: acquiring an image to be processed and a reference image; determining the motion intensity of a block to be processed in the image to be processed according to the reference image, where the block to be processed is obtained by dividing the image to be processed; and obtaining filter coefficient description information matching the motion intensity, where the filter coefficient description information is used to describe the correspondence between pixel difference values and filter coefficient representation values.
  • The filter coefficient representation value is negatively correlated with the pixel difference value, and the degree of change of the filter coefficient representation value is positively correlated with the motion intensity. The method further includes: obtaining the target pixel difference value between a pixel in the block to be processed and the pixel at the corresponding position in the reference image; determining, based on the filter coefficient description information, the target filter coefficient representation value corresponding to the target pixel difference value; and determining, based on the target filter coefficient representation value, the target noise reduction image corresponding to the image to be processed.
  • this application also provides an image noise reduction device.
  • The device includes: an image acquisition module for acquiring an image to be processed and a reference image; a motion intensity determination module for determining the motion intensity of a block to be processed in the image to be processed according to the reference image, where the block to be processed is obtained by dividing the image to be processed; and a description information acquisition module for obtaining filter coefficient description information that matches the motion intensity, where the filter coefficient description information is used to describe the correspondence between pixel difference values and filter coefficient representation values.
  • The filter coefficient representation value is negatively correlated with the pixel difference value, and the degree of change of the filter coefficient representation value is positively correlated with the motion intensity;
  • a filter coefficient determination module is used to obtain the target pixel difference value between a pixel in the block to be processed and the pixel at the corresponding position in the reference image, and to determine, based on the filter coefficient description information, the target filter coefficient representation value corresponding to the target pixel difference value; and
  • a noise reduction image determination module configured to determine the target noise reduction image corresponding to the image to be processed based on the target filter coefficient representation value.
  • this application also provides a computer device.
  • the computer device includes a memory and a processor.
  • the memory stores a computer program.
  • When the processor executes the computer program, the steps of the above image noise reduction method are implemented.
  • this application also provides a computer-readable storage medium.
  • the computer readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the steps of the above image noise reduction method are implemented.
  • The computer program product includes a computer program which, when executed by a processor, implements the steps of the above image noise reduction method.
  • This application provides a filtering data processing method, executed by a computer device, including: determining multiple pieces of reference motion intensity information and determining the pixel difference distribution range; determining, based on each piece of reference motion intensity information, the filter coefficient representation values under multiple pixel difference values within the pixel difference distribution range; establishing the correspondence between each filter coefficient representation value and the corresponding pixel difference value;
  • and forming filter coefficient description information from the correspondences whose filter coefficient representation values were determined based on the same reference motion intensity information, thereby obtaining the filter coefficient description information corresponding to each piece of reference motion intensity information, where the filter coefficient description information is used to perform time domain filtering on the image to be processed.
  • this application also provides a filtering data processing device.
  • The device includes: a reference motion intensity determination module, used to determine multiple pieces of reference motion intensity information and determine the pixel difference distribution range; a representation value determination module, used to determine, based on each piece of reference motion intensity information, the filter coefficient representation values under multiple pixel difference values within the pixel difference distribution range; a correspondence relationship establishment module, used to establish the correspondence between each filter coefficient representation value and the corresponding pixel difference value; and a description information determination module, used to form filter coefficient description information from the correspondences whose filter coefficient representation values were determined based on the same reference motion intensity information, so as to obtain the filter coefficient description information corresponding to each piece of reference motion intensity information, where the filter coefficient description information is used to perform time domain filtering on the image to be processed.
  • this application also provides a computer device.
  • the computer device includes a memory and a processor.
  • the memory stores a computer program.
  • When the processor executes the computer program, the steps of the above filtering data processing method are implemented.
  • this application also provides a computer-readable storage medium.
  • The computer-readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the steps of the above filtering data processing method are implemented.
  • The computer program product includes a computer program that implements the steps of the above filtering data processing method when executed by a processor.
  • Figure 1 is an application environment diagram of the image noise reduction method and the filtered data processing method in one embodiment
  • Figure 2 is a schematic flow chart of an image noise reduction method in one embodiment
  • Figure 3 is a schematic interface diagram of an image noise reduction algorithm applied to a video conferencing application in one embodiment
  • Figure 4 is a schematic diagram of dividing an image to be processed in one embodiment
  • Figure 5 is a schematic diagram of a filter function in an embodiment
  • Figure 6 is a schematic flowchart of a specific image noise reduction method in another embodiment
  • Figure 7 is an application architecture diagram of the image noise reduction method in one embodiment
  • Figure 8 is a schematic flowchart of a filtering data processing method in one embodiment
  • Figure 9 is a schematic diagram of the effect of the image noise reduction method in one embodiment
  • Figure 10 is a schematic diagram of the effect of the image noise reduction method in another embodiment
  • Figure 11 is a structural block diagram of an image noise reduction device in one embodiment
  • Figure 12 is a structural block diagram of a filtered data processing device in one embodiment
  • Figure 13 is an internal structure diagram of a computer device in one embodiment
  • Figure 14 is an internal structure diagram of a computer device in one embodiment.
  • the image noise reduction method and filtering data processing method provided by the embodiments of this application can be applied in the application environment as shown in Figure 1.
  • the terminal 102 communicates with the server 104 through the network.
  • the data storage system can store data that needs to be processed by the server 104, for example, it can store target noise reduction images obtained by noise reduction.
  • the data storage system can be integrated on the server 104, or placed on the cloud or other servers.
  • The server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN, big data and artificial intelligence platforms.
  • the terminal can be a smartphone, a tablet, a laptop, a desktop computer, an intelligent voice interaction device, a smart home appliance, a vehicle-mounted terminal, an aircraft, etc., but is not limited to this.
  • the terminal and the server can be connected directly or indirectly through wired or wireless communication methods, which is not limited in this application.
  • the image noise reduction method and filtering data processing method provided by the embodiment of the present application can be executed by the terminal 102, can also be executed by the server 104, or can be executed by the terminal 102 and the server 104 in collaboration.
  • the terminal 102 can collect the image to be processed and send the image to be processed to the server.
  • The server further obtains the reference image corresponding to the image to be processed, determines the motion intensity of the block to be processed in the image to be processed based on the reference image, and obtains the filter coefficient description information matching the motion intensity.
  • The filter coefficient description information is used to describe the correspondence between pixel difference values and filter coefficient representation values. The server obtains the target pixel difference value between a pixel in the block to be processed and the pixel at the corresponding position in the reference image, determines the target filter coefficient representation value corresponding to the target pixel difference value based on the filter coefficient description information, and determines the target noise reduction image corresponding to the image to be processed based on the target filter coefficient representation value. Finally, the server 104 can return the target noise reduction image to the terminal 102.
  • an image noise reduction method is provided, which is executed by a computer device.
  • The computer device can be the terminal 102 in Figure 1, or the server 104 in Figure 1, or a system composed of the terminal 102 and the server 104.
  • the image noise reduction method includes the following steps:
  • Step 202 Obtain the image to be processed and the reference image.
  • the image to be processed refers to the image that needs to be denoised.
  • the image to be processed may be an image collected by a computer device, or an image received by the computer device through a network and collected by other computer devices.
  • the image to be processed can be a video frame in a video.
  • the reference image refers to the image used as a reference for temporal filtering of the image to be processed.
  • the reference image and the image to be processed may include the same foreground object.
  • the reference image may be an image obtained through noise reduction processing.
  • the reference image may be an image obtained by denoising the forward video frame of the image to be processed.
  • the noise reduction process here can be time domain filtering or spatial domain filtering. In other embodiments, the noise reduction process may also be performed by performing time domain filtering first and then performing spatial domain filtering.
  • the computer device can obtain one or more reference images corresponding to the image to be processed, and then can perform filtering processing on the image to be processed based on the reference images, thereby achieving noise reduction on the image to be processed.
  • the plurality mentioned in the embodiments of this application refers to at least two.
  • Obtaining the image to be processed and the reference image includes: determining the target video to be denoised; using a video frame in the target video as the image to be processed, and determining a target video frame from the forward video frames corresponding to the image to be processed; and obtaining the target noise reduction image corresponding to the target video frame, and determining the target noise reduction image corresponding to the target video frame as the reference image corresponding to the image to be processed.
  • the target video to be denoised refers to the video that needs to be denoised.
  • The forward video frame corresponding to the image to be processed refers to a video frame whose time point in the target video is earlier than that of the image to be processed. For example, if the image to be processed is the video frame at the 10th second of the target video, then the video frames before the 10th second are forward video frames of the image to be processed.
  • the target noise reduction image corresponding to the target video frame refers to the target noise reduction image obtained when the target video frame is used as the image to be processed. In the embodiment of the present application, recursive filtering can be used for noise reduction.
  • The target noise reduction image obtained for each video frame using the image noise reduction method of the present application can be saved and used as a reference image for the backward video frames of that video frame. It can be understood that a backward video frame of a certain video frame is a video frame whose time point is after this video frame.
  • The computer device can sequentially take each video frame after the first frame in the target video as an image to be processed. For each image to be processed, the computer device can determine one or more target video frames from the forward video frames corresponding to the image to be processed, obtain the target noise reduction images corresponding to those target video frames, and use the target noise reduction images as the reference images corresponding to the image to be processed.
  • a video application can be installed on the computer device.
  • the video application can be used to shoot videos or play and edit videos.
  • The video application can be, for example, a live streaming application, a video editing application, a video surveillance application, a video conferencing application, and so on.
  • The computer device can use these video frames as images to be processed in sequence and obtain a reference image for each image to be processed, so that the target noise reduction image of each image to be processed can be obtained through the image noise reduction method provided by this application, improving the visual quality of the video and providing a better user experience.
  • The application here can refer to a client installed in the terminal, where a client (also called an application client or APP client) is a program installed and run in the terminal. The application can also refer to an installation-free application, that is, an application that can be used without downloading and installing it; such applications are also commonly known as applets and usually run in a client as subprograms. The application can also refer to a web application opened through a browser, and so on.
  • Referring to FIG. 3, it is a schematic interface diagram of the image noise reduction algorithm of the present application applied to a video conferencing application.
  • The video conferencing application can use the image noise reduction method provided by this application to perform noise reduction processing on each video frame in the conference video generated for the conference scene.
  • When selecting the target video frames, the computer device can use one video frame, or a preset number of video frames, adjacent to the image to be processed among the forward video frames of the image to be processed as the target video frames, where the preset number is less than a preset threshold.
  • The preset number can be, for example, 2.
  • For example, if the image to be processed is the fifth video frame in the target video, the third video frame and the fourth video frame can be used as the target video frames, and the target noise reduction images of the third video frame and the fourth video frame are obtained respectively as the reference images of the fifth video frame.
  • Step 204 Determine the motion intensity of the block to be processed in the image to be processed based on the reference image; the block to be processed is obtained by dividing the image to be processed.
  • the block to be processed refers to the image block obtained by dividing the image to be processed. Multiple image blocks can be obtained by dividing the image to be processed, and each image block can be used as a block to be processed.
  • the motion intensity can represent the amount of movement of the block to be processed relative to the reference image. The motion intensity is negatively correlated with the noise intensity of the block to be processed relative to the reference image, and is positively correlated with the degree of difference of the block to be processed relative to the reference image.
  • Noise information presents a Gaussian distribution in the time domain, so noise appearing at a certain position in the current frame of a video may not exist at the same position in the forward frames, or its intensity there may be weak.
  • Therefore, temporal filtering can fuse the current frame with forward frames, for example as 0.05 * the frame two frames earlier + 0.05 * the previous frame + 0.9 * the current frame, which can alleviate the smear problem.
  • However, the motion intensity of different areas in a video frame is often very different; for example, in the picture at a certain moment, only the characters are moving while the background is still. In this case, if the motion intensity is calculated for the entire frame, a weaker temporal filtering intensity will be selected, resulting in poor noise reduction in the background area.
  • Therefore, in the embodiment of the present application, the image to be processed can be divided to obtain multiple image blocks, each image block is regarded as a block to be processed, and the motion intensity of each block to be processed is estimated based on the reference image, so that different temporal filters can be used in different image blocks.
  • In this way, when a character moves, the fixed background can still have a good noise reduction effect.
  • the computer device may use a uniform dividing method to evenly divide the image to be processed to obtain multiple blocks to be processed. Specifically, if the height and width of the image to be processed are H and W respectively, and the entire image is divided into m rows and n columns, then the height and width of each block are H/m and W/n respectively.
  • Referring to Figure 4, it is a schematic diagram of dividing an image to be processed in a specific embodiment, in which each small grid is a block to be processed. It can be understood that in other embodiments, the computer device can also perform non-uniform or overlapping division of the image to be processed; this application does not limit the specific division method.
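  • As an illustration of the uniform division described above, a minimal sketch follows (it assumes a single luma plane and that H and W are divisible by m and n; the function name and the NumPy formulation are illustrative, not part of the patent):

```python
import numpy as np

def split_into_blocks(y: np.ndarray, m: int, n: int):
    """Evenly split an H x W plane into m rows x n columns of blocks.

    Assumes H is divisible by m and W by n, matching the uniform division
    described above; a real implementation would pad or handle remainders.
    """
    H, W = y.shape
    bh, bw = H // m, W // n          # each block is (H/m) x (W/n)
    blocks = []
    for r in range(m):
        for c in range(n):
            blocks.append(y[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw])
    return blocks, (bh, bw)
```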
  • When there are multiple reference images, for each block to be processed in the image to be processed, the server performs a motion intensity calculation based on each reference image, so that multiple motion intensities can be obtained for each block to be processed.
  • Step 206 Obtain filter coefficient description information that matches the motion intensity.
  • The filter coefficient description information is used to describe the correspondence between pixel difference values and filter coefficient representation values, where the filter coefficient representation value is negatively correlated with the pixel difference value, and the degree of change of the filter coefficient representation value is positively correlated with the motion intensity.
  • the filter coefficient representation value refers to the value that characterizes the filter coefficient.
  • the filter coefficient representation value can be the filter coefficient value or the value obtained by performing a certain calculation on the filter coefficient value.
  • Time domain filtering can be expressed by formula (1): Y_out(t)(i,j) = k * Y_out(t-1)(i,j) + (1 - k) * Y(i,j). The formula represents that the time domain filtering result at the current moment equals the weighted fusion of the input value at the current moment and the time domain filtering result at the previous moment.
  • The weighting coefficient k ranges from 0 to 1 and has the following relationship: the larger the pixel difference between the current input Y(i,j) and the previous time domain filtering result Y_out(t-1)(i,j) (the pixel difference here refers to the absolute value of the difference between the two), the smaller k is; conversely, the smaller the pixel difference, the larger k is. That is, the filter coefficient k is negatively correlated with the difference between Y_out(t-1)(i,j) and Y(i,j).
  • In the embodiment of the present application, the correspondence between pixel difference values and filter coefficient representation values can be established and saved as the filter coefficient description information; the filter coefficient representation value is then determined based on the filter coefficient description information, and the time domain filtering result obtained by performing time domain filtering on the image to be processed is determined through the filter coefficient representation value.
  • The degree of change of the filter coefficient representation value differs between different pieces of filter coefficient description information, and the degree of change of the filter coefficient representation value is set to be positively correlated with the motion intensity.
  • Here, the degree of change of the filter coefficient representation value refers to how much the filter coefficient representation value changes as the pixel difference value changes. That is, the greater the motion intensity, the greater the change of the filter coefficient representation value,
  • and in this case the filter coefficient representation value decreases rapidly as the pixel difference value increases; conversely, the smaller the motion intensity, the smaller the change of the filter coefficient representation value, and in this case the filter coefficient representation value can weaken more slowly as the pixel difference value increases.
  • the filter coefficient description information may be a filter function, in which the filter coefficient value is the dependent variable and the pixel difference value is the independent variable.
  • For example, the filter function may be a decreasing exponential function of the filter coefficient with respect to the pixel difference value.
  • The computer device can divide the motion intensity into multiple intervals and set a corresponding filter function for each interval. The stronger the motion, the steeper the curve of the filter function, which avoids smear in motion areas; for image blocks with weak motion, a relatively gentle filter curve is used, so that a better noise reduction effect can be achieved on a relatively fixed background.
  • the filter function may be an exponential function. For example, refer to Figure 5, which is a schematic diagram of a filter function in one embodiment.
  • The motion intensity is divided into two intervals: motion intensities smaller than a preset motion intensity threshold are placed into the small motion intensity interval, and motion intensities greater than the threshold are placed into the large motion intensity interval.
  • For the small motion intensity interval, the designed filter function is shown as the dotted line in Figure 5.
  • For the large motion intensity interval, the designed filter function is shown as the solid line in Figure 5. It can be seen from Figure 5 that the function curve corresponding to the large motion intensity is steeper than the function curve corresponding to the small motion intensity.
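  • As a hedged illustration of such a pair of decreasing exponential filter functions, a minimal sketch follows (the functional form k = exp(-|diff| / sigma) and the sigma values are assumptions for illustration; the patent does not disclose its exact curves):

```python
import math

def filter_coefficient(pixel_diff: float, high_motion: bool) -> float:
    """Decreasing exponential filter function k(pixel_diff).

    A steeper curve (smaller sigma) is used for the large motion intensity
    interval and a gentler curve (larger sigma) for the small motion
    intensity interval; the sigma values are illustrative assumptions.
    """
    sigma = 8.0 if high_motion else 24.0
    return math.exp(-abs(pixel_diff) / sigma)   # k in (0, 1], decreasing in |diff|
```

  • Under these assumed curves, a pixel difference of 8 gives k of about 0.37 on the steep (large motion) curve but about 0.72 on the gentle (small motion) curve, so moving blocks rely less on the previous frame.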
  • a plurality of filter coefficient description information is pre-stored in the computer device, and each filter coefficient description information corresponds to a reference motion intensity information.
  • The computer device can determine, from the plurality of reference motion intensity information, the reference motion intensity information that matches the motion intensity of the block to be processed, and then obtain the filter coefficient description information corresponding to that reference motion intensity information as the filter coefficient description information matching the motion intensity.
  • When there are multiple reference images, multiple motion intensities can be obtained for each block to be processed.
  • For each of these motion intensities, the computer device needs to obtain the matching filter coefficient description information.
  • Step 208 Obtain the target pixel difference between the pixel in the block to be processed and the corresponding pixel in the reference image, and determine the target filter coefficient representation value corresponding to the target pixel difference based on the filter coefficient description information.
  • For each pixel in the block to be processed, the computer device can calculate the difference between the pixel and the pixel at the corresponding position in the reference image, use the absolute value of the difference as the target pixel difference value, and then determine, based on the filter coefficient description information, the target filter coefficient representation value corresponding to the target pixel difference value.
  • When there are multiple reference images, since each block to be processed corresponds to multiple pieces of filter coefficient description information, for the target pixel difference value of each pixel in the block to be processed, the computer device can determine the corresponding target filter coefficient representation value based on each piece of filter coefficient description information, so that multiple target filter coefficient representation values can be obtained for each pixel.
  • When the filter coefficient description information is a filter function, the computer device can substitute the target pixel difference value into the filter function, and the calculated function value is the target filter coefficient representation value corresponding to the target pixel difference value.
  • Step 210 Determine the target noise reduction image corresponding to the image to be processed based on the target filter coefficient representation value.
  • the computer device can determine an intermediate processed image obtained by performing time domain filtering on the image to be processed based on the target filter coefficient representation value corresponding to each pixel point, and then obtain the target noise reduction image corresponding to the image to be processed based on the intermediate processed image.
  • Specifically, the computer device can refer to the above formula (1) and perform time domain filtering on each pixel based on the target filter coefficient representation value corresponding to each pixel in the block to be processed, so as to obtain the time domain filtering result of each pixel. The time domain filtering result of each pixel is used as the current pixel value of that pixel in the image to be processed, and the intermediate processed image is obtained after the image to be processed is updated.
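  • As an illustration of this per-pixel weighted fusion, a minimal NumPy sketch follows (the exponential coefficient curve and its sigma values are assumptions carried over from the earlier sketch; the patent leaves the exact curve to the filter coefficient description information):

```python
import numpy as np

def temporal_filter_block(cur_block: np.ndarray,
                          ref_block: np.ndarray,
                          high_motion: bool) -> np.ndarray:
    """Apply formula (1) per pixel: Y_out = k * Y_ref + (1 - k) * Y_cur.

    k is taken from a decreasing exponential of the absolute pixel difference,
    so pixels that changed a lot keep mostly their current value (less smear),
    while unchanged pixels are blended strongly with the reference (less noise).
    """
    cur = cur_block.astype(np.float32)
    ref = ref_block.astype(np.float32)
    diff = np.abs(cur - ref)                          # target pixel difference
    sigma = 8.0 if high_motion else 24.0              # illustrative values
    k = np.exp(-diff / sigma)                         # filter coefficient
    out = k * ref + (1.0 - k) * cur                   # weighted fusion
    return np.clip(out, 0, 255).astype(cur_block.dtype)
```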
  • the computer device can directly use the intermediate processed image as the target noise reduction image corresponding to the image to be processed.
  • In other embodiments, the computer device can further apply spatial domain filtering to the intermediate processed image and use the image obtained by the spatial domain filtering process as the target noise reduction image.
  • The spatial filtering here is a smoothing method. Its principle is that the pixel values of natural images are relatively smooth and continuous in space, while the collected image is a natural image with noise added; the purpose of using spatial filtering is to eliminate the unsmooth noise and obtain a smooth natural image.
  • spatial filtering may use at least one of Gaussian filtering and bilateral filtering.
  • the computer device can obtain a trained deep learning model for spatial filtering, input the intermediate processed image into the deep learning model, and output the target denoised image through the deep learning model.
  • the deep learning model for spatial filtering can be trained using supervised training methods. The input samples during the training process are the original images without spatial filtering, and the training labels are the target images obtained through spatial filtering.
  • In the above image noise reduction method, the motion intensity of the block to be processed in the image to be processed is determined based on the reference image, the filter coefficient description information matching the motion intensity is obtained, the target pixel difference value between a pixel in the block to be processed and the pixel at the corresponding position in the reference image is obtained, the target filter coefficient representation value corresponding to the target pixel difference value is determined based on the filter coefficient description information, and the target noise reduction image corresponding to the image to be processed is determined based on the target filter coefficient representation value.
  • Since the block to be processed is obtained by dividing the image to be processed, different target filter coefficient representation values can be determined for different blocks to be processed in the image to be processed, so the target filter coefficient representation value can accurately match the motion of each area in the image. This avoids the problem of poor noise reduction in some areas caused by estimating the motion intensity of the entire image and improves the noise reduction effect on the image to be processed.
  • In addition, the filter coefficient description information is used to describe the correspondence between pixel difference values and filter coefficient representation values, where the filter coefficient representation value is negatively correlated with the pixel difference value and the degree of change of the filter coefficient representation value is positively correlated with the motion intensity; therefore, for each pixel, a target filter coefficient representation value matching that pixel can be obtained, further improving the noise reduction effect on the image to be processed.
  • The above method further includes: determining multiple pieces of reference motion intensity information and determining a pixel difference distribution range; determining, based on each piece of reference motion intensity information, the filter coefficient representation values under multiple pixel difference values within the pixel difference distribution range;
  • and forming the corresponding filter coefficient description information, so that each piece of reference motion intensity information corresponds to one piece of filter coefficient description information. Obtaining the filter coefficient description information that matches the motion intensity includes: determining, from the multiple pieces of reference motion intensity information, the target motion intensity information that matches the motion intensity, and determining the filter coefficient description information corresponding to the target motion intensity information as the filter coefficient description information matching the motion intensity.
  • The reference motion intensity information refers to information used as a reference to determine the filter coefficient description information.
  • The reference motion intensity information can be a motion intensity interval or a specific numerical value.
  • the pixel difference distribution range refers to the distribution range of all possible pixel difference values, and the pixel difference distribution range can be [0, 255].
  • Specifically, after the computer device determines multiple pieces of reference motion intensity information and determines the pixel difference distribution range, for each piece of reference motion intensity information it determines the filter coefficient representation values under multiple pixel difference values within the pixel difference distribution range, while ensuring that the filter coefficient representation value is negatively correlated with the pixel difference value and that the degree of change of the filter coefficient representation value is positively correlated with the motion intensity.
  • The computer device can then establish a correspondence between each filter coefficient representation value and its corresponding pixel difference value, and group together those correspondences whose filter coefficient representation values were determined based on the same reference motion intensity information.
  • The grouped correspondences constitute the filter coefficient description information corresponding to that reference motion intensity information, thereby obtaining the filter coefficient description information corresponding to each piece of reference motion intensity information.
  • The computer device can store the correspondence in the form of a table in which the index value is the pixel difference value; referring to Table 1, for example, when the target pixel difference value is 3, the filter coefficient representation value is 0.88.
  • The computer device can further establish a correspondence between each piece of reference motion intensity information and its filter coefficient description information. Based on this correspondence, when it is necessary to obtain the filter coefficient description information that matches the motion intensity of a block to be processed, the computer device determines, from the plurality of reference motion intensity information, the target motion intensity information matching that motion intensity, and determines the filter coefficient description information corresponding to the target motion intensity information as the filter coefficient description information matching the motion intensity.
  • The reference motion intensity information may be a specific reference motion intensity, that is, a specific numerical value. In that case, when the computer device determines the target motion intensity information that matches a certain motion intensity, it can determine, among these reference motion intensities, the reference motion intensity with the smallest difference from that motion intensity as the target reference motion intensity,
  • and determine the filter coefficient description information corresponding to the target reference motion intensity as the filter coefficient description information matching the motion intensity. For example, assume that the reference motion intensity information includes 10, 20 and 30; if the motion intensity of a certain block to be processed is 12, its corresponding reference motion intensity information is determined to be 10, and the filter coefficient description information corresponding to 10 is determined as the filter coefficient description information matching the motion intensity.
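  • A small sketch of this nearest-reference-intensity matching follows (the reference intensities 10, 20 and 30 come from the example above; the function itself is an illustrative assumption):

```python
def match_reference_intensity(motion_intensity: float,
                              reference_intensities=(10, 20, 30)) -> float:
    """Return the reference motion intensity closest to the measured one."""
    return min(reference_intensities,
               key=lambda ref: abs(ref - motion_intensity))

# e.g. match_reference_intensity(12) -> 10, whose filter coefficient
# description information is then used for the block.
```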
  • the computer device can set a larger number of reference motion intensities and establish a corresponding relationship between each filter coefficient description information and multiple reference motion intensities, thereby making the established correspondence more accurate.
  • For example, the computer device can set the reference motion intensities 10, 12, 14, 16, 18, 20, 22, 24, 26, 28 and 30, establish a correspondence between 10, 12, 14 and 16 and the first piece of filter coefficient description information, establish a correspondence between 18, 20, 22, 24 and 26 and the second piece of filter coefficient description information, and establish a correspondence between 28 and 30 and the third piece of filter coefficient description information.
  • In the above embodiment, since the filter coefficient description information describes the correspondence between pixel difference values and filter coefficient representation values under each piece of reference motion intensity information, the pixel difference value corresponding to a pixel in the image to be processed can be used directly as an index, and the corresponding filter coefficient representation value can be queried from the filter coefficient description information, which avoids obtaining the filter coefficient through complex calculations and improves the filtering efficiency.
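  • An illustrative lookup-table sketch of this index-based query follows (the curve used to fill the table is an assumption; it is chosen so that the entry at index 3 is approximately 0.88, matching the Table 1 example above):

```python
import numpy as np

# One table per piece of filter coefficient description information,
# indexed directly by the pixel difference value in [0, 255].
# The exponential curve is an illustrative assumption; with sigma = 24
# the entry at index 3 is about 0.88, as in the Table 1 example.
sigma = 24.0
filter_coefficient_table = np.exp(-np.arange(256) / sigma)

def query_filter_coefficient(pixel_diff: int) -> float:
    """Use the target pixel difference value directly as the table index."""
    return float(filter_coefficient_table[pixel_diff])
```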
  • Determining the filter coefficient representation values under multiple pixel difference values within the pixel difference distribution range based on each piece of reference motion intensity information includes: determining, based on each piece of reference motion intensity information, the target filter coefficients under multiple pixel difference values within the pixel difference distribution range, and multiplying each target filter coefficient by its corresponding pixel difference value to obtain the filter coefficient representation value under each pixel difference value.
  • Specifically, formula (1) can be rewritten as formula (2): Y_out(t)(i,j) = Y(i,j) + (Y_out(t-1)(i,j) - Y(i,j)) * k, where Y_out(t-1)(i,j) - Y(i,j) is the pixel difference value
  • and k is the filter coefficient. It can be seen from formula (2) that if (Y_out(t-1)(i,j) - Y(i,j)) * k is stored as the filter coefficient representation value, then the filtering result at the current moment can be calculated by simple addition and subtraction, which can further improve the filtering efficiency.
  • Specifically, the computer device can first determine the target filter coefficients under multiple pixel difference values within the pixel difference distribution range based on each piece of reference motion intensity information; when determining the target filter coefficients, it ensures that the resulting filter coefficient representation value is negatively correlated with the pixel difference value and that the degree of change of the filter coefficient representation value is positively correlated with the motion intensity.
  • After determining the target filter coefficients, each target filter coefficient is multiplied by its corresponding pixel difference value to obtain the filter coefficient representation value under each pixel difference value, that is, (Y_out(t-1)(i,j) - Y(i,j)) * k is used as the filter coefficient representation value.
  • For example, if the filter coefficients are those stored in Table 1,
  • the filter coefficient representation values obtained by multiplying each filter coefficient by its corresponding pixel difference value are shown in Table 2 below.
  • In Table 2 the index value is also the pixel difference value; for example, assuming that the target pixel difference value is 4, the filter coefficient representation value obtained by looking up the table is 3.36.
  • It should be noted that the pixel difference multiplied by the target filter coefficient here is the absolute value of the pixel difference, so whether the actual value of (Y_out(t-1)(i,j) - Y(i,j)) * k is positive or negative needs to be determined based on the magnitude relationship between the pixel value of the pixel and the pixel value of the pixel at the corresponding position in the reference image.
  • The method determines the time domain filtering result of a pixel based on the magnitude relationship between the pixel value of the pixel and the pixel value of the pixel at the corresponding position in the reference image,
  • including: when the pixel value of the pixel is greater than or equal to the pixel value at the corresponding position in the reference image, subtracting the target filter coefficient representation value from the pixel value of the pixel to obtain the time domain filtering result of the pixel; and when the pixel value of the pixel is less than or equal to the pixel value of the pixel at the corresponding position in the reference image, adding the target filter coefficient representation value to the pixel value of the pixel to obtain the time domain filtering result of the pixel.
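  • A hedged sketch of this add/subtract formulation with a precomputed representation-value table follows (only the direct indexing by the absolute pixel difference and the sign rule come from the text; the curve that fills the table is the same illustrative assumption as before, and 8-bit pixel values are assumed):

```python
import numpy as np

# Table 2-style lookup: representation value = pixel_diff * k(pixel_diff),
# precomputed for 8-bit pixel differences. The curve is an illustrative
# exponential assumption, not the patent's actual table.
diffs = np.arange(256)
representation_table = diffs * np.exp(-diffs / 24.0)

def temporal_filter_pixel(cur: int, ref: int) -> float:
    """Formula (2) using only a table lookup plus one addition or subtraction."""
    diff = abs(int(cur) - int(ref))            # target pixel difference (table index)
    delta = float(representation_table[diff])
    # Sign rule from the text: subtract when the current pixel is the larger one.
    return cur - delta if cur >= ref else cur + delta
```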
  • That is, depending on this magnitude relationship, the computer device either subtracts the target filter coefficient representation value from the pixel value of the pixel, or adds the target filter coefficient representation value to the pixel value of the pixel, to obtain the time domain filtering result of the pixel. It can be seen that the time domain filtering result at this time lies between Y_out(t-1)(i,j) and Y(i,j).
  • The computer device uses the time domain filtering results to update the pixel values in the image to be processed, thereby obtaining an intermediate processed image. Based on the intermediate processed image, the computer device can determine the target noise reduction image corresponding to the image to be processed; for details, refer to the description in the above embodiments.
  • In the above embodiment, by obtaining the filter coefficient representation value under each pixel difference value in advance, the complex function operation can be converted into a simple addition and subtraction calculation, which further improves the filtering efficiency of the time domain filtering process and thereby improves the noise reduction efficiency.
  • Determining multiple pieces of reference motion intensity information includes: dividing the motion intensity distribution range into multiple motion intensity intervals, and using each motion intensity interval as one piece of reference motion intensity information. Determining, from the multiple pieces of reference motion intensity information, the target motion intensity information matching the motion intensity, and determining the filter coefficient description information corresponding to the target motion intensity information as the filter coefficient description information matching the motion intensity, then includes: determining, from the multiple motion intensity intervals, the target interval to which the motion intensity belongs, and determining the filter coefficient description information corresponding to the target interval as the filter coefficient description information matching the motion intensity.
  • The motion intensity distribution range refers to the distribution range of all possible motion intensities.
  • Specifically, the computer device can divide the motion intensity distribution range into multiple intervals to obtain multiple motion intensity intervals, use each motion intensity interval as one piece of reference motion intensity information, and establish a correspondence between each motion intensity interval and filter coefficient description information. After determining the motion intensity of the block to be processed, the computer device can determine which motion intensity interval the motion intensity belongs to, determine that motion intensity interval as the target interval, obtain the filter coefficient description information corresponding to the target interval, and determine that filter coefficient description information as the filter coefficient description information matching the motion intensity.
  • For example, the motion intensity distribution range can be divided into three intervals, namely [a, c], (c, d) and [d, b], where a < c < d < b; if the motion intensity of a certain block to be processed belongs to (c, d), then (c, d) can be determined as the target interval and its corresponding filter coefficient description information can be determined as the filter coefficient description information matching the motion intensity.
  • In the above embodiment, the matching filter coefficient description information can be determined according to the motion intensity interval to which the motion intensity belongs, which improves the accuracy of the determined filter coefficient description information.
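  • A brief sketch of this interval-based selection follows (the interval layout follows the [a, c], (c, d), [d, b] example above; the concrete boundary values and the placeholder description-information objects are illustrative assumptions):

```python
def select_description_info(motion_intensity: float,
                            a: float = 0.0, c: float = 10.0,
                            d: float = 30.0, b: float = 255.0,
                            description_infos=("gentle_table",
                                               "medium_table",
                                               "steep_table")):
    """Map a block's motion intensity to the filter coefficient description
    information of the interval it falls in: [a, c], (c, d) or [d, b]."""
    if a <= motion_intensity <= c:
        return description_infos[0]
    if c < motion_intensity < d:
        return description_infos[1]
    return description_infos[2]
```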
  • Before determining the motion intensity of the block to be processed in the image to be processed based on the reference image, the image noise reduction method further includes: converting the three primary color channel data in the image to be processed into brightness channel data, chroma channel data and concentration channel data and extracting the brightness channel data, and converting the three primary color channel data in the reference image into brightness channel data, chroma channel data and concentration channel data and extracting the brightness channel data.
  • Determining the motion intensity of the block to be processed in the image to be processed based on the reference image then includes: determining the motion intensity of the block to be processed based on the brightness channel data of the reference image and the brightness channel data of the block to be processed. Obtaining the target pixel difference value between a pixel in the block to be processed and the pixel at the corresponding position in the reference image
  • includes: obtaining the pixel difference value between the pixel in the block to be processed and the pixel at the corresponding position in the reference image under the brightness channel data as the target pixel difference value. Determining the target noise reduction image corresponding to the image to be processed based on the target filter coefficient representation value includes: determining, based on the target filter coefficient representation value, the target noise reduction image obtained by denoising the brightness channel data of the image to be processed.
  • The three primary color channels refer to the three channels R (red), G (green) and B (blue). Display technology achieves almost any color of visible light by combining the three primary colors red, green and blue at different intensities.
  • In image storage, recording an image by recording the red, green and blue intensities of each pixel is the RGB model.
  • PNG and BMP are based on the RGB model.
  • The YUV model, also known as the luminance-chrominance model (Luma-Chroma model), records an image by mathematically converting the three RGB channels into one channel representing brightness (Y, also known as luma) and two channels representing chrominance (UV, also known as chroma).
  • In the conversion formulas, Y represents the brightness channel value, U represents the chroma channel value, V represents the concentration channel value, R represents the R channel value, G represents the G channel value, and B represents the B channel value.
  • the U channel and V channel can be converted in the same way by referring to the Y channel.
  • In practice, the conversion coefficients can be amplified (scaled up) and the amplified values rounded; after such a conversion, the floating point operations become integer multiplication and bit-shift operations. Bit-shift operations are very efficient, and integer multiplication is also more efficient than floating point operation, so the obtained Y channel data, U channel data and V channel data are all integer values.
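  • As a hedged illustration of such an integer (fixed-point) luma conversion, a widely used BT.601-style approximation is sketched below; the patent does not state which coefficients or scale it uses, so the scaled coefficients and the shift amount are assumptions:

```python
def rgb_to_y_fixed_point(r: int, g: int, b: int) -> int:
    """Fixed-point luma conversion: coefficients scaled by 256 and rounded,
    so the floating point formula becomes integer multiplies plus a shift."""
    # Y ≈ 0.299 R + 0.587 G + 0.114 B, with coefficients scaled by 256
    return (77 * r + 150 * g + 29 * b + 128) >> 8
```

  • For example, rgb_to_y_fixed_point(255, 255, 255) returns 255 and rgb_to_y_fixed_point(0, 0, 0) returns 0 under these assumed coefficients.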
  • Specifically, the computer device can convert the three primary color channel data in the image to be processed into Y channel data, U channel data and V channel data, extract only the Y channel data, and perform noise reduction processing on the Y channel data.
  • The three primary color channel data in the reference image are likewise converted into Y channel data, U channel data and V channel data, and only the Y channel data is extracted, so that the motion intensity of the block to be processed is determined based on the brightness channel data of the reference image and the brightness channel data of the block to be processed, and the pixel difference between a pixel in the block to be processed and the pixel at the corresponding position in the reference image under the brightness channel data is obtained as the target pixel difference value.
  • Based on the target filter coefficient representation value, the target noise reduction image obtained by denoising the brightness channel data of the image to be processed is then determined.
  • the image to be processed is converted from the RGB domain to the YUV domain, and then only the brightness channel Y is subjected to noise reduction processing, which saves the amount of calculation in the noise reduction process and improves the image noise reduction efficiency.
  • Determining the target noise reduction image obtained by denoising the brightness channel data of the image to be processed includes: determining, based on the target filter coefficient representation value, the intermediate processing data obtained by performing time domain filtering on the brightness channel data in the image to be processed; performing spatial domain noise reduction based on the intermediate processing data to obtain the target brightness data corresponding to the image to be processed; and obtaining the target noise reduction image based on the target brightness data, the chroma channel data and the concentration channel data.
  • Since the brightness channel data is extracted from the image to be processed, after the computer device determines the target filter coefficient representation value of each pixel, it can determine the intermediate processing data obtained by performing time domain filtering on the brightness channel data of the image to be processed. Further, spatial domain noise reduction can be performed based on the intermediate processing data to obtain the target brightness data of the brightness channel. Finally, the target brightness data is combined with the chroma channel data and concentration channel data separated from the image to be processed and converted back into three primary color channel data, thereby obtaining the target noise reduction image.
  • In the above embodiment, the Y channel data, U channel data and V channel data are further combined and converted into RGB data, so that the obtained target noise reduction image can better meet requirements.
  • The above image noise reduction method further includes: determining the brightness representation value of the block to be processed based on the brightness channel data of the block to be processed; when the brightness representation value is less than or equal to a preset brightness threshold, entering the step of determining
  • the motion intensity of the block to be processed according to the brightness channel data of the reference image and the brightness channel data of the block to be processed; and when the brightness representation value is greater than the preset brightness threshold, using the brightness channel data of the block to be processed as intermediate processing data and entering the step of performing spatial domain noise reduction based on the intermediate processing data to obtain the target brightness data corresponding to the image to be processed.
  • the brightness characterization value is used to characterize the overall brightness of the block to be processed.
  • the brightness representation value can be obtained by counting the Y channel values of each pixel in the block to be processed.
  • the statistics here can be one of summation, average or median.
  • For example, when the average is used as the statistic, the brightness representation value can be calculated by formula (6), that is, the mean of the Y channel values of all pixels in the block to be processed: brightness = (1 / (h × w)) × Σ Y(i, j), where h and w are the height and width of the block to be processed.
  • the brightness representation value of the block to be processed is obtained through statistics, and no temporal noise reduction processing is performed on the blocks to be processed whose brightness representation value is higher than a certain threshold, in order to achieve the purpose of saving performance.
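  • A small sketch of this brightness-based skip decision follows (using the mean as the statistic, consistent with the averaging mentioned above; the threshold value is an illustrative assumption):

```python
import numpy as np

BRIGHTNESS_THRESHOLD = 180.0   # illustrative preset threshold on 8-bit luma

def needs_temporal_filtering(y_block: np.ndarray) -> bool:
    """Return False for bright blocks, which skip temporal noise reduction
    and pass their original luma through as the intermediate processing data."""
    brightness = float(y_block.mean())      # brightness representation value
    return brightness <= BRIGHTNESS_THRESHOLD
```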
  • Figure 6 is a specific flow diagram of an image noise reduction method, which specifically includes the following steps:
  • Step 602 Convert RGB to YUV and extract Y channel.
  • the computer device can convert the image to be processed from RGB format to YUV format and extract the Y channel data therein, and convert the reference image from RGB format to YUV format and extract the Y channel data therein.
  • Step 604 perform segmentation.
  • the computer device can divide the image to be processed to obtain multiple blocks to be processed.
  • Step 606 Determine whether the average brightness value within the block exceeds the threshold. If yes, proceed to step 612. If not, proceed to step 608.
  • the computer device calculates the brightness mean value of each block to be processed. For each block to be processed, if the brightness mean value is greater than the brightness threshold, step 612 is entered. If the brightness mean value is less than or equal to the brightness threshold value, step 608 is entered.
  • Step 608 Estimate motion intensity within the block.
  • the computer device may determine the motion intensity of the block to be processed based on the Y channel data of the reference image.
  • Step 610 Select different time domain filters in different blocks according to motion intensity.
  • The computer device can obtain the filter coefficient description information matching the motion intensity, so that for each pixel in a block to be processed whose brightness mean value is less than or equal to the brightness threshold, the computer device can obtain the target pixel difference value between the pixel and the pixel at the corresponding position in the reference image, and determine the target filter coefficient representation value corresponding to the target pixel difference value based on the filter coefficient description information, thereby obtaining the target filter coefficient representation value of each pixel. Based on the target filter coefficient representation values, the computer device can determine the time domain filtering result obtained by performing time domain filtering on the Y channel data of each pixel of the block to be processed; the time domain filtering results of all pixels constitute the intermediate processing data of the block to be processed.
  • Step 612 Combine the blocks.
  • Step 614 Perform spatial filtering.
  • The computer device may combine the intermediate processing data of each block to be processed, including the intermediate processing data of the blocks whose brightness average value is greater than the preset brightness threshold and the intermediate processing data of the blocks whose brightness average value is less than or equal to the preset brightness threshold, perform spatial domain noise reduction on the combined intermediate processing data, and obtain the target brightness data corresponding to the image to be processed.
  • For blocks to be processed whose brightness average value is greater than the preset brightness threshold, the intermediate processing data is the original brightness channel data of the block.
  • Step 616 Combine the Y and UV channels and complete the conversion of YUV to RGB.
  • the computer device combines the target brightness data with U channel data and V channel data, and then converts it into RGB format, and finally obtains the target noise reduction image.
  • in this way, the RGB format data is converted into YUV format data, only the Y channel data is subjected to noise reduction, and no temporal noise reduction is performed on blocks whose average brightness is greater than the preset brightness threshold. This preserves the noise reduction effect while saving computation on the computer device and avoiding wasted performance; a sketch of the channel handling follows below.
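  • The sketch below (Python with OpenCV; `denoise_y` is only a placeholder standing in for the block-wise temporal and spatial filtering, and the function names are assumptions for illustration) shows the surrounding channel handling: convert RGB to YUV, denoise only the Y channel, recombine it with the untouched U and V channels, and convert back to RGB:

```python
import cv2
import numpy as np

def denoise_y(y: np.ndarray) -> np.ndarray:
    """Placeholder for the block-wise temporal + spatial filtering of the Y channel."""
    return y  # identity stand-in

def denoise_rgb_frame(rgb: np.ndarray) -> np.ndarray:
    # Step 602: RGB -> YUV, extract the Y channel.
    yuv = cv2.cvtColor(rgb, cv2.COLOR_RGB2YUV)
    y, u, v = cv2.split(yuv)
    # Steps 604-614: block split, brightness check, motion-adaptive temporal filter,
    # spatial filter -- all represented here by the placeholder denoise_y.
    y_dn = denoise_y(y)
    # Step 616: recombine Y with the untouched U/V channels and convert back to RGB.
    out = cv2.merge([y_dn, u, v])
    return cv2.cvtColor(out, cv2.COLOR_YUV2RGB)
```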
  • determining the motion intensity of the block to be processed based on the reference image includes: determining the degree of difference of the block to be processed relative to the reference image and the noise intensity of the block to be processed relative to the reference image; and determining the motion intensity of the block based on the degree of difference and the noise intensity, where the motion intensity is positively correlated with the degree of difference and negatively correlated with the noise intensity.
  • the degree of difference is used to characterize the difference between the block to be processed and the reference image.
  • the greater the difference the greater the degree of difference.
  • the noise intensity is used to characterize the noise level of the block to be processed relative to the reference image.
  • the greater the noise the greater the noise intensity.
  • the degree of difference can be obtained by counting the pixel differences between the moving pixels in the block to be processed and the reference image.
  • the statistics can be summation, average or median values.
  • the noise intensity can be obtained by aggregating the pixel differences between the noise pixels in the block to be processed and the reference image, using the same statistic as for the degree of difference.
  • moving pixels and noise pixels in the block to be processed can be distinguished by the magnitude of change relative to the reference image. For example, pixels whose pixel difference is greater than a preset threshold can be determined to be motion pixels, and pixels whose pixel difference is less than or equal to the preset threshold can be determined to be noise pixels.
  • the computer device can determine the degree of difference of the block to be processed relative to the image block at the corresponding position in the reference image, and the noise intensity of the block to be processed relative to that image block, and then determine the motion intensity of the block to be processed based on the degree of difference and the noise intensity.
  • there is a positive correlation between the motion intensity and the degree of difference and there is a negative correlation between the motion intensity and the noise intensity.
  • the positive correlation refers to the following: when other conditions remain unchanged, the two variables change in the same direction.
  • when one variable changes from large to small, the other variable also changes from large to small.
  • the positive correlation here only requires that the directions of change are consistent; it does not require that whenever one variable changes a little, the other variable must also change.
  • for example, variable b can be 100 when variable a ranges from 10 to 20, and 120 when variable a ranges from 20 to 30. The directions of change of a and b are consistent in that when a becomes larger, b also becomes larger, but while a stays within the range 10 to 20, b may not change.
  • Negative correlation means that when one variable changes from large to small, the other changes from small to large; that is, the two variables change in opposite directions.
  • the motion intensity of the block to be processed can therefore be determined from the degree of difference and the noise intensity, with the motion intensity positively correlated with the degree of difference and negatively correlated with the noise intensity.
  • in this way, the motion intensity can accurately reflect the actual motion rather than the noise of the image to be processed, which improves the accuracy of image noise reduction.
  • determining the degree of difference of the block to be processed relative to the reference image and the noise intensity of the block to be processed relative to the reference image includes: obtaining, for each pixel in the block to be processed, the pixel difference relative to the pixel at the corresponding position in the reference image; determining the noise pixels and motion pixels in the block based on the pixel difference of each pixel, where a motion pixel is a pixel whose pixel difference is greater than the preset difference threshold and a noise pixel is a pixel whose pixel difference is less than or equal to the preset difference threshold; and aggregating the pixel differences of the noise pixels to obtain the noise intensity and aggregating the pixel differences of the motion pixels to obtain the degree of difference.
  • the pixel at the corresponding position in the reference image refers to the pixel with the same pixel coordinates.
  • for example, if the pixel coordinates of a point are (x1, y1), the pixel at the corresponding position in the reference image is the pixel at (x1, y1) in the reference image.
  • a difference threshold N can be preset. For each pixel in the block to be processed, when its pixel difference relative to the corresponding pixel in the reference image is less than or equal to the preset difference threshold, the pixel value at that position changes little between the previous and current frames and is likely to be a noise signal, so the computer device can determine the pixel to be a noise pixel.
  • conversely, when the pixel difference is greater than the preset difference threshold, the computer device can determine the pixel to be a motion pixel.
  • the computer device can aggregate the pixel differences of the noise pixels in the block to be processed to obtain the noise intensity; for example, it can add up the pixel differences of the noise pixels.
  • the computer device can likewise aggregate the pixel differences of the motion pixels in the block to be processed to obtain the degree of difference; for example, it can add up the pixel differences of the motion pixels.
  • the pixel difference refers to the absolute difference: assuming that the pixel value of a certain pixel in the block to be processed is X and the pixel value of the corresponding pixel in the reference image is Y, the pixel difference is |X − Y|.
  • in this way, the noise pixels and the motion pixels are determined by comparing each pixel difference with the preset difference threshold; the noise intensity is then obtained by aggregating the pixel differences of the noise pixels, and the degree of difference by aggregating the pixel differences of the motion pixels.
  • since the pixel difference reflects whether the pixel at each position has moved, this improves the accuracy of the motion intensity calculation; a minimal sketch of these block statistics follows below.
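  • The following numpy sketch (assuming 8-bit Y data; the threshold value and the way the two statistics are combined into a single motion intensity are illustrative assumptions, since the embodiment only states that motion intensity rises with the degree of difference and falls with the noise intensity) classifies the pixels of a block and computes its statistics:

```python
import numpy as np

def block_motion_intensity(cur_block: np.ndarray,
                           ref_block: np.ndarray,
                           diff_threshold: int = 10) -> float:
    """Estimate the motion intensity of one block against the co-located reference block."""
    diff = np.abs(cur_block.astype(np.int16) - ref_block.astype(np.int16))
    motion_mask = diff > diff_threshold        # motion pixels: large frame-to-frame change
    noise_mask = ~motion_mask                  # noise pixels: small frame-to-frame change
    difference_degree = float(diff[motion_mask].sum())   # statistic over motion pixels
    noise_intensity = float(diff[noise_mask].sum())      # statistic over noise pixels
    # Combine: positively related to the difference degree, negatively to the noise
    # intensity.  The ratio below is one simple choice, not the patent's exact formula.
    return difference_degree / (noise_intensity + 1.0)
```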
  • determining the target noise reduction image corresponding to the image to be processed based on the target filter coefficient representation value includes: determining, based on the target filter coefficient representation value, an intermediate processed image obtained by temporally filtering the image to be processed, and using the intermediate processed image as both the input image and the guide image; downsampling the input image to obtain a first sampled image and downsampling the guide image to obtain a second sampled image; performing guided filtering on the first sampled image based on the second sampled image to obtain a target image; and upsampling the target image to the size of the input image to obtain a target noise reduction image with the same size as the input image.
  • the computer device uses the intermediate processed image as both the input image and the guide image, downsamples the input image so that it is reduced by the target scaling ratio to obtain the first sampled image, and downsamples the guide image in the same way to obtain the second sampled image.
  • it then performs guided filtering on the first sampled image based on the second sampled image to obtain the target image.
  • since the target image is still a reduced-size image, the computer device upsamples it back to the size of the input image, i.e. enlarges it by the target scaling ratio, and thereby obtains a target noise reduction image with the same size as the input image; a sketch of this downsample-filter-upsample pipeline follows below.
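  • A compact sketch of such a fast guided filter (Python with OpenCV and numpy; the radius, regularisation eps and scale values are illustrative assumptions, and the guide image equals the input image as described above):

```python
import cv2
import numpy as np

def _box(img: np.ndarray, r: int) -> np.ndarray:
    # Mean filter over a (2r+1) x (2r+1) window.
    return cv2.boxFilter(img, -1, (2 * r + 1, 2 * r + 1))

def guided_filter(guide: np.ndarray, src: np.ndarray, r: int = 4, eps: float = 1e-3) -> np.ndarray:
    I = guide.astype(np.float32) / 255.0
    p = src.astype(np.float32) / 255.0
    mean_I, mean_p = _box(I, r), _box(p, r)
    var_I = _box(I * I, r) - mean_I * mean_I
    cov_Ip = _box(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)          # edge-preserving linear coefficients
    b = mean_p - a * mean_I
    q = _box(a, r) * I + _box(b, r)
    return np.clip(q * 255.0, 0, 255).astype(np.uint8)

def fast_guided_denoise(intermediate: np.ndarray, scale: float = 0.5) -> np.ndarray:
    """Filter the temporally filtered image at reduced size, then upsample to full size."""
    h, w = intermediate.shape[:2]
    small = cv2.resize(intermediate, (int(w * scale), int(h * scale)),
                       interpolation=cv2.INTER_AREA)
    target = guided_filter(small, small)          # guide image == input image here
    return cv2.resize(target, (w, h), interpolation=cv2.INTER_LINEAR)
```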
  • the image noise reduction method of the present application can be applied in the architecture shown in Figure 7.
  • the image sequence collected by the camera is used, frame by frame, as the image to be processed, and the corresponding target noise reduction image is obtained through the image noise reduction method provided by the present application.
  • the video is encoded to obtain the encoded data, and the encoded data is sent to the cloud.
  • the cloud decodes the video data, and the decoded video stream is displayed to the user.
  • the computer device can also decode the encoded data locally and display the decoded video stream to the user.
  • the camera can be a built-in camera of a computer device or an external camera.
  • the original image collected by the camera contains noise signals.
  • this application also provides a filtering data processing method, which is executed by a computer device.
  • the computer device can be the terminal 102 in Figure 1 or the server 104 in Figure 1, or a system composed of the terminal 102 and the server 104.
  • the filtered data processing method includes the following steps:
  • Step 802 Obtain multiple reference motion intensity information and determine the pixel difference distribution range.
  • Step 804 Determine filter coefficient representation values for multiple pixel difference values within the pixel difference value distribution range based on each reference motion intensity information.
  • the motion intensity reference information refers to information used as a reference to determine the filter coefficient description information.
  • the reference motion intensity information can be a motion intensity interval or a specific numerical value.
  • the pixel difference distribution range refers to the distribution range of all possible pixel difference values, and the pixel difference distribution range can be [0, 255].
  • for each piece of reference motion intensity information, the filter coefficient representation value under every possible pixel difference within the pixel difference distribution range is determined.
  • Step 806 Establish a correspondence between each filter coefficient representation value and its corresponding pixel difference value.
  • Step 808 Group the correspondences whose filter coefficient representation values were determined from the same reference motion intensity information into one piece of filter coefficient description information, obtaining the filter coefficient description information corresponding to each reference motion intensity information.
  • the filter coefficient description information is used to perform time domain filtering on the image to be processed.
  • the computer device can establish a correspondence between each filter coefficient representation value and its pixel difference. From these correspondences, those whose filter coefficient representation values were determined from the same reference motion intensity information are grouped into one piece of filter coefficient description information, which is used as the filter coefficient description information corresponding to that reference motion intensity information, thereby obtaining the filter coefficient description information corresponding to each reference motion intensity information.
  • the computer device can further establish a correspondence between each reference motion intensity information and its filter coefficient description information. Based on this correspondence, when temporal filtering needs to be performed on an image to be processed, the computer device can determine, from the multiple pieces of reference motion intensity information, the target motion intensity information that matches the motion intensity of a block to be processed, and determine the filter coefficient description information corresponding to that target motion intensity information as the filter coefficient description information matching the motion intensity, so that the temporal filtering of the image to be processed can be performed based on it.
  • since the filter coefficient description information describes, for each reference motion intensity information, the correspondence between pixel differences and filter coefficient representation values, the pixel difference of a pixel in the image to be processed can be used directly as an index when performing temporal filtering, and the corresponding filter coefficient representation value can be looked up from the filter coefficient description information. This avoids obtaining the filter coefficient through complex calculation and improves the filtering efficiency.
  • determining the filter coefficient representation values under multiple pixel differences within the pixel difference distribution range based on each reference motion intensity information includes: determining, based on each reference motion intensity information, target filter coefficients under multiple pixel differences within the pixel difference distribution range, and multiplying each target filter coefficient by its pixel difference to obtain the filter coefficient representation value under each pixel difference; a sketch of building such tables follows below.
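  • The sketch below shows one way the description information could be precomputed as 256-entry lookup tables, one per reference motion-intensity interval. The exponential shape and the sigma values are assumptions for illustration; the embodiment only requires that the coefficient decreases as the pixel difference grows and falls off more steeply for stronger motion:

```python
import numpy as np

PIXEL_DIFFS = np.arange(256)  # pixel difference distribution range [0, 255]

def build_coefficient_tables(sigmas=(40.0, 15.0)):
    """One table per reference motion-intensity interval.

    sigmas[0] -> weak-motion interval (gentle decay),
    sigmas[1] -> strong-motion interval (steep decay).
    Each table maps a pixel difference d to
      'k'  : the target filter coefficient (used in formula (1)), and
      'rep': the representation value k * d (used by the add/subtract form).
    """
    tables = []
    for sigma in sigmas:
        k = np.exp(-PIXEL_DIFFS / sigma)      # decreases as the pixel difference grows
        tables.append({"k": k, "rep": k * PIXEL_DIFFS})
    return tables
```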
  • the above filtering data processing method further includes: obtaining the image to be processed and the reference image; determining the motion intensity of the block to be processed in the image to be processed according to the reference image, the block to be processed being obtained by dividing the image to be processed; obtaining the filter coefficient description information matching the motion intensity, the filter coefficient description information describing the correspondence between pixel differences and filter coefficient representation values, where the filter coefficient representation value is negatively correlated with the pixel difference and the degree of change of the filter coefficient representation value is positively correlated with the motion intensity; obtaining the target pixel difference between a pixel in the block to be processed and the pixel at the corresponding position in the reference image, and determining, based on the filter coefficient description information, the target filter coefficient representation value corresponding to that target pixel difference; and determining, based on the target filter coefficient representation value, the target noise reduction image corresponding to the image to be processed.
  • the present application also provides an application scenario, in which the image noise reduction method of the present application is applied to a video conferencing application to perform noise reduction processing on video frames in the video conference.
  • the computer device can use the sequence of video frames captured during the meeting as images to be processed starting from the second frame, and for each image to be processed, perform the following steps:
  • the height and width of the image to be processed are H and W respectively, and the image to be processed can be divided into m rows and n columns, then the height and width of each block are H/m and W/n respectively.
  • Step 3: take each block to be processed in the image to be processed in turn as the current block to be processed.
  • for the current block Y to be processed, compute the average brightness value within the block and determine whether it exceeds the preset brightness threshold. If it does, the brightness of the block is so high that the human eye cannot perceive the noise, so the block does not need to be processed and the procedure goes directly to step 9; otherwise, temporal noise reduction is required and the procedure enters step 4.
  • VN is used to represent the change amplitude of noise pixels
  • VS is used to represent the change amplitude of motion pixels.
  • the matrix sizes of VN and VS are consistent with the size of the current block Y to be processed.
  • the filter coefficient description information is obtained in advance through the following steps and stored in a table:
  • the pixel difference distribution range is [0, 255], and the multiple pixel difference values within the pixel difference distribution range are each integer value within [0, 255].
  • the filter coefficient description information can be stored in the form of a table, and the pixel difference value serves as the index value of the table.
  • by querying the table, the target filter coefficient representation value corresponding to each target pixel difference is obtained.
  • the reference image is the intermediate processed image corresponding to the video frame at time t-1 (that is, the previous frame), i.e. the image obtained by temporally filtering that frame when it was itself the image to be processed.
  • the temporal filtering is recursive: after each video frame is processed with the image noise reduction method of this embodiment and the temporal filter outputs an intermediate processed image, that intermediate processed image is saved as the reference image for the next video frame.
  • for the blocks that skipped temporal filtering, the intermediate processing data is the original brightness channel data of the block.
  • the image formed by combining the intermediate processing data of all blocks is the intermediate processed image corresponding to the image to be processed, and this intermediate processed image can be saved as the reference image for the next frame; a minimal sketch of this recursive bookkeeping follows below.
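  • The loop below (Python; `temporal_filter` is a trivial fixed-weight stand-in, not the block-adaptive filter of the embodiment) illustrates the recursive structure in which each frame's intermediate image becomes the reference for the next frame:

```python
import numpy as np

def temporal_filter(y_cur: np.ndarray, y_ref: np.ndarray, k: float = 0.5) -> np.ndarray:
    """Stand-in for the block-adaptive temporal filter: a fixed-weight recursive blend."""
    out = y_ref.astype(np.float32) * k + y_cur.astype(np.float32) * (1.0 - k)
    return out.astype(np.uint8)

def denoise_sequence(y_frames):
    """y_frames: iterable of Y-channel frames (uint8). Yields intermediate processed images."""
    reference = None
    for y in y_frames:
        if reference is None:
            intermediate = y            # first frame: nothing to filter against
        else:
            intermediate = temporal_filter(y, reference)
        reference = intermediate        # saved as the reference image for the next frame
        yield intermediate
```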
  • this application also provides another application scenario.
  • the image noise reduction method of this application is applied to a video live broadcast application to perform noise reduction processing on video frames during the live broadcast process.
  • the computer device can use the sequence of video frames collected during the live broadcast, starting from the second frame, as images to be processed, and for each image to be processed execute the image noise reduction method provided by the embodiment of the present application to obtain the target noise reduction image, thereby improving the visual quality of the video during the live broadcast.
  • the filter coefficient description information stores the correspondence between the filter coefficient and the pixel difference value.
  • after the computer device obtains the pixel difference corresponding to each pixel, it can query the target filter coefficient from the filter coefficient description information that matches the motion intensity of the block to be processed, and then calculate the temporal filtering result of each pixel through formula (1) above; a vectorised sketch follows below.
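  • The sketch below (Python with numpy) applies the table-driven recursive blend of formula (1), Yout(t) = Yout(t-1)*k + Y*(1-k), to one block; `k_table` is assumed to be a 256-entry table such as the one sketched earlier, selected according to the block's motion intensity:

```python
import numpy as np

def temporal_filter_block(y_cur: np.ndarray, y_ref: np.ndarray,
                          k_table: np.ndarray) -> np.ndarray:
    """Apply formula (1) to one block using a 256-entry coefficient table.

    y_cur  : Y data of the block at time t (uint8)
    y_ref  : co-located Y data of the reference image, i.e. Yout(t-1) (uint8)
    k_table: filter coefficient k indexed by the absolute pixel difference, shape (256,)
    """
    cur = y_cur.astype(np.float32)
    ref = y_ref.astype(np.float32)
    diff = np.abs(cur - ref).astype(np.int32)   # target pixel difference, used as index
    k = k_table[diff]                           # per-pixel filter coefficient
    out = ref * k + cur * (1.0 - k)             # Yout(t) = Yout(t-1)*k + Y*(1-k)
    return np.clip(out + 0.5, 0, 255).astype(np.uint8)
```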
  • the image noise reduction method of the present application divides the image to be processed into blocks, computes brightness, motion degree, noise intensity and other statistics within each block, and then applies temporal filters of different strengths to different blocks, achieving region-wise motion-adaptive noise reduction.
  • the image is then subjected to guided filtering in the spatial domain to further eliminate residual noise while retaining edge information and avoiding blur. This effectively solves, at a low computational cost, the problems of motion smear and image blur after noise reduction, and achieves an excellent noise reduction effect.
  • the following examples illustrate the noise reduction effect of the present application.
  • Figure 9 is a schematic diagram of the effect of the image noise reduction method provided by the embodiment of the present application.
  • Figure 9(a) is a schematic diagram of the image before noise reduction, and Figure 9(b) is a schematic diagram of the image after noise reduction.
  • some areas (the area within the box in the original picture) are enlarged in Figure 9.
  • there are many noise particles in the image before noise reduction; after noise reduction with the image noise reduction method provided in the embodiment of the present application, the particles are reduced and the image becomes clear and smooth.
  • the image noise reduction method provided by the embodiment of the present application can achieve good noise reduction effects on both the background area and the foreground area.
  • Figure 10 is another schematic diagram of the effect of the image noise reduction method provided by the embodiment of the present application.
  • Figure 10(a) is a schematic diagram of the image before noise reduction, and Figure 10(b) is a schematic diagram of the image after noise reduction.
  • it can be seen from Figure 10 that before noise reduction the image contains frequently jittering noise points that cause visual discomfort; after noise reduction with the image noise reduction method provided by the embodiment of the present application, these jittering points disappear.
  • embodiments of the present application also provide an image noise reduction device for implementing the above-mentioned image noise reduction method and a filtering data processing device for implementing the above-mentioned filtering data processing method.
  • the solution to the problem provided by these devices is similar to that described in the above methods; therefore, for the specific limitations in the embodiments of the image noise reduction device and the filtering data processing device provided below, reference may be made to the limitations in the method embodiments above.
  • an image noise reduction device 1100 including:
  • Image acquisition module 1102 used to acquire the image to be processed and the reference image
  • the motion intensity determination module 1104 is used to determine the motion intensity of the block to be processed in the image to be processed based on the reference image; the block to be processed is obtained by dividing the image to be processed;
  • the description information acquisition module 1106 is used to obtain filter coefficient description information that matches the motion intensity.
  • the filter coefficient description information is used to describe the correspondence between the pixel difference and the filter coefficient representation value; the filter coefficient representation value is negatively correlated with the pixel difference, and the degree of change of the filter coefficient representation value is positively correlated with the motion intensity;
  • the filter coefficient determination module 1108 is used to obtain the target pixel difference between the pixel point in the block to be processed and the pixel point at the corresponding position in the reference image, and determine the target filter coefficient corresponding to the target pixel difference value based on the filter coefficient description information. representational value;
  • the noise reduction image determination module 1110 is used to determine the target noise reduction image corresponding to the image to be processed based on the target filter coefficient representation value.
  • since the blocks to be processed are obtained by dividing the image to be processed, different target filter coefficient representation values can be determined for different blocks, and the obtained representation values can be accurately matched to the motion of each area in the image. This avoids the poor noise reduction in some regions that results from estimating a single motion intensity for the entire image, and improves the noise reduction effect for the image to be processed.
  • in addition, the target filter coefficient representation value is determined from the filter coefficient description information, which describes the correspondence between the pixel difference and the filter coefficient representation value; the representation value is negatively correlated with the pixel difference, and its degree of change is positively correlated with the motion intensity, so a matching target filter coefficient representation value can be obtained for every pixel, further improving the noise reduction effect of the image to be processed.
  • the above image noise reduction device further includes a description information determination module, used to determine multiple pieces of reference motion intensity information and the pixel difference distribution range; determine, based on each reference motion intensity information, the filter coefficient representation values under multiple pixel differences within the distribution range; establish the correspondence between each filter coefficient representation value and its pixel difference; and group the correspondences whose representation values were determined from the same reference motion intensity information into filter coefficient description information, obtaining the filter coefficient description information corresponding to each reference motion intensity information.
  • the description information acquisition module 1106 is also used to determine, from the multiple pieces of reference motion intensity information, the target motion intensity information matching the motion intensity, and to determine the filter coefficient description information corresponding to the target motion intensity information as the filter coefficient description information matching the motion intensity.
  • the description information determination module is also used to determine, based on each reference motion intensity information, target filter coefficients under multiple pixel differences within the pixel difference distribution range, and to multiply each target filter coefficient by its pixel difference to obtain the filter coefficient representation value under each pixel difference; the noise reduction image determination module is also used to determine the temporal filtering result of a pixel based on the size relationship between the pixel value of that pixel and the pixel value of the corresponding pixel in the reference image, and to determine the target noise reduction image corresponding to the image to be processed based on the temporal filtering results of the pixels.
  • the noise reduction image determination module is also used to subtract the target filter coefficient representation value from the pixel value of the pixel when the pixel value of the pixel is greater than or equal to the pixel value of the corresponding position in the reference image, Obtain the time domain filtering result of the pixel; when the pixel value of the pixel is less than or equal to the pixel value of the corresponding pixel in the reference image, add the pixel value of the pixel to the target filter coefficient representation value to obtain the time domain of the pixel Filter results.
  • the description information determination module is also used to divide the motion intensity distribution range into multiple motion intensity intervals and to use each motion intensity interval as one piece of reference motion intensity information; the description information acquisition module 1106 is also used to determine, from the multiple motion intensity intervals, the target interval to which the motion intensity belongs, and to determine the filter coefficient description information corresponding to the target interval as the filter coefficient description information matching the motion intensity.
  • the above device also includes a format conversion module for converting the three primary color channel data in the image to be processed into brightness channel data, chroma channel data and concentration channel data and extracting the brightness channel data, and for converting the three primary color channel data in the reference image into brightness channel data, chroma channel data and concentration channel data and extracting the brightness channel data; the motion intensity determination module is also used to determine the motion intensity of the block to be processed based on the brightness channel data of the reference image and the brightness channel data of the block to be processed.
  • the filter coefficient determination module is also used to obtain the pixel difference, under the brightness channel data, between a pixel in the block to be processed and the corresponding pixel in the reference image, obtaining the target pixel difference; the noise reduction image determination module is also used to determine, based on the target filter coefficient representation value, the target noise reduction image obtained by denoising the brightness channel data of the image to be processed.
  • the noise reduction image determination module is also used to determine the intermediate processing data obtained by performing time domain filtering on the brightness channel data in the image to be processed based on the target filter coefficient representation value; perform spatial domain noise reduction based on the intermediate processing data, Obtain the target brightness data corresponding to the image to be processed; combine the target brightness data with the chroma channel data and concentration channel data of the image to be processed and convert them into three primary color channel data to obtain the target noise reduction image.
  • the above device further includes a brightness characterization value determination module, configured to determine the brightness characterization value of the block to be processed based on its brightness channel data; when the brightness characterization value is less than or equal to the preset brightness threshold, the process proceeds to determining the motion intensity of the block based on the brightness channel data of the reference image and of the block; when the brightness characterization value is greater than the preset brightness threshold, the brightness channel data of the block is used directly as intermediate processing data, and the process proceeds to the step of performing spatial noise reduction based on the intermediate processing data to obtain the target brightness data corresponding to the image to be processed.
  • the motion intensity determination module is also used to determine the degree of difference of the block to be processed relative to the reference image and the noise intensity of the block to be processed relative to the reference image, and to determine the motion intensity of the block based on the degree of difference and the noise intensity; the motion intensity is positively correlated with the degree of difference and negatively correlated with the noise intensity.
  • the motion intensity determination module is also used to obtain the pixel difference of each pixel in the block to be processed relative to the pixel at the corresponding position in the reference image, and to determine, based on the pixel difference of each pixel, the noise pixels and motion pixels in the block, where motion pixels are pixels whose pixel difference is greater than the preset difference threshold and noise pixels are pixels whose pixel difference is less than or equal to the preset difference threshold; the pixel differences of the noise pixels are aggregated to obtain the noise intensity, and the pixel differences of the motion pixels are aggregated to obtain the degree of difference.
  • the noise reduction image determination module is also used to determine, based on the target filter coefficient representation value, the intermediate processed image obtained by temporally filtering the image to be processed, and to use the intermediate processed image as both the input image and the guide image;
  • the input image is downsampled to obtain the first sampled image;
  • the guide image is downsampled to obtain the second sampled image;
  • guided filtering is performed on the first sampled image based on the second sampled image to obtain the target image;
  • the target image is upsampled to the size of the input image to obtain a target noise reduction image with the same size as the input image.
  • the image acquisition module is also used to determine the target video to be denoised, use a video frame in the target video as the image to be processed, determine a target video frame from the forward video frames corresponding to the image to be processed, obtain the target noise reduction image corresponding to the target video frame, and determine that target noise reduction image as the reference image corresponding to the image to be processed.
  • a filtering data processing device 1200 including:
  • the reference motion intensity determination module 1202 is used to determine multiple reference motion intensity information and determine the pixel difference distribution range
  • the representation value determination module 1204 is used to determine the representation value of the filter coefficient under multiple pixel difference values within the pixel difference distribution range based on each reference motion intensity information;
  • the correspondence relationship establishment module 1206 is used to establish a correspondence relationship between each filter coefficient representation value and its corresponding pixel difference value
  • the description information determination module 1208 is used to combine the corresponding relationships of the filter coefficient representation values determined based on the same reference motion intensity information into filter coefficient description information, and obtain the filter coefficient description information corresponding to each reference motion intensity information; wherein, the filter coefficient description The information is used for temporal filtering of the image to be processed.
  • the filter coefficient description information describes the correspondence between the pixel difference value and the filter coefficient representation value under each reference motion intensity information
  • the characterization value determination module is used to determine, based on each reference motion intensity information, target filter coefficients under multiple pixel differences within the pixel difference distribution range, and to multiply each target filter coefficient by its corresponding pixel difference to obtain the filter coefficient representation value under each pixel difference; the sketch below illustrates this premultiplied form.
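  • A small sketch of the premultiplied (add/subtract) form (Python with numpy; the table contents and function names are illustrative assumptions): the representation value rep[d] = k[d]*d is computed once, so that at run time each pixel only needs a table lookup and an addition or subtraction depending on which of the two pixel values is larger, reproducing Yout(t) = (Yout(t-1) - Y)*k + Y without a per-pixel multiplication:

```python
import numpy as np

def build_representation_table(k_table: np.ndarray) -> np.ndarray:
    """rep[d] = k[d] * d for every pixel difference d in [0, 255]."""
    return k_table * np.arange(k_table.size)

def temporal_filter_block_fast(y_cur: np.ndarray, y_ref: np.ndarray,
                               rep_table: np.ndarray) -> np.ndarray:
    cur = y_cur.astype(np.float32)
    ref = y_ref.astype(np.float32)
    diff = np.abs(cur - ref).astype(np.int32)
    rep = rep_table[diff]                  # premultiplied representation value k*|diff|
    # Subtract when the current pixel is >= the reference pixel, add otherwise,
    # which reproduces Yout(t) = (Yout(t-1) - Y)*k + Y without multiplications.
    out = np.where(cur >= ref, cur - rep, cur + rep)
    return np.clip(out + 0.5, 0, 255).astype(np.uint8)
```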
  • Each module in the above-mentioned image noise reduction device and filtering data processing device can be implemented in whole or in part by software, hardware, and combinations thereof.
  • Each of the above modules may be embedded in or independent of the processor of the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in Figure 13.
  • the computer device includes a processor, a memory, an input/output interface (Input/Output, referred to as I/O), and a communication interface.
  • the processor, memory and input/output interface are connected through the system bus, and the communication interface is connected to the system bus through the input/output interface.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes non-volatile storage media and internal memory.
  • the non-volatile storage medium stores operating systems, computer programs and databases. This internal memory provides an environment for the execution of operating systems and computer programs in non-volatile storage media.
  • the database of the computer device is used to store image data, filter coefficient description information and other data.
  • the input/output interface of the computer device is used to exchange information between the processor and external devices.
  • the communication interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer program implements an image noise reduction method or a filtering data processing method when executed by the processor.
  • a computer device is provided.
  • the computer device may be a terminal, and its internal structure diagram may be as shown in Figure 14.
  • the computer device includes a processor, memory, input/output interface, communication interface, display unit and input device.
  • the processor, memory and input/output interface are connected through the system bus, and the communication interface, display unit and input device are connected to the system bus through the input/output interface.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes non-volatile storage media and internal memory.
  • the non-volatile storage medium stores operating systems and computer programs. This internal memory provides an environment for the execution of operating systems and computer programs in non-volatile storage media.
  • the input/output interface of the computer device is used to exchange information between the processor and external devices.
  • the communication interface of the computer device is used for wired or wireless communication with external terminals.
  • the wireless mode can be implemented through WIFI, mobile cellular network, NFC (Near Field Communication) or other technologies.
  • the computer program implements an image noise reduction method or a filtering data processing method when executed by the processor.
  • the display unit of the computer device is used to form a visually visible picture; it can be a display screen, a projection device or a virtual reality imaging device.
  • the display screen can be a liquid crystal display screen or an electronic ink display screen.
  • the input device of the computer device can be a touch layer covering the display screen, buttons, a trackball or a touch pad provided on the computer device casing, or an external keyboard, touch pad or mouse, etc.
  • Figures 13 and 14 are only block diagrams of the partial structures related to the solution of the present application and do not constitute a limitation on the computer devices to which the solution of the present application is applied.
  • Computer equipment may include more or fewer components than shown in the figures, or some combinations of components, or have different arrangements of components.
  • a computer device is provided, including a memory and a processor.
  • a computer program is stored in the memory.
  • when the processor executes the computer program, the steps of the above image noise reduction method or filtering data processing method are implemented.
  • a computer-readable storage medium is provided, on which a computer program is stored.
  • when the computer program is executed by a processor, the steps of the above image noise reduction method or filtering data processing method are implemented.
  • a computer program product including a computer program that implements the steps of the above image noise reduction method or filtering data processing method when executed by a processor.
  • the user information involved includes, but is not limited to, user equipment information and user personal information.
  • the data involved includes, but is not limited to, data used for analysis, stored data and displayed data.
  • the computer program can be stored in a non-volatile computer-readable storage medium.
  • when the computer program is executed, it may include the processes of the above method embodiments.
  • Any reference to memory, database or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory.
  • Non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, etc.
  • Volatile memory may include random access memory (Random Access Memory, RAM) or external cache memory, etc.
  • the databases involved in the various embodiments provided in this application may include at least one of a relational database and a non-relational database.
  • Non-relational databases may include blockchain-based distributed databases, etc., but are not limited thereto.
  • the processors involved in the various embodiments provided in this application may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing-based data processing logic devices, etc., and are not limited to this.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Picture Signal Circuits (AREA)
  • Image Processing (AREA)

Abstract

本申请涉及图像降噪、滤波数据处理方法、装置和计算机设备,包括:获取待处理图像和参考图像(202);根据参考图像确定待处理图像中待处理块的运动强度,待处理块是对待处理图像进行划分得到的(204);获取与运动强度匹配的滤波系数描述信息,滤波系数描述信息用于描述像素差值与滤波系数表征值之间的对应关系(206);获取待处理块中的像素点和参考图像中对应位置像素点之间的目标像素差值,基于滤波系数描述信息确定与目标像素差值存在对应关系的目标滤波系数表征值(208);基于目标滤波系数表征值,确定待处理图像对应的目标降噪图像(210)。

Description

图像降噪、滤波数据处理方法、装置和计算机设备
相关申请
本申请要求2022年05月27日申请的,申请号为2022105900364,名称为“图像降噪、滤波数据处理方法、装置和计算机设备”的中国专利申请的优先权,在此将其全文引入作为参考。
技术领域
本申请涉及图像处理技术领域,特别是涉及一种图像降噪方法、装置、计算机设备、存储介质和计算机程序产品,以及一种滤波数据处理方法、装置、计算机设备、存储介质和计算机程序产品。
背景技术
随着图像处理技术的发展,出现了图像降噪技术,通过图像降噪技术可以减弱因硬件误差和不稳定光源带来的图像噪声信息。在很多场景下,都需要进行图像降噪,例如在视频采集中,会有各种各样的噪声会混杂在采集的图像序列中,为了提高视频图像的视觉效果,提高视频图像的压缩效率或节省宽带,都需要对视频图像进行降噪。
相关技术中,在对图像进行降噪时,经常存在降噪效果差的问题。
发明内容
根据本申请的各种实施例,提供一种图像降噪方法、装置、计算机设备、计算机可读存储介质和计算机程序产品,以及一种滤波数据处理方法、装置、计算机设备、计算机可读存储介质和计算机程序产品。
一方面,本申请提供了一种图像降噪方法,由计算机设备执行,包括:获取待处理图像和参考图像;根据所述参考图像确定所述待处理图像中待处理块的运动强度;所述待处理块是对所述待处理图像进行划分得到的;获取与所述运动强度匹配的滤波系数描述信息,所述滤波系数描述信息用于描述像素差值与滤波系数表征值之间的对应关系;其中,所述滤波系数表征值与所述像素差值成负相关,所述滤波系数表征值的变化程度与所述运动强度成正相关;获取所述待处理块中的像素点和所述参考图像中对应位置像素点之间的目标像素差值,基于所述滤波系数描述信息确定与所述目标像素差值存在对应关系的目标滤波系数表征值;及基于所述目标滤波系数表征值,确定所述待处理图像对应的目标降噪图像。
另一方面,本申请还提供了一种图像降噪装置。所述装置包括:图像获取模块,用于获取待处理图像和参考图像;运动强度确定模块,用于根据所述参考图像确定所述待处理图像中待处理块的运动强度;所述待处理块是对所述待处理图像进行划分得到的;描述信息获取模块,用于获取与所述运动强度匹配的滤波系数描述信息,所述滤波系数描述信息用于描述像素差值与滤波系数表征值之间的对应关系;其中,所述滤波系数表征值与所述像素差值成负相关,所述滤波系数表征值的变化程度与所述运动强度成正相关;滤波系数确定模块,用于获取所述待处理块中的像素点和所述参考图像中对应位置像素点之间的目标像素差值,基于所述滤波系数描述信息确定与所述目标像素差值存在对应关系的目标滤波系数表征值;及降噪图像确定模块,用于基于所述目标滤波系数表征值,确定所述待处理图像对应的目标降噪图像。
另一方面,本申请还提供了一种计算机设备。所述计算机设备包括存储器和处理器,所述存储器存储有计算机程序,所述处理器执行所述计算机程序时实现上述图像降噪方法的步骤。
另一方面,本申请还提供了一种计算机可读存储介质。所述计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现上述图像降噪方法的步骤。
另一方面,本申请还提供了一种计算机程序产品。所述计算机程序产品,包括计算机 程序,该计算机程序被处理器执行时实现上述图像降噪方法的步骤。
另一方面,本申请提供了一种滤波数据处理方法,由计算机设备执行,包括:确定多个参考运动强度信息,并确定像素差值分布范围;分别基于每一个参考运动强度信息,确定在像素差值分布范围内的多个像素差值下的滤波系数表征值;建立各个滤波系数表征值和各自对应的像素差值之间的对应关系;及将基于同一参考运动强度信息确定的滤波系数表征值所在的对应关系组成滤波系数描述信息,得到各个参考运动强度信息各自对应的滤波系数描述信息;其中,所述滤波系数描述信息用于对待处理图像进行时域滤波。
另一方面,本申请还提供了一种滤波数据处理装置。所述装置包括:参考运动强度确定模块,用于确定多个参考运动强度信息,并确定像素差值分布范围;表征值确定模块,分别基于每一个参考运动强度信息,确定在像素差值分布范围内的多个像素差值下的滤波系数表征值;对应关系建立模块,用于建立各个滤波系数表征值和各自对应的像素差值之间的对应关系;及描述信息确定模块,用于将基于同一参考运动强度信息确定的滤波系数表征值所在的对应关系组成滤波系数描述信息,得到各个参考运动强度信息各自对应的滤波系数描述信息;其中,所述滤波系数描述信息用于对待处理图像进行时域滤波。
另一方面,本申请还提供了一种计算机设备。所述计算机设备包括存储器和处理器,所述存储器存储有计算机程序,所述处理器执行所述计算机程序时实现上述滤波数据处理方法的步骤。
另一方面,本申请还提供了一种计算机可读存储介质。所述计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现上述滤波数据处理方法的步骤。
另一方面,本申请还提供了一种计算机程序产品。所述计算机程序产品,包括计算机程序,该计算机程序被处理器执行时实现上述滤波数据处理方法的步骤。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本申请的其它特征、目的和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更清楚地说明本申请实施例或传统技术中的技术方案,下面将对实施例或传统技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据公开的附图获得其他的附图。
图1为一个实施例中图像降噪方法以及滤波数据处理方法的应用环境图;
图2为一个实施例中图像降噪方法的流程示意图;
图3为一个实施例中图像降噪算法应用于视频会议应用程序的界面示意图;
图4为一个实施例中对待处理图像进行划分的示意图;
图5为一个实施例中滤波函数的示意图;
图6为另一个实施例中图像降噪方法的具体流程示意图;
图7为一个实施例中图像降噪方法的应用架构图;
图8为一个实施例中滤波数据处理方法的流程示意图;
图9为一个实施例中图像降噪方法的效果示意图;
图10为另一个实施例中图像降噪方法的效果示意图;
图11为一个实施例中图像降噪装置的结构框图;
图12为一个实施例中滤波数据处理装置的结构框图;
图13为一个实施例中计算机设备的内部结构图;
图14为一个实施例中计算机设备的内部结构图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地 描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请实施例提供的图像降噪方法和滤波数据处理方法,可以应用于如图1所示的应用环境中。其中,终端102通过网络与服务器104进行通信。数据存储系统可以存储服务器104需要处理的数据,例如可以存储降噪得到的目标降噪图像。数据存储系统可以集成在服务器104上,也可以放在云上或其他服务器上。其中,服务器可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、CDN、以及大数据和人工智能平台等基础云计算服务的云服务器。终端可以是智能手机、平板电脑、笔记本电脑、台式计算机、智能语音交互设备、智能家电、车载终端、飞行器等,但并不局限于此。终端以及服务器可以通过有线或无线通信方式进行直接或间接地连接,本申请在此不做限制。
可以理解的是,本申请实施例提供的图像降噪方法以及滤波数据处理方法,可以由终端102执行,也可以由服务器104执行,还可以由终端102和服务器104协同执行。例如,终端102可以采集得到待处理图像,将待处理图像发送至服务器,服务器进一步获取待处理图像对应的参考图像,然后根据参考图像确定待处理图像中待处理块的运动强度,获取与运动强度匹配的滤波系数描述信息,滤波系数描述信息用于描述像素差值与滤波系数表征值之间的对应关系,获取待处理块中的像素点和参考图像中对应位置像素点之间的目标像素差值,基于滤波系数描述信息确定与目标像素差值存在对应关系的目标滤波系数表征值,基于目标滤波系数表征值,确定待处理图像对应的目标降噪图像,最后,服务器104可以将目标降噪图像返回至终端102。
在一个实施例中,如图2所示,提供了一种图像降噪方法,由计算机设备执行,该计算机设备可以是图1中的终端102,也可以是图1中的服务器104,还可以是终端102和服务器104所构成的系统。具体地,该图像降噪方法包括以下步骤:
步骤202,获取待处理图像和参考图像。
其中,待处理图像指的是需要进行降噪的图像。待处理图像可以是计算机设备采集的图像,或者是计算机设备通过网络接收的、由其他计算机设备采集的图像。待处理图像可以是视频中的视频帧。参考图像指的是用于作为参考对待处理图像进行时域滤波的图像。参考图像与待处理图像中可以包括相同的前景对象。参考图像可以是通过降噪处理得到的图像,其中,当待处理图像为待降噪的目标视频中的图像时,参考图像可以是对待处理图像的前向视频帧进行降噪处理得到的图像。这里的降噪处理,可以是时域滤波处理,或者空域滤波处理。在其他实施例中,降噪处理还可以是先进行时域滤波,再进行空域滤波处理。
具体地,计算机设备在确定了待处理图像后,可以获取待处理图像对应的一个或者多个参考图像,进而可以基于参考图像对待处理图像进行滤波处理,从而实现对待处理图像进行降噪。需要说明的是,本申请实施例中提到的多个指的是至少两个。
在一个实施例中,获取待处理图像和参考图像,包括:确定待降噪的目标视频;将目标视频中的视频帧作为待处理图像,从待处理图像对应的前向视频帧中确定目标视频帧;获取目标视频帧对应的目标降噪图像,将目标视频帧对应的目标降噪图像确定为待处理图像对应的参考图像。
其中,待降噪的目标视频指的是需要进行降噪处理的视频。待处理图像对应的前向视频帧指的是在目标视频中时间点在待处理图像之前的视频帧,例如,待处理图像是目标视频中第10秒的视频帧,则在该目标视频中第10秒之前的视频帧为该待处理图像的前向视频帧。目标视频帧对应的目标降噪图像指的是,将该目标视频帧作为待处理图像时,所获得的目标降噪图像,本申请实施例中,可以采用递归滤波的方式进行降噪,每一帧视频帧 采用本申请的图像降噪方法得到目标降噪图像后可以进行保存,以作为该视频帧的后向视频帧的参考图像,可以理解的是,某一个视频帧的后向视频帧即视频帧对应的时间点在该视频帧之后的视频帧。
具体地,计算机设备可以将目标视频中第一帧以后的各个视频帧依次确定为待处理图像,对于每一个待处理图像,计算机设备可以从该待处理图像对应的前向视频帧中确定一个或者多个目标视频帧,获取目标视频帧的对应的目标降噪图像,将目标降噪图像作为该待处理图像对应的参考图像。
在一个具体的实施例中,计算机设备上可以安装视频类应用程序,该视频应用程序可以用于拍摄得到视频或者播放、编辑视频,视频类应用程序例如可以是直播应用程序、视频剪辑应用程序、视频监测应用程序、视频会议应用程序等等。当计算机设备通过该视频类应用程序进行视频播放时,对于所播放视频中的各帧视频帧,计算机设备可以将该这些视频帧依次作为待处理图像,并获取该待处理图像的参考图像,从而可以通过本申请提供的图像降噪方法获取待处理图像的目标降噪图像,以提升视频视觉质量,得到更好的用户体验。这里的应用程序可以是指安装在终端中的客户端,客户端(又可称为应用客户端、APP客户端)是指安装并运行在终端中的程序;应用程序也可以是指免安装的应用程序,即无需下载安装即可使用的应用程序,这类应用程序又俗称小程序,它通常作为子程序运行于客户端中;应用程序还可以是指通过浏览器打开的web应用程序;等等。
举例说明,如图3所示,为本申请的图像降噪算法应用于视频会议应用程序的界面示意图。参考图3,在视频会议的过程中,当用户在视频会议应用程序的界面中勾选“视频降噪”这一选项时,视频会议应用程序可采用本申请提供的图像降噪方法对会议场景下生成的会议视频中各帧视频帧进行降噪处理。
在一个实施例中,考虑到间隔时间近的视频帧之间的关联性大于间隔时间远的视频帧之间的关联性,因此将间隔时间近的视频帧作为目标视频帧来得到参考图像可以提高降噪效果,那么计算机设备在选择目标视频帧时,可以将待处理图像的前向视频帧中与待处理图像相邻的一个或者预设数量个视频帧作为目标视频帧,这里的预设数量小于预设阈值,预设数量例如可以是2,举个例子,假设待处理图像为目标视频帧中的第5帧视频帧,则可以将第三帧视频和第四帧视频帧作为目标视频帧,分别获取第三帧视频和第四帧视频帧的目标降噪图像,作为第5帧视频帧的参考图像。
步骤204,根据参考图像确定待处理图像中待处理块的运动强度;待处理块是对待处理图像进行划分得到的。
其中,待处理块指的是对待处理图像进行划分得到的图像块。对待处理图像进行划分可以得到多个图像块,每一个图像块都可以作为待处理块。运动强度可以表征待处理块相对于参考图像的运动量大小,运动强度与待处理块相对于参考图像的噪声强度成负相关关系,并且与待处理块相对于参考图像的差异度成正相关关系。
具体地,考虑到噪声信息在时域上呈现高斯分布,比如视频中当前帧某一位置出现的噪声在前向帧的同一位置上可能并不存在或者强度值较弱,因此可以使用加权的方式进行图像融合,比如融合后的当前帧=0.2*前两帧+0.3*前一帧+0.5*当前帧,这样虽然能够减少噪声,但在移动场景下,简单融合会出现明显的拖影,因此本申请实施例中采用了运动自适应的时域滤波方式,当检测到当前帧的运动量较大时,则增大当前帧的权重,削弱之前一帧的比例,比如运动量较大时,融合后的当前帧=0.05*前两帧+0.05*前一帧+0.9*当前帧,这样就能缓解拖影问题。进一步地,考虑到视频帧中不同区域的运动强度往往会有较大差异,举例来说,某一时刻的画面中,只有人物在移动,而背景静止,此时如果对整帧图像计算运动强度,就会选择较弱的时域滤波强度,导致背景区域的降噪效果不佳。
基于此,本实施例中,可以对待处理图像进行划分,得到多个图像块,将每一个图像块作为待处理块,根据参考图像来估计每一个待处理块的运动强度,从而在不同图像块内按照估计的运动级别,使用不同的时域滤波器,这样在人物移动时,固定的背景上仍然能 有良好的降噪效果。
在一个实施例中,计算机设备在对待处理图像进行划分时,可以采用均匀划分方式,将待处理图像进行均匀划分得到多个待处理块。具体来说,若待处理图像的高和宽分别为H和W,设定将整个图像划分为m行n列,则每个块的高和宽分别为H/m,W/n。举例说明,如图4所示,为一个具体的实施例中,对待处理图像进行划分的示意图,参考图4,其中每一个小格为一个待处理块。可以理解的是,在其他实施例中,计算机设备还可以对待处理图像进行非均匀划分或者重叠划分。本申请不对具体的划分方式进行限制。
在一个实施例中,当参考图像有多个时,对于待处理图像中每一个待处理块,服务器分别基于各个参考图像进行运动强度计算,从而对于每一个待处理块,可以得到多个运动强度。
步骤206,获取与运动强度匹配的滤波系数描述信息,滤波系数描述信息用于描述像素差值与滤波系数表征值之间的对应关系;其中,滤波系数表征值与像素差值成负相关,滤波系数表征值的变化程度与运动强度成正相关。
其中,滤波系数表征值指的是对滤波器系数进行表征的数值,滤波系数表征值具体可以是滤波器系数值或者对滤波器系数值进行一定的计算得到的数值。
本申请实施例中进行降噪的过程中可以依据以下公式对待处理块进行时域滤波,其中,i和j为待处理图像中的像素位置索引,Y代表待处理块中的像素值,Yout(t)代表当前时刻的时域滤波结果,Yout(t-1)代表参考图像中的像素值,此处参考图像采用t-1时刻的时域滤波结果,k代表滤波器系数:
Yout(t)(i,j)=Yout(t-1)(i,j)*k+Y(i,j)*(1-k)   (1)
上式表示此刻的时域滤波结果,等于此刻的输入值与上一刻的时域滤波结果的加权融合,加权系数k的取值范围为0到1,且有如下关系:当前输入Y(i,j)与上一刻的时域滤波结果Yout(t-1)(i,j)之间的像素差值越大时(这里的像素差值代表两者差的绝对值),k越小,反之,k越大,因此滤波器系数k是与,Yout(t-1)(i,j)与Y(i,j)之间差值呈负相关关系的,基于此,本申请实施例中可以建立像素差值与滤波系数表征值之间的对应关系并保存为滤波系数描述信息,进而基于该滤波系数描述信息来确定滤波系数表征值,通过该滤波系数表征值确定对待处理图像进行时域滤波得到的时域滤波结果。
进一步,由于本申请采用的是运动自适应的滤波方式,在不同的运动强度下,期望得到不同的滤波效果,运动强度越大,滤波效果应该越弱,运动强度越小,滤波效果应该适当增强,因此本申请中可以预先确定多个滤波系数描述信息,不同的滤波系数描述信息中滤波系数表征值的变化程度不相同,并且设定滤波系数表征值的变化程度与运动强度成正相关关系,其中滤波系数表征值的变化程度指的是滤波系数表征值随像素差值变化时的变化程度,即运动强度越大,滤波系数表征值的变化程度越大,此时,滤波系数表征值随像素差值的增大,快速减小;反之,运动强度越小,滤波系数表征值的变化程度越小,此时,滤波系数表征值随像素差值的增大,减弱程度可以变缓。
在一个实施例中,滤波系数描述信息可以是滤波函数,该滤波函数中滤波器系数值为因变量,像素差值为自变量,例如,滤波函数可以是滤波器系数关于像素差值的递减指数函数。计算机设备可以将运动强度划分为多个区间,在每个区间设置相应的滤波函数,运动越强,滤波函数的曲线越陡峭,这样就能够避免在运动区域出现拖影,同时运动弱的图像块,采用较为平缓的滤波曲线,这样就能够在相对固定的背景上取得更好的降噪效果。 在一个具体的实施例中,滤波函数可以是指数函数。举例说明,参考图5,为一个实施例中,滤波函数的示意图,图5所示的实施例中,将运动强度划分两个区间,将小于预设运动强度阈值的划分为小运动强度区间,大于运动强度的划分为大运动强度区间,针对小运动强度区间,设计的滤波函数如图5中的虚线所示,针对大运动强度区间,设计的滤波函数如图5中的实线所示,由图5可以看出,大运动强度所对应的函数曲线相较于小运动强度对应的函数曲线更加陡峭。
具体地,计算机设备中预先保存多个滤波系数描述信息,每一个滤波系数描述信息对应一个参考运动强度信息,在进行滤波时,计算机设备可以从多个参考运动强度信息中确定与待处理块的运动强度匹配的参考运动强度信息,进而获取该参考运动强度信息对应的滤波系数描述信息作为与该运动强度匹配的滤波系数描述信息。
在一个实施例中,当参考图像有多个时,对于每一个待处理块可以得到多个运动强度,此时,对于每一个待处理块对应的每一个运动强度,计算机设备都需要获取到匹配的滤波系数描述信息。
步骤208,获取待处理块中的像素点和参考图像中对应位置像素点之间的目标像素差值,基于滤波系数描述信息确定与目标像素差值存在对应关系的目标滤波系数表征值。
具体地,对于待处理块中的每一个像素点,计算机设备可以计算该像素点与参考图像中对应位置像素点之间的差值,将该差值的绝对值作为目标像素差值,进而基于滤波系数描述信息确定与目标像素差值存在对应关系的目标滤波系数表征值。
在一个实施例中,当参考图像有多个时,由于每一个待处理块对应多个滤波系数描述信息,那么对于待处理块中每一个像素点的目标像素差值,计算机设备可以分别基于各个滤波系数描述信息确定与该目标像素差值存在对应关系的目标滤波系数表征值,从而每一个像素点都可以得到多个目标滤波系数表征值。
在一个实施例中,当滤波系数描述信息为滤波函数时,计算机设备可以将目标像素值代入该滤波函数中,计算出的函数值即为该目标像素值对应的目标滤波系数表征值。
步骤210,基于目标滤波系数表征值,确定待处理图像对应的目标降噪图像。
具体地,计算机设备可以基于各个像素点各自对应的目标滤波系数表征值,确定对待处理图像进行时域滤波得到的中间处理图像,进而基于中间处理图像得到待处理图像对应的目标降噪图像。
在一个实施例中,滤波系数表征值为滤波器系数值时,计算机设备可以参考上文中的公式(1),基于待处理块中各个像素点对应的目标滤波系数表征值对各个像素点进行时域滤波,得到各个像素点的时域滤波结果,将各个像素点的时域滤波结果作为待处理图像中的当前像素值对待处理图像进行更新后,得到中间处理图像。
在一个实施例中,在得到中间处理图像后,计算机设备可以直接将该中间处理图像作为待处理图像对应的目标降噪图像。在另一个实施例中,考虑到时域滤波后还会存在一些难以滤除的幅度较大的噪声,因此在时域操作后,计算机设备可以继续使用空域滤波,将空域滤波处理得到的图像作为目标降噪图像。这里的空域滤波是一种平滑化方法,其原理是自然图像的像素值在空间上比较平滑与连续,拍摄所得到的图像是自然图像加上了噪声,运用空域滤波旨在消除不平滑的噪声,得到平滑的自然图像。
在一个具体的实施例中,空域滤波可以采用高斯滤波和双边滤波中的至少一种。在另一个具体的实施例中,计算机设备可以获取已训练的用于空域滤波的深度学习模型,将中间处理图像输入到该深度学习模型中,通过该深度学习模型输出目标降噪图像。其中,用于空域滤波的深度学习模型可以采用有监督的训练方法训练得到,训练过程中的输入样本为未采用空域滤波的原始图像,训练标签为通过空域滤波得到的目标图像。
上述图像降噪方法中,根据参考图像确定待处理图像中待处理块的运动强度,获取与运动强度匹配的滤波系数描述信息,获取待处理块中的像素点和参考图像中对应位置像素点之间的目标像素差值,基于滤波系数描述信息确定与目标像素差值存在对应关系的目标 滤波系数表征值,基于目标滤波系数表征值,确定待处理图像对应的目标降噪图像,由于待处理块是对待处理图像进行划分得到的,对于待处理图像中的不同的待处理块可以确定不同的目标滤波系数表征值,得到的目标滤波系数表征值可以和图像中各个区域的运动情况精确匹配,避免了对整张图像进行运动强度估计导致的部分区域降噪效果差的问题,提高了待处理图像的降噪效果,另外,由于目标滤波系数表征值是滤波系数描述信息确定的,滤波系数描述信息用于描述像素差值与滤波系数表征值之间的对应关系,并且其中的滤波系数表征值与像素差值成负相关,滤波系数表征值的变化程度与运动强度成正相关,因此对于每一个像素值,都能获取到匹配该像素点的目标滤波系数表征值,进一步提高了待处理图像的降噪效果。
在一个实施例中,上述方法还包括:确定多个参考运动强度信息,并确定像素差值分布范围;分别基于每一个参考运动强度信息,确定在像素差值分布范围内的多个像素差值下的滤波系数表征值;建立各个滤波系数表征值和各自对应的像素差值之间的对应关系;将基于同一参考运动强度信息确定的滤波系数表征值所在的对应关系组成滤波系数描述信息,得到各个参考运动强度信息各自对应的滤波系数描述信息;获取与运动强度匹配的滤波系数描述信息,包括:从多个参考运动强度信息中确定与运动强度匹配的目标运动强度信息,将目标运动强度信息对应的滤波系数描述信息确定为与运动强度匹配的滤波系数描述信息。
其中,运动强度参考信息指的是用于作为参考来确定滤波系数描述信息的信息。运动强度参考信息可以是运动强度区间,或者是具体的数值。像素差值分布范围指的是所有可能的像素差值所分布的范围,像素差值分布范围可以为[0,255]。
具体地,计算机设备在确定了多个参考运动强度信息,并确定了像素差值分布范围后,针对每一个参考运动强度信息,在保证滤波系数表征值与像素差值成负相关,滤波系数表征值的变化程度与运动强度成正相关的前提下,确定在像素差值分布范围内各个可能的像素差值下的滤波系数表征值,举例说明,假设像素差值为整数,则针对每一个参考运动强度信息,计算机设备可以确定[0,255]内包括0,1,2……共256个像素差值下的滤波系数表征值。
进一步,计算机设备可以建立各个滤波系数表征值和各自对应的像素差值之间的对应关系,从这些对应关系中,将所包含的滤波系数表征值是基于同一参考运动强度信息确定的对应关系组成滤波系数描述信息,将该滤波系数描述信息作为该参考运动强度信息对应的滤波系数描述信息,从而得到各个参考运动强度信息各自对应的滤波系数描述信息。如上面的例子中,假设参考运动强度信息有2个,表示为X1和X2,基于X1确定的对应关系为(0,b10)、(1,b11)、(255,b1255),基于X2确定的对应关系为(0,b20)、(1,b21)、(255,b2255),其中b代表像素差值,那么计算机设备可以将(0,b10)、(1,b11)、(255,b1255)组成X1对应的滤波系数描述信息,并将(0,b20)、(1,b21)、(255,b2255)组成X2对应的滤波系数描述信息。
在一个具体的实施例中,对于每一个滤波系数描述信息,计算机设备可以将对应关系以如下表格形式进行存储,其中,该表格的索引值为像素差值,参考表1,例如,当目标像素差值为3时,滤波系数表征值为0.88。
表1
计算机设备进一步可以建立参考运动强度信息和各自对应的滤波系数描述信息之间的对应关系,基于该对应关系,在需要获取与待处理块的运动强度匹配的滤波系数描述信息时,计算机设备可以从多个参考运动强度信息中确定与该运动强度匹配的目标运动强度信息,将与该目标运动强度信息存在对应关系的滤波系数描述信息确定为与运动强度匹配的滤波系数描述信息。
在一个具体的实施例中,参考运动强度信息可以是具体的参考运动强度,为具体的数 值,那么计算机设备在确定与某个运动强度匹配的目标运动强度信息时,可以将这些参考运动强度中与该运动强度差值最小的参考运动强度确定为目标参考运动强度,将与该目标参考运动强度存在对应关系的滤波系数描述信息确定为与该运动强度匹配的滤波系数描述信息。举例说明,假设参考运动强度信息包括10、20、30,若某个待处理块的运动强度为12,则可以确定其对应的参考运动强度信息为10,进而将10对应的滤波系数描述信息确定为与该运动强度匹配的滤波系数描述信息。可以理解的是,在其他一些实施例中,计算机设备可以设置数量较多的参考运动强度,将每一个滤波系数描述信息与多个参考运动强度建立对应关系,从而使得建立的对应关系更加准确。例如。计算机设备可以设置10、12、14、16、18、20、22、24、26、28、30,将10、12、14、16与第一滤波系数描述信息建立对应关系,将18、20、22、24、26与第二滤波系数描述信息建立对应关系,将28、30与第三滤波系数描述信息建立对应关系。
上述实施例中,由于滤波系数描述信息描述了在每一个参考运动强度信息下,像素差值和滤波系数表征值之间的对应关系,进而在对待处理图像进行时域滤波时,可以直接将处理图像中的像素点对应的像素差值作为索引,从滤波系数描述信息中查询到对应的滤波系数表征值,避免了通过复杂的计算来得到滤波器系数,提高了滤波效率。
在一个实施例中,分别基于每一个参考运动强度信息,确定在像素差值分布范围内的多个像素差值下的滤波系数表征值,包括:分别基于每一个参考运动强度信息,确定像素差值分布范围内的多个像素差值下的目标滤波器系数;将各个目标滤波器系数与各自对应的像素差值相乘,得到各个像素差值下的滤波系数表征值;基于目标滤波系数表征值,确定待处理图像对应的目标降噪图像,包括:基于像素点的像素值和参考图像中对应位置像素点的像素值之间的大小关系,确定像素点的时域滤波结果,基于像素点的时域滤波结果确定待处理图像对应的目标降噪图像。
具体地,上文中的公式(1)可以进一步变换如下:
Yout(t)(i,j)=(Yout(t-1)(i,j)-Y(i,j))*k+Y(i,j)    (2)
其中,Yout(t-1)(i,j)-Y(i,j)为像素差值,k是滤波器系数,由公式(2)可以看出,如果预先计算出(Yout(t-1)(i,j)-Y(i,j))*k,那么当前时刻的滤波结果可以通过简单的加减法计算得到,从而可以进一步提高滤波效率,基于此,本实施例中,计算机设备可以首先分别基于每一个参考运动强度信息,确定像素差值分布范围内的多个像素差值下的目标滤波器系数,在确定目标滤波器系数时,保证滤波系数表征值与像素差值成负相关,滤波系数表征值的变化程度与运动强度成正相关。
在确定了目标滤波器系数后,将各个目标滤波器系数与各自对应的像素差值相乘,得到各个像素差值下的滤波系数表征值,即将(Yout(t-1)(i,j)-Y(i,j))*k作为滤波系数表征值。举例说明,假设表1中存储的为滤波器系数,则将各个滤波器系数与各自对应的像素差值相乘后得到的滤波系数表征值如下表2所示,表2中,索引值同样为像素差值,举例说明,假设目标像素差值为4,则可以通过查表得到滤波系数表征值为3.36。
表2
可以理解的是,这里与目标滤波器系数相乘的像素差值为像素差值的绝对值,因此(Yout(t-1)(i,j)-Y(i,j))*k的实际计算结果是正值还是负值,需要基于像素点的像素值和参考图像中对应位置像素点的像素值之间的大小关系来确定。
在一个实施例中，基于像素点的像素值和参考图像中对应位置像素点的像素值之间的大小关系，确定像素点的时域滤波结果，包括：当像素点的像素值大于或等于参考图像中对应位置像素点的像素值时，将像素点的像素值减去目标滤波系数表征值，得到像素点的时域滤波结果；当像素点的像素值小于或等于参考图像中对应位置像素点的像素值时，将像素点的像素值加上目标滤波系数表征值，得到像素点的时域滤波结果。
具体地，当像素点的像素值大于参考图像中对应位置像素点的像素值时，(Yout(t-1)(i,j)-Y(i,j))*k为负值，由公式(2)可以看出，此时的滤波结果相当于用像素点的像素值减去目标滤波系数表征值，当像素点的像素值小于参考图像中对应位置像素点的像素值时，(Yout(t-1)(i,j)-Y(i,j))*k为正值，此时的滤波结果相当于用像素点的像素值加上目标滤波系数表征值，而当像素点的像素值等于参考图像中对应位置像素点的像素值时，即Yout(t-1)(i,j)=Y(i,j)，此时(Yout(t-1)(i,j)-Y(i,j))*k为0，计算机设备将像素点的像素值减去目标滤波系数表征值，得到像素点的时域滤波结果，或者将像素点的像素值加上目标滤波系数表征值，得到像素点的时域滤波结果，可以看出，此时的时域滤波结果即为Yout(t-1)(i,j)或者Y(i,j)。
举例说明，如表2中，假设目标像素差值为3，查表得到目标滤波系数表征值为2.64，则当Yout(t-1)(i,j)<Y(i,j)时，时域滤波结果Yout(t)(i,j)=Y(i,j)-2.64，当Yout(t-1)(i,j)>Y(i,j)时，时域滤波结果Yout(t)(i,j)=Y(i,j)+2.64。
进一步地,计算机设备在得到待处理图像中各个像素点的时域滤波结果后,用时域滤波结果更新待处理图像中的像素值,即得到中间处理图像,基于该中间处理图像,计算机设备可以确定待处理图像对应的目标降噪图像。具体参考上文实施例中的描述。
上述实施例中，通过将各个目标滤波器系数与各自对应的像素差值相乘，得到各个像素差值下的滤波系数表征值，可以将复杂的函数运算转换为简单的加减法计算，进一步提高了时域滤波过程中的滤波效率，从而可以提升降噪效率。
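下面给出"预先将滤波器系数与像素差值相乘、滤波时仅做查表与加减法"这一思路的示意代码（查找表中系数的函数形式沿用上例的假设，阈值与数值范围仅作演示）：

```python
import numpy as np

def build_product_lut(strength, max_diff=255):
    """预先计算 |像素差值| * k（k 的函数形式沿用上例的假设），滤波时只需查表和加减法。"""
    diffs = np.arange(max_diff + 1, dtype=np.float32)
    k = np.exp(-diffs * strength / 64.0)
    return diffs * k

def temporal_filter_block(cur_block, ref_block, product_lut):
    """cur_block/ref_block：同尺寸的亮度块（uint8）；返回该块的时域滤波结果。"""
    cur = cur_block.astype(np.int16)
    ref = ref_block.astype(np.int16)
    diff = np.abs(cur - ref)                      # 目标像素差值（绝对值）
    delta = product_lut[diff]                     # 查表得到 |差值| * k
    # 当前像素值不小于参考像素值时做减法，否则做加法
    out = np.where(cur >= ref, cur - delta, cur + delta)
    return np.clip(out, 0, 255).astype(np.uint8)
```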
在一个实施例中，确定多个参考运动强度信息，包括：将运动强度分布范围划分为多个运动强度区间，将每一个运动强度区间作为一个参考运动强度信息；从多个参考运动强度信息中确定与运动强度匹配的目标运动强度信息，将目标运动强度信息对应的滤波系数描述信息确定为与运动强度匹配的滤波系数描述信息，包括：从多个运动强度区间中确定运动强度所属的目标区间，将目标区间对应的滤波系数描述信息确定为与运动强度匹配的滤波系数描述信息。
其中,运动强度分布范围指的是所有可能的运动强度所分布的范围。
本实施例中,计算机设备可以将运动强度分布范围进行划分为多个区间,得到多个运动强度区间,将每一个运动强度区间作为一个参考运动强度信息,建立运动强度区间与滤波系数描述信息之间的对应关系,进而在确定了待处理块的运动强度后,计算机设备可以判断该运动强度属于哪一个运动强度区间,将该运动强度所属的运动强度区间确定为目标区间,获取与该目标区间存在对应关系的滤波系数描述信息,将该滤波系数描述信息确定为与该运动强度匹配的滤波系数描述信息。
举例说明,假设运动强度分布范围为[a,b],可以对运动强度分布范围进行划分得到三个区间,分别为[a,c],(c,d),[d,b],其中,a<c<d<b,如果待处理块的某个运动强度属于(c,d),则可以将(c,d)确定为目标区间,将其对应的滤波系数描述信息确定为与该运动强度匹配的滤波系数描述信息。
上述实施例中,通过将运动强度分布范围进行划分为多个区间,将每一个运动强度区间作为一个参考运动强度信息,从而可以通过运动强度所属的运动强度区间来确定匹配的滤波系数描述信息,提高了所确定的滤波系数描述信息的准确性。
在一个实施例中,在根据参考图像确定待处理图像中待处理块的运动强度之前,图像降噪方法还包括:将待处理图像中的三原色通道数据转换为亮度通道数据、色度通道数据和浓度通道数据并提取其中的亮度通道数据,将参考图像中的三原色通道数据转换为亮度通道数据、色度通道数据和浓度通道数据并提取其中的亮度通道数据;根据参考图像确定待处理图像中待处理块的运动强度,包括:根据参考图像的亮度通道数据以及待处理块的亮度通道数据,确定待处理块的运动强度;获取待处理块中的像素点和参考图像中对应位置像素点之间的目标像素差值,包括:获取待处理块中的像素点和参考图像中对应位置像素点在亮度通道数据下的像素差值,得到目标像素差值;基于目标滤波系数表征值,确定待处理图像对应的目标降噪图像,包括:基于目标滤波系数表征值,确定对待处理图像的亮度通道数据进行降噪处理得到的目标降噪图像。
其中，三原色通道指的是R（red，红色）、G（green，绿色）和B（blue，蓝色）三个通道。显示器技术就是通过组合不同强度的红绿蓝三原色，来达成几乎任何一种可见光的颜色。在图像储存中，通过记录每个像素的红绿蓝强度，来记录图像的方式为RGB模型，常见的图像文件格式中，PNG和BMP这两种就是基于RGB模型的。除了RGB模型外，还有一种广泛采用的模型，称为YUV模型，又被称为亮度-色度模型（Luma-Chroma Model）。它是通过数学转换，将RGB三通道转换为一个代表亮度的通道（Y，又称为Luma），和两个代表色度的通道（UV，并称为Chroma）来记录图像的模型。
在一个具体的实施例中,已知将RGB三通道数据转为YUV三通道数据的公式具体如下式所示:
Y=0.299*R+0.587*G+0.114*B   (3)
U=-0.169*R-0.331*G+0.5*B   (4)
V=0.5*R-0.419*G-0.081*B   (5)
其中，Y表示亮度通道值、U表示色度通道值、V表示浓度通道值、R表示R通道值、G表示G通道值，以及B表示B通道值。
考虑到上述公式(3)至公式(5)为浮点型的运算,而浮点型的乘法运算,在计算机内部要经过阶码和尾数的运算,相对耗时,因此本实施例中通过数学上的一些变换,将浮点运算变换为整数运算。以Y通道为例,变换过程如下:
Y=0.299*R+0.587*G+0.114*B≈(128*(0.299*R+0.587*G+0.114*B))>>7≈(38*R+75*G+15*B)>>7
U通道和V通道可以参考Y通道进行同理转换，放大后的数值采取四舍五入的方式，经过如此转换，浮点运算就变成了整数乘法和位移运算，而位移运算效率很高，整数乘法的运算也会较浮点运算的效率高些，因此得到的Y通道数据、U通道数据和V通道数据均为整数值。
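以Y通道为例，上述整数化变换可以写成如下示意代码（U、V通道同理，系数按128倍放大并四舍五入；代码仅用于演示定点化思路，通道顺序等均为示例假设）：

```python
import numpy as np

def rgb_to_y_fixed_point(rgb: np.ndarray) -> np.ndarray:
    """rgb：H x W x 3 的 uint8 图像（按 R、G、B 顺序），返回整数化的 Y 通道。
    38/128、75/128、15/128 分别近似 0.299、0.587、0.114。"""
    r = rgb[..., 0].astype(np.uint32)
    g = rgb[..., 1].astype(np.uint32)
    b = rgb[..., 2].astype(np.uint32)
    y = (38 * r + 75 * g + 15 * b) >> 7            # 右移 7 位即除以 128
    return y.astype(np.uint8)
```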
在本实施例中,基于人眼对于图像中的亮度通道分量更加敏感的特点,计算机设备可以将待处理图像中的三原色通道数据,转换为Y通道数据、U通道数据和V通道数据后,仅提取其中的Y通道数据,对Y通道数据进行降噪处理,对于参考图像中的三原色通道数据,同样转换为Y通道数据、U通道数据和V通道数据,仅提取其中的Y通道数据,从而可以根据参考图像的亮度通道数据确定待处理块的亮度通道数据的运动强度,获取待处理块中的像素点和参考图像中对应位置像素点在亮度通道数据下的像素差值,得到目标像素差值,最后基于目标滤波系数表征值,确定对待处理图像的亮度通道数据进行降噪处理得到的目标降噪图像。
上述实施例中,将待处理的图像从RGB域转换到YUV域,之后仅对亮度通道Y进行降噪处理,节省了降噪过程中的计算量,提高了图像降噪效率。
在一个实施例中，基于目标滤波系数表征值，确定对待处理图像的亮度通道数据进行降噪处理得到的目标降噪图像，包括：基于目标滤波系数表征值，确定对待处理图像中的亮度通道数据进行时域滤波得到的中间处理数据；基于中间处理数据进行空域降噪，得到待处理图像对应的目标亮度数据；将目标亮度数据和待处理图像的色度通道数据、浓度通道数据进行组合并转换为三原色通道数据，得到目标降噪图像。
本实施例中,由于提取了待处理图像中的亮度通道数据,计算机设备在确定了各个像素点的目标滤波系数表征值后,可以确定对待处理图像中的亮度通道数据进行时域滤波得到的中间处理数据,进一步,可以基于中间处理数据进行空域降噪,得到亮度通道下的目标亮度数据,最后将目标亮度数据和待处理图像之前分离出的色度通道数据以及浓度通道数据进行组合,然后转换为三原色通道数据,即得到目标降噪图像。
上述实施例中,在对Y通道数据进行时域和空域地降噪处理后,进一步将Y通道数据、U通道数据和V通道数据进行组合并转换为RGB数据,得到的目标降噪图像可以更好地满足需求。
在一个实施例中，上述图像降噪方法还包括：基于待处理块的亮度通道数据确定待处理块的亮度表征值；在亮度表征值小于或等于预设亮度阈值的情况下，进入根据参考图像的亮度通道数据以及待处理块的亮度通道数据，确定待处理块的运动强度的步骤；在亮度表征值大于预设亮度阈值的情况下，将待处理块的亮度通道数据作为中间处理数据，并进入基于中间处理数据进行空域降噪，得到待处理图像对应的目标亮度数据的步骤。
其中，亮度表征值用于对待处理块的整体亮度进行表征。亮度表征值可以通过对待处理块中各个像素点的Y通道值进行统计得到，这里的统计可以是求和、求平均值或者求中位数中的其中一种。在一个具体的实施例中，假设当前块为Y，其宽高分别为h、w，则亮度表征值可以取块内亮度均值，通过以下公式(6)进行计算：
L=(∑(i,j)Y(i,j))/(h*w)    (6)
根据视觉特性,图像亮度高时,信噪比很高,此时人眼观察不到噪声信号,因此对于亮度值较高的区域进行降噪将带来不必要的性能浪费,本实施例中,通过统计得到待处理块的亮度表征值,对亮度表征值高于某一阈值的待处理块不做时域降噪处理,以达到节省性能的目的。
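对应地，块内亮度均值的统计与阈值判断可以示意如下（亮度阈值的取值为示例假设，并非本申请限定）：

```python
import numpy as np

def skip_temporal_denoise(y_block: np.ndarray, brightness_threshold: float = 200.0) -> bool:
    """按公式(6)统计块内亮度均值，均值高于阈值时跳过时域降噪以节省性能。"""
    return float(np.mean(y_block)) > brightness_threshold
```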
在一个具体实施例中,参考图6,为图像降噪方法的具体流程示意图,具体包括以下步骤:
步骤602,RGB转YUV并提取Y通道。
具体地,计算机设备可以将待处理图像由RGB格式转换为YUV格式,并提取其中的Y通道数据,将参考图像由RGB格式转换为YUV格式,并提取其中的Y通道数据。
步骤604,进行分块。
具体地,计算机设备可以对待处理图像进行划分,得到多个待处理块。
步骤606，判断块内亮度均值是否超过阈值，若是，则进入步骤612，若否，则进入步骤608。
具体地，计算机设备计算各个待处理块的亮度均值，对于每一个待处理块，若亮度均值大于亮度阈值，则进入步骤612，若亮度均值小于或等于亮度阈值，则进入步骤608。
步骤608,估计块内运动强度。
具体地,对于亮度均值小于或等于亮度阈值的待处理块,计算机设备可以基于参考图像的Y通道数据确定该待处理块的运动强度。
步骤610,在不同块根据运动强度选择不同时域滤波器。
具体地，在确定了该待处理块的运动强度之后，计算机设备可以获取与该运动强度匹配的滤波系数描述信息，从而对于该亮度均值小于或等于亮度阈值的待处理块中的每个像素点，计算机设备可以获取该像素点和参考图像中对应位置像素点之间的目标像素差值，基于滤波系数描述信息确定与目标像素差值存在对应关系的目标滤波系数表征值，从而可以得到各个像素点的目标滤波系数表征值，基于各个目标滤波系数表征值，计算机设备可以确定对该待处理块各个像素点的Y通道数据进行时域滤波得到的时域滤波结果，各个像素点的时域滤波结果组成该待处理块的中间处理数据。
步骤612,组合各分块。
步骤614,进行空域滤波。
具体地,计算机设备可以组合各个待处理块的中间处理数据,包括亮度均值大于预设亮度阈值的待处理块的中间处理数据,以及亮度均值小于或等于预设亮度阈值的待处理块的中间处理数据,对组合得到的中间处理数据进行空域降噪,得到待处理图像对应的目标亮度数据。其中,对于亮度均值大于预设亮度阈值的待处理块,由于未做时域降噪处理,其中间处理数据即为该待处理块原始的亮度通道数据。
步骤616,将Y与UV通道组合并完成YUV转RGB。
具体地,计算机设备将目标亮度数据和U通道数据、V通道数据进行组合,然后转换为RGB格式,最终得到目标降噪图像。
上述实施例中,由于将RGB格式数据转换为YUV格式数据,只对Y通道数据进行降噪处理,并且对亮度均值大于预设亮度阈值的待处理块不做时域降噪,可以在满足降噪效果的同时,节省计算机设备的性能,避免了性能浪费。
在一个实施例中,根据参考图像确定待处理图像中待处理块的运动强度包括:确定待处理块相对于参考图像的差异度,以及待处理块相对于参考图像的噪声强度;基于差异度和噪声强度,确定待处理块的运动强度;运动强度和差异度成正相关,运动强度和噪声强度成负相关。
其中，差异度用于表征待处理块相对于参考图像的差异大小，差异越大，差异度越大。噪声强度用于表征待处理块相对于参考图像的噪声大小，噪声越大，噪声强度越大。在一个实施例中，差异度可以通过对待处理块中发生运动的像素点相对于参考图像的像素差值进行统计得到，统计具体可以是求和、求平均值或者求中位值，噪声强度可以通过对待处理块中噪声像素点相对于参考图像的像素差值按照与差异度相同的统计方式进行统计得到。在具体应用中，待处理块中发生运动的像素点以及噪声像素点可以通过相对于参考图像的变化幅度来区分，比如，可以将像素差值大于预设阈值的像素点确定为发生运动的像素点，而将像素差值小于或等于预设阈值的像素点确定为噪声像素点。
具体地,对于待处理图像中的待处理块,计算机设备可以确定待处理块相对于参考图像中对应位置处图像块的差异度,以及待处理块相对于参考图像中对应位置处图像块的噪声强度,基于差异度和噪声强度,确定待处理块的运动强度。这里,运动强度和差异度成正相关关系,运动强度和噪声强度成负相关。
其中，正相关关系指的是：在其他条件不变的情况下，两个变量变动方向相同，一个变量由大到小变化时，另一个变量也由大到小变化。可以理解的是，这里的正相关关系是指变化的方向是一致的，但并不是要求当一个变量有一点变化，另一个变量就必须也变化。例如，可以设置当变量a为10至20时，变量b为100，当变量a为20至30时，变量b为120。这样，a与b的变化方向都是当a变大时，b也变大。但在a为10至20的范围内时，b可以是没有变化的。负相关关系指的是：一个变量由大到小变化时，另一个变量由小到大变化，即两个变量的变化方向是相反的。
可见,本实施例中,差异度越大时,运动强度越大,差异度越小时,运动强度越小;噪声强度越大时,运动强度越小,噪声强度越小时,运动强度越大。
在一个实施例中,计算机设备可以通过计算差异度和噪声强度的比值得到运动强度,即运动强度=差异度/噪声强度。
上述实施例中，由于可以基于差异度和噪声强度，确定待处理块的运动强度，并且运动强度和差异度成正相关，运动强度和噪声强度成负相关，该运动强度可以准确地反映待处理图像的噪声情况，提高了图像降噪的准确性。
在一个实施例中,确定待处理块相对于参考图像的差异度,以及待处理块相对于参考图像的噪声强度包括:获取待处理块中每一个像素点,相对于参考图像中对应位置像素点的像素差值;基于各个像素点对应的像素差值,确定待处理块中的噪声像素点和运动像素点;其中,运动像素点为像素差值大于预设差异阈值的像素点,噪声像素点为像素差值小于或等于预设差异阈值的像素点;统计各个噪声像素点的像素差值得到噪声强度,统计各个运动像素点的像素差值得到差异度。
其中,对于待处理图像中的某个像素点,在参考图像中与之位置对应的像素点指的是与该像素点的像素坐标相同的像素点,例如,假设待处理图像中的某个像素点的像素坐标为(x1,y1),则在参考图像中与之位置对应的像素点在参考图像中的像素坐标同样为(x1,y1)。
具体地,考虑到噪声的运动幅度一般比较小,本实施例中,可以预先设定差异阈值N,对于待处理块中的每一个像素点,当该像素点相对于参考图像中对应位置像素点的像素差值小于或等于该预设差异阈值时,则代表该位置处的像素点在前后帧中像素值差异不大,很有可能是噪声信号,那么计算机设备可以将该像素点确定为噪声像素点;反之,当该像素点相对于参考图像中对应位置像素点的像素差值大于该预设差异阈值时,代表该位置处的像素点很可能发生了移动,那么计算机设备可以将该像素点确定为运动像素点。
计算机设备可以统计该待处理块中各个噪声像素点的像素差值得到噪声强度,例如,计算机设备可以将各个噪声像素点的像素差值相加得到噪声强度。计算机设备可以统计该待处理块中各个运动像素点的像素差值得到差异度,例如,计算机设备可以将各个运动像素点的像素差值相加得到差异度。
需要说明的是，本实施例中，对于待处理块中的每一个像素点，计算机设备获取该像素点相对于参考图像中对应位置像素点的像素差值时，该像素差值指的是绝对差值，即假设待处理块中某个像素点的像素值为X，参考图像中与该像素点位置对应的像素点的像素值为Y，则像素差值为|X-Y|。
上述实施例中,基于像素差值与预设差异阈值之间的大小关系来确定噪声像素点和运动像素点,进而通过统计各个噪声像素点的像素差值可以得到噪声强度,统计各个运动像素点的像素差值可以得到差异度,由于像素差值可以反映各个位置处的像素点是否发生移动,从而提高了运动强度计算的准确性。
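该实施例中噪声像素点、运动像素点的划分以及运动强度的统计可以示意如下（预设差异阈值的取值为示例假设，统计方式以求和为例）：

```python
import numpy as np

def estimate_block_motion(cur_block, ref_block, diff_threshold=6):
    """按像素差值把块内像素分为噪声像素点与运动像素点，并统计得到运动强度。"""
    diff = np.abs(cur_block.astype(np.int16) - ref_block.astype(np.int16))
    motion_mask = diff > diff_threshold            # 运动像素点：差值大于阈值
    vs = float(diff[motion_mask].sum())            # 差异度：运动像素点差值之和
    vn = float(diff[~motion_mask].sum())           # 噪声强度：噪声像素点差值之和
    return vs / max(vn, 1.0)                       # 运动强度 = 差异度 / 噪声强度
```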
在一个实施例中,基于目标滤波系数表征值,确定待处理图像对应的目标降噪图像,包括:基于目标滤波系数表征值,确定对待处理图像进行时域滤波得到的中间处理图像,将中间处理图像分别作为输入图像和引导图像;对输入图像进行下采样得到第一采样图像,对引导图像进行下采样,得到第二采样图像;基于第二采样图像对第一采样图像进行引导滤波,得到目标图像;按照输入图像的尺寸对目标图像进行上采样,得到尺寸和输入图像一致的目标降噪图像。
本实施例中,计算机设备在得到中间处理图像后,将中间处理图像分别作为输入图像和引导图像,进一步对输入图像进行下采样使得输入图像按照目标缩放比例进行缩小,得到第一采样图像,对引导图像进行下采样使得引导图像按照目标缩放比例进行缩小得到第二采样图像,然后基于第二采样图像对第一采样图像进行引导滤波,得到目标图像,该目标图像仍然为尺寸缩小的图像,因此,计算机设备进一步按照输入图像的尺寸对目标图像进行上采样,使得目标图像按照目标缩放比例进行放大,从而得到尺寸和输入图像一致的目标降噪图像。
上述实施例中，在进行引导滤波时，由于输入图像和引导图像是同一个图像，因此能够在图像边缘处保留细节，而在平坦区域进行平滑，尽可能避免了降噪带来的图像模糊问题，同时由于在滤波过程中对输入图像和引导图像进行了下采样，可以降低滤波过程中的计算复杂度，节省了降噪成本。
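下面给出"下采样＋自引导滤波＋上采样"这一思路的简化示意（采用经典引导滤波的盒式滤波写法，eps、缩放比例、滤波半径等参数为示例假设，并非本申请限定的实现）：

```python
import cv2
import numpy as np

def fast_self_guided_filter(image: np.ndarray, radius=4, eps=0.01, scale=0.5) -> np.ndarray:
    """image：归一化到 [0,1] 的单通道 float32 图像，输入图像同时作为引导图像。"""
    h, w = image.shape
    small = cv2.resize(image, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean_i = cv2.blur(small, ksize)
    mean_ii = cv2.blur(small * small, ksize)
    var_i = mean_ii - mean_i * mean_i
    a = var_i / (var_i + eps)                      # 自引导时 cov(I, p) = var(I)
    b = mean_i - a * mean_i
    # 将线性系数上采样回原尺寸后，再作用于全分辨率的输入图像
    mean_a = cv2.resize(cv2.blur(a, ksize), (w, h), interpolation=cv2.INTER_LINEAR)
    mean_b = cv2.resize(cv2.blur(b, ksize), (w, h), interpolation=cv2.INTER_LINEAR)
    return mean_a * image + mean_b
```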
在一个具体的实施例中，本申请的图像降噪方法可应用于如图7所示的架构中，将摄像头采集的图像序列依次作为待处理图像，通过本申请提供的图像降噪方法得到目标降噪图像后，进行视频编码得到编码数据，将编码数据发送至云端，云端对视频数据进行解码后，将解码后的视频流展示给用户，并且计算机设备本地也可以对编码数据进行解码，并将解码后的视频流展示给用户。其中摄像头可来自于计算机设备的内置摄像头，也可以来自于外接摄像头，摄像头采集的原始图像存在噪声信号，通过本申请的图像降噪处理后能够产生干净的图像，提升了图像质量。同时因为滤除了噪声，后续的视频编码环节无需编码不规则的噪声信号，因此能产生较小的编码文件，节省了带宽和储存空间。
在一个实施例中,如图8所示,本申请还提供一种滤波数据处理方法,该方法由计算机设备执行,该计算机设备可以是图1中的终端102,也可以是图1中的服务器104,还可以是终端102和服务器104所构成的系统。具体地,该滤波数据处理方法包括以下步骤:
步骤802,获取多个参考运动强度信息,并确定像素差值分布范围。
步骤804,分别基于每一个参考运动强度信息,确定在像素差值分布范围内的多个像素差值下的滤波系数表征值。
其中，参考运动强度信息指的是用于作为参考来确定滤波系数描述信息的信息。参考运动强度信息可以是运动强度区间，或者是具体的数值。像素差值分布范围指的是所有可能的像素差值所分布的范围，像素差值分布范围可以为[0,255]。
具体地,计算机设备在获取到多个参考运动强度信息,并确定了像素差值分布范围后,针对每一个参考运动强度信息,在保证滤波系数表征值与像素差值成负相关,滤波系数表征值的变化程度与运动强度成正相关的前提下,确定在像素差值分布范围内各个可能的像素差值下的滤波系数表征值。
步骤806,建立各个滤波系数表征值和各自对应的像素差值之间的对应关系。
步骤808,将基于同一参考运动强度信息确定的滤波系数表征值所在的对应关系组成滤波系数描述信息,得到各个参考运动强度信息各自对应的滤波系数描述信息。
其中,滤波系数描述信息用于对待处理图像进行时域滤波。
具体地,计算机设备可以建立各个滤波系数表征值和各自对应的像素差值之间的对应关系,从这些对应关系中,将所包含的滤波系数表征值是基于同一参考运动强度信息确定的对应关系组成滤波系数描述信息,将该滤波系数描述信息作为该参考运动强度信息对应的滤波系数描述信息,从而得到各个参考运动强度信息各自对应的滤波系数描述信息。
在一个实施例中，计算机设备进一步可以建立参考运动强度信息和各自对应的滤波系数描述信息之间的对应关系。基于该对应关系，在需要对待处理图像进行时域滤波、获取与待处理图像中的待处理块的运动强度匹配的滤波系数描述信息时，计算机设备可以从多个参考运动强度信息中确定与该运动强度匹配的目标运动强度信息，将与该目标运动强度信息存在对应关系的滤波系数描述信息确定为与运动强度匹配的滤波系数描述信息，从而可以基于该滤波系数描述信息对待处理图像进行时域滤波。
上述滤波数据处理方法中,由于滤波系数描述信息描述了在每一个参考运动强度信息下,像素差值和滤波系数表征值之间的对应关系,进而在对待处理图像进行时域滤波时,可以直接将待处理图像中的像素点对应的像素差值作为索引,从滤波系数描述信息中查询到对应的滤波系数表征值,避免了通过复杂的计算来得到滤波器系数,提高了滤波效率。
在一个实施例中,分别基于每一个参考运动强度信息,确定在像素差值分布范围内的多个像素差值下的滤波系数表征值,包括:分别基于每一个参考运动强度信息,确定像素差值分布范围内的多个像素差值下的目标滤波系数;将各个目标滤波系数与各自对应的像素差值相乘,得到各个像素差值下的滤波系数表征值。
在一个实施例中，上述滤波数据处理方法还包括：获取待处理图像和参考图像；根据参考图像确定待处理图像中待处理块的运动强度；待处理块是对待处理图像进行划分得到的；获取与运动强度匹配的滤波系数描述信息，滤波系数描述信息用于描述像素差值与滤波系数表征值之间的对应关系；其中，滤波系数表征值与像素差值成负相关，滤波系数表征值的变化程度与运动强度成正相关；获取待处理块中的像素点和参考图像中对应位置像素点之间的目标像素差值，基于滤波系数描述信息确定与目标像素差值存在对应关系的目标滤波系数表征值；基于目标滤波系数表征值，确定待处理图像对应的目标降噪图像。
需要说明的是,关于该实施例的具体描述可参考上文实施例中的描述,本申请在此不赘述。
应该理解的是,虽然如上的各实施例所涉及的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,如上的各实施例所涉及的流程图中的至少一部分步骤可以包括多个步骤或者多个阶段,这些步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤中的步骤或者阶段的至少一部分轮流或者交替地执行。
在一个实施例中,本申请还提供一种应用场景,在该应用场景中,本申请的图像降噪方法应用于视频会议应用程序中,对视频会议中的视频帧进行降噪处理。具体地,计算机设备可以将会议过程中拍摄得到的视频帧序列,从第二帧开始依次作为待处理图像,对于每一个待处理图像,执行以下步骤:
1、将该待处理图像从RGB格式转换为YUV格式。
2、对待处理图像进行均匀划分,得到多个待处理块。
具体地,假设待处理图像的高和宽分别为H、W,可以将待处理图像划分为m行n列,则每个块的高和宽分别为H/m,W/n。
3、分别将待处理图像中的各个待处理块作为当前待处理块,对于当前待处理块Y,统计该待处理块Y内的亮度均值,判断亮度均值是否超过预设亮度阈值,当超过时,代表该待处理块亮度很高,人眼无法察觉噪声,则该待处理块无需处理,直接进入步骤9。反之,需要进行后续的时域降噪,进入步骤4。
4、进行块内运动强度估计。首先计算每个待处理块在当前时刻(当前帧)t与前一时刻(前一帧)t-1的帧差D，可公式化为D(t)=|Y(t)-Y(t-1)|。考虑到噪声的幅度一般比较小，人为设定一噪声阈值N，令(i,j)为位置索引，当D(t)(i,j)≤N时，则代表坐标(i,j)处的像素点在前后帧之间像素值差异不大，很有可能是噪声像素点，反之，当D(t)(i,j)>N时，代表该位置的像素点很可能发生了移动。定义矩阵VN和VS，VN用于表示噪声像素点的变化幅度，VS用于表示运动像素点的变化幅度，VN和VS的矩阵大小与当前待处理块Y的大小一致，当D(t)(i,j)≤N时，VN(i,j)=D(t)(i,j)且VS(i,j)=0；当D(t)(i,j)>N时，VN(i,j)=0且VS(i,j)=D(t)(i,j)。最后该待处理块的运动强度S可以通过以下公式(7)计算得到：
S=(∑(i,j)VS(i,j))/(∑(i,j)VN(i,j))    (7)
5、从多个运动强度区间中确定运动强度所属的目标区间,将目标区间对应的滤波系数描述信息确定为与运动强度匹配的滤波系数描述信息。
其中,滤波系数描述信息通过以下步骤预先得到并以表格的形式存储:
5.1、将运动强度分布范围进行划分,得到多个运动强度区间。
5.2、确定像素差值分布范围,分别基于每一个运动强度区间,确定在像素差值分布范围内的多个像素差值下的目标滤波器系数。
其中,像素差值分布范围为[0,255],像素差值分布范围内的多个像素差值为[0,255]内的各个整数值。在确定目标滤波器系数时,保证滤波器系数与像素差值成负相关,滤波器系数的变化程度与运动强度区间内的运动强度值成正相关。
5.3、将各个目标滤波器系数与各自对应的像素差值相乘,得到各个像素差值下的滤波系数表征值。
5.4、建立各个滤波系数表征值和各自对应的像素差值之间的对应关系。
5.5、将基于同一运动强度区间确定的滤波系数表征值所在的对应关系组成滤波系数描述信息,得到各个运动强度区间各自对应的滤波系数描述信息。
具体地,滤波系数描述信息可以以表格的形式进行存储,像素差值作为该表格的索引值。具体可以参考上文中的表2。
6、获取当前待处理块中各个像素点分别和参考图像中对应位置像素点之间的目标像素差值,将各个目标像素差值分别作为索引值,从与当前待处理块的运动强度匹配的滤波系数描述信息中,查询得到与各个目标像素差值存在对应关系的目标滤波系数表征值。
其中,参考图像为t-1时刻(即上一帧)的视频帧对应的中间处理图像,即将t-1时刻的视频帧作为待处理图像时,对该待处理图像进行时域滤波得到的图像,本实施例中,时域滤波过程采用递归滤波,每一帧视频帧在采用本实施例提供的图像降噪方法得到时域滤波输出的中间处理图像后,将该中间处理图像进行保存作为下一帧视频帧的参考图像。
7、对于当前待处理块中的各个像素点,当该像素点的像素值大于或等于参考图像中对应位置像素点的像素值时,将该像素点的像素值减去目标滤波系数表征值,得到该像素点的时域滤波结果,当该像素点的像素值小于或等于参考图像中对应位置像素点的像素值时,将该像素点的像素值加上目标滤波系数表征值,得到该像素点的时域滤波结果。
8、基于各个像素点的时域滤波结果确定对于当前待处理块进行时域滤波得到的中间处理数据。
9、组合各个待处理块的中间处理数据，对组合得到的中间处理数据进行空域降噪，得到待处理图像对应的目标亮度数据。
其中,对于亮度均值大于预设亮度阈值的待处理块,由于未做时域降噪处理,其中间处理数据即为该待处理块原始的亮度通道数据。
可以理解的是,此处组合得到的中间处理数据所形成的图像即为待处理图像对应的中间处理图像,可以将该中间处理图像进行保存,作为下一帧待处理图像的参考图像。
10、将目标亮度数据和待处理图像的U通道数据、V通道数据进行组合并转换为RGB数据,得到目标降噪图像。
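结合上述步骤1至10，下面给出以Y通道为处理对象的单帧降噪流程的一段示意代码（分块数、亮度阈值、差异阈值等取值均为示例假设；estimate_block_motion、temporal_filter_block、build_product_lut、fast_self_guided_filter沿用前文各示意代码段中的假设实现；U、V通道的组合与YUV转RGB此处从略）：

```python
import numpy as np

def denoise_y_frame(y_cur, y_ref, product_luts, strengths,
                    m=8, n=8, brightness_threshold=200.0, diff_threshold=6):
    """y_cur/y_ref：当前帧与参考帧（上一帧的中间处理图像）的 Y 通道，H x W 的 uint8 数组。
    product_luts 以参考运动强度为键，值为 build_product_lut 构造的查找表。
    返回 (中间处理图像, 空域降噪后的目标亮度数据)。"""
    h, w = y_cur.shape
    intermediate = y_cur.copy()
    bh, bw = h // m, w // n
    for bi in range(m):
        for bj in range(n):
            ys, xs = bi * bh, bj * bw
            cur = y_cur[ys:ys + bh, xs:xs + bw]
            ref = y_ref[ys:ys + bh, xs:xs + bw]
            if cur.mean() > brightness_threshold:          # 高亮块跳过时域降噪
                continue
            s = estimate_block_motion(cur, ref, diff_threshold)
            lut = product_luts[min(strengths, key=lambda x: abs(x - s))]
            intermediate[ys:ys + bh, xs:xs + bw] = temporal_filter_block(cur, ref, lut)
    # 中间处理图像可保存为下一帧的参考图像；空域部分沿用前文的自引导滤波示意
    denoised = fast_self_guided_filter(intermediate.astype(np.float32) / 255.0)
    target_y = np.clip(denoised * 255.0, 0, 255).astype(np.uint8)
    return intermediate, target_y
```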
在另一个实施例中，本申请还提供另一种应用场景，在该应用场景中，本申请的图像降噪方法应用于视频直播应用程序，对直播过程中的视频帧进行降噪处理，具体地，计算机设备可以将直播过程中采集得到的视频帧序列，从第二帧开始依次作为待处理图像，对于每一个待处理图像，执行本申请实施例提供的图像降噪方法得到目标降噪图像，从而可以提升直播过程中的视频视觉质量。其中，在本实施例中，滤波系数描述信息中存储的是滤波器系数和像素差值之间的对应关系，对于亮度均值小于或等于预设亮度阈值的待处理块中的各个像素点，计算机设备在获取到各个像素点对应的像素差值后，可以从与该待处理块的运动强度匹配的滤波系数描述信息中查询得到目标滤波器系数，进而可以通过上文中的公式(1)计算得到各个像素点的时域滤波结果。
结合上文实施例可知，本申请的图像降噪方法，通过对待处理图像进行分块，并统计每一块内的亮度、运动程度与噪声强度等信息，进而针对不同块采用不同强度的时域滤波器，以实现区域运动自适应降噪。经时域滤波后，接着对图像进行空域上的引导滤波，进一步消除残留噪声，并保留边缘信息，避免模糊，可以在较低计算量下，有效地解决运动拖影与降噪后图像模糊的问题，达到优秀的降噪效果。以下实施例分别举例对本申请的降噪效果进行说明。
在一个具体的实施例中，如图9所示，为本申请实施例提供的图像降噪方法的效果示意图，图9中的(a)图为降噪前的图像示意，图9中的(b)图为降噪后的图像示意，为更好地展示效果，图9中对部分区域(原图中的方框内的区域)进行了放大，由图9可以看出，降噪前的图像中存在很多噪声颗粒，通过本申请实施例提供的图像降噪方法进行降噪处理后，颗粒减少，图像变得清晰而平滑。并且，本申请实施例提供的图像降噪方法对背景区域和前景区域都可以达到很好的降噪效果。
在一个具体的实施例中,如图10所示,为本申请实施例提供的图像降噪方法的效果示意图,图10中的(a)图为降噪前的图像示意,图10中的(b)图为降噪后的图像示意,由图10可以看出,降噪前的图像中存在频繁抖动的噪点,造成不适,通过本申请实施例提供的图像降噪方法进行降噪处理后,这些抖动点将消失。
基于同样的发明构思,本申请实施例还提供了一种用于实现上述所涉及的图像降噪方法的图像降噪装置以及一种用于实现上述所涉及的滤波数据处理方法的滤波数据处理装置。该装置所提供的解决问题的实现方案与上述方法中所记载的实现方案相似,故下面所提供的一个或多个图像降噪装置、滤波数据处理装置实施例中的具体限定可以参见上文中各方法实施例中的限定。
在一个实施例中,如图11所示,提供了一种图像降噪装置1100,包括:
图像获取模块1102,用于获取待处理图像和参考图像;
运动强度确定模块1104,用于根据参考图像确定待处理图像中待处理块的运动强度;待处理块是对待处理图像进行划分得到的;
描述信息获取模块1106,用于获取与运动强度匹配的滤波系数描述信息,滤波系数描述信息用于描述像素差值与滤波系数表征值之间的对应关系;其中,滤波系数表征值与像素差值成负相关,滤波系数表征值的变化程度与运动强度成正相关;
滤波系数确定模块1108,用于获取待处理块中的像素点和参考图像中对应位置像素点之间的目标像素差值,基于滤波系数描述信息确定与目标像素差值存在对应关系的目标滤波系数表征值;
降噪图像确定模块1110,用于基于目标滤波系数表征值,确定待处理图像对应的目标降噪图像。
上述图像降噪装置中,由于待处理块是对待处理图像进行划分得到的,对于待处理图像中的不同的待处理块可以确定不同的目标滤波系数表征值,得到的目标滤波系数表征值可以和图像中各个区域的运动情况精确匹配,避免了对整张图像进行运动强度估计导致的部分区域降噪效果差的问题,提高了待处理图像的降噪效果,另外,由于目标滤波系数表征值是滤波系数描述信息确定的,滤波系数描述信息用于描述像素差值与滤波系数表征值之间的对应关系,并且其中的滤波系数表征值与像素差值成负相关,滤波系数表征值的变化程度与运动强度成正相关,因此对于每一个像素值,都能获取到匹配该像素点的目标滤波系数表征值,进一步提高了待处理图像的降噪效果。
在一个实施例中，上述图像降噪装置还包括：描述信息确定模块，用于确定多个参考运动强度信息，并确定像素差值分布范围；分别基于每一个参考运动强度信息，确定在像素差值分布范围内的多个像素差值下的滤波系数表征值；建立各个滤波系数表征值和各自对应的像素差值之间的对应关系；将基于同一参考运动强度信息确定的滤波系数表征值所在的对应关系组成滤波系数描述信息，得到各个参考运动强度信息各自对应的滤波系数描述信息；描述信息获取模块1106，还用于从多个参考运动强度信息中确定与运动强度匹配的目标运动强度信息，将目标运动强度信息对应的滤波系数描述信息确定为与运动强度匹配的滤波系数描述信息。
在一个实施例中,描述信息确定模块,还用于分别基于每一个参考运动强度信息,确定像素差值分布范围内的多个像素差值下的目标滤波器系数;将各个目标滤波器系数与各自对应的像素差值相乘,得到各个像素差值下的滤波系数表征值;降噪图像确定模块,还用于基于像素点的像素值和参考图像中对应位置像素点的像素值之间的大小关系,确定像素点的时域滤波结果,基于像素点的时域滤波结果确定待处理图像对应的目标降噪图像。
在一个实施例中,降噪图像确定模块,还用于当像素点的像素值大于或等于参考图像中对应位置像素点的像素值时,将像素点的像素值减去目标滤波系数表征值,得到像素点的时域滤波结果;当像素点的像素值小于或等于参考图像中对应位置像素点的像素值时,将像素点的像素值加上目标滤波系数表征值,得到像素点的时域滤波结果。
在一个实施例中,描述信息确定模块,还用于将运动强度分布范围划分为多个运动强度区间,将每一个运动强度区间作为一个参考运动强度信息;描述信息获取模块1106,还用于从多个运动强度区间中确定运动强度所属的目标区间,将目标区间对应的滤波系数描述信息确定为与运动强度匹配的滤波系数描述信息。
在一个实施例中,上述装置还包括格式转换模块,用于将待处理图像中的三原色通道数据转换为亮度通道数据、色度通道数据和浓度通道数据并提取其中的亮度通道数据,将参考图像中的三原色通道数据转换为亮度通道数据、色度通道数据和浓度通道数据并提取其中的亮度通道数据;运动强度确定模块,还用于根据参考图像的亮度通道数据以及待处理块的亮度通道数据,确定待处理块的运动强度;滤波系数确定模块,还用于获取待处理块中的像素点和参考图像中对应位置像素点在亮度通道数据下的像素差值,得到目标像素差值;降噪图像确定模块,还用于基于目标滤波系数表征值,确定对待处理图像的亮度通道数据进行降噪处理得到的目标降噪图像。
在一个实施例中,降噪图像确定模块,还用于基于目标滤波系数表征值,确定对待处理图像中的亮度通道数据进行时域滤波得到的中间处理数据;基于中间处理数据进行空域降噪,得到待处理图像对应的目标亮度数据;将目标亮度数据和待处理图像的色度通道数据、浓度通道数据进行组合并转换为三原色通道数据,得到目标降噪图像。
在一个实施例中,上述装置还包括:亮度表征值确定模块,用于基于待处理块的亮度通道数据确定待处理块的亮度表征值;在亮度表征值小于或等于预设亮度阈值的情况下,进入根据参考图像的亮度通道数据以及待处理块的亮度通道数据,确定待处理块的运动强度;在亮度表征值大于预设亮度阈值的情况下,将待处理块的亮度通道数据作为中间处理数据,并进入基于中间处理数据进行空域降噪,得到待处理图像对应的目标亮度数据的步骤。
在一个实施例中,运动强度确定模块,还用于确定待处理块相对于参考图像的差异度,以及待处理块相对于参考图像的噪声强度;基于差异度和噪声强度,确定待处理块的运动强度;运动强度和差异度成正相关,运动强度和噪声强度成负相关。
在一个实施例中,运动强度确定模块,还用于获取待处理块中每一个像素点,相对于参考图像中对应位置像素点的像素差值;基于各个像素点对应的像素差值,确定待处理块中的噪声像素点和运动像素点;其中,运动像素点为像素差值大于预设差异阈值的像素点,噪声像素点为像素差值小于或等于预设差异阈值的像素点;统计各个噪声像素点的像素差值得到噪声强度,统计各个运动像素点的像素差值得到差异度。
在一个实施例中,降噪图像确定模块,还用于基于目标滤波系数表征值,确定对待处理图像进行时域滤波得到的中间处理图像,将中间处理图像分别作为输入图像和引导图像;对输入图像进行下采样得到第一采样图像,对引导图像进行下采样,得到第二采样图像;基于第二采样图像对第一采样图像进行引导滤波,得到目标图像;按照输入图像的尺寸对目标图像进行上采样,得到尺寸和输入图像一致的目标降噪图像。
在一个实施例中,图像获取模块,还用于确定待降噪的目标视频;将目标视频中的视频帧作为待处理图像,从待处理图像对应的前向视频帧中确定目标视频帧;获取目标视频帧对应的目标降噪图像,将目标视频帧对应的目标降噪图像确定为待处理图像对应的参考图像。
在一个实施例中,如图12所示,提供了一种滤波数据处理装置1200,包括:
参考运动强度确定模块1202,用于确定多个参考运动强度信息,并确定像素差值分布范围;
表征值确定模块1204,用于分别基于每一个参考运动强度信息,确定在像素差值分布范围内的多个像素差值下的滤波系数表征值;
对应关系建立模块1206,用于建立各个滤波系数表征值和各自对应的像素差值之间的对应关系;
描述信息确定模块1208,用于将基于同一参考运动强度信息确定的滤波系数表征值所在的对应关系组成滤波系数描述信息,得到各个参考运动强度信息各自对应的滤波系数描述信息;其中,滤波系数描述信息用于对待处理图像进行时域滤波。
上述滤波数据处理方法、装置、计算机设备、存储介质和计算机程序产品,由于滤波系数描述信息描述了在每一个参考运动强度信息下,像素差值和滤波系数表征值之间的对应关系,进而在对待处理图像进行时域滤波时,可以直接将待处理图像中的像素点对应的像素差值作为索引,从滤波系数描述信息中查询到对应的滤波系数表征值,避免了通过复杂的计算来得到滤波器系数,提高了滤波效率。
在一个实施例中,表征值确定模块,用于分别基于每一个参考运动强度信息,确定像素差值分布范围内的多个像素差值下的目标滤波系数;将各个目标滤波系数与各自对应的像素差值相乘,得到各个像素差值下的滤波系数表征值。
上述图像降噪装置、滤波数据处理装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图13所示。该计算机设备包括处理器、存储器、输入/输出接口(Input/Output,简称I/O)和通信接口。其中,处理器、存储器和输入/输出接口通过系统总线连接,通信接口通过输入/输出接口连接到系统总线。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质和内存储器。该非易失性存储介质存储有操作系统、计算机程序和数据库。该内存储器为非易失性存储介质中的操作系统和计算机程序的运行提供环境。该计算机设备的数据库用于存储图像数据、滤波系数描述信息等数据。该计算机设备的输入/输出接口用于处理器与外部设备之间交换信息。该计算机设备的通信接口用于与外部的终端通过网络连接通信。该计算机程序被处理器执行时以实现一种图像降噪方法或者一种滤波数据处理方法。
在一个实施例中，提供了一种计算机设备，该计算机设备可以是终端，其内部结构图可以如图14所示。该计算机设备包括处理器、存储器、输入/输出接口、通信接口、显示单元和输入装置。其中，处理器、存储器和输入/输出接口通过系统总线连接，通信接口、显示单元和输入装置通过输入/输出接口连接到系统总线。其中，该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统和计算机程序。该内存储器为非易失性存储介质中的操作系统和计算机程序的运行提供环境。该计算机设备的输入/输出接口用于处理器与外部设备之间交换信息。该计算机设备的通信接口用于与外部的终端进行有线或无线方式的通信，无线方式可通过WIFI、移动蜂窝网络、NFC(近场通信)或其他技术实现。该计算机程序被处理器执行时以实现一种图像降噪方法或者一种滤波数据处理方法。该计算机设备的显示单元用于形成视觉可见的画面，可以是显示屏、投影装置或虚拟现实成像装置，显示屏可以是液晶显示屏或电子墨水显示屏，该计算机设备的输入装置可以是显示屏上覆盖的触摸层，也可以是计算机设备外壳上设置的按键、轨迹球或触控板，还可以是外接的键盘、触控板或鼠标等。
本领域技术人员可以理解,图13和图14中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机程序,该处理器执行计算机程序时实现上述图像降噪方法或者滤波数据处理方法的步骤。
在一个实施例中,提供了一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现上述图像降噪方法或者滤波数据处理方法的步骤。
在一个实施例中,提供了一种计算机程序产品,包括计算机程序,该计算机程序被处理器执行时实现上述图像降噪方法或者滤波数据处理方法的步骤。
需要说明的是,本申请所涉及的用户信息(包括但不限于用户设备信息、用户个人信息等)和数据(包括但不限于用于分析的数据、存储的数据、展示的数据等),均为经用户授权或者经过各方充分授权的信息和数据,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存、光存储器、高密度嵌入式非易失性存储器、阻变存储器(ReRAM)、磁变存储器(Magnetoresistive Random Access Memory,MRAM)、铁电存储器(Ferroelectric Random Access Memory,FRAM)、相变存储器(Phase Change Memory,PCM)、石墨烯存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或外部高速缓冲存储器等。作为说明而非局限,RAM可以是多种形式,比如静态随机存取存储器(Static Random Access Memory,SRAM)或动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。本申请所提供的各实施例中所涉及的数据库可包括关系型数据库和非关系型数据库中至少一种。非关系型数据库可包括基于区块链的分布式数据库等,不限于此。本申请所提供的各实施例中所涉及的处理器可为通用处理器、中央处理器、图形处理器、数字信号处理器、可编程逻辑器、基于量子计算的数据处理逻辑器等,不限于此。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请的保护范围应以所附权利要求为准。

Claims (20)

  1. 一种图像降噪方法,由计算机设备执行,所述方法包括:
    获取待处理图像和参考图像;
    根据所述参考图像确定所述待处理图像中待处理块的运动强度;所述待处理块是对所述待处理图像进行划分得到的;
    获取与所述运动强度匹配的滤波系数描述信息,所述滤波系数描述信息用于描述像素差值与滤波系数表征值之间的对应关系;
    其中,所述滤波系数表征值与所述像素差值成负相关,所述滤波系数表征值的变化程度与所述运动强度成正相关;及
    获取所述待处理块中的像素点和所述参考图像中对应位置像素点之间的目标像素差值,基于所述滤波系数描述信息确定与所述目标像素差值存在对应关系的目标滤波系数表征值;
    基于所述目标滤波系数表征值,确定所述待处理图像对应的目标降噪图像。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    确定多个参考运动强度信息,并确定像素差值分布范围;
    分别基于每一个参考运动强度信息,确定在像素差值分布范围内的多个像素差值下的滤波系数表征值;
    建立各个滤波系数表征值和各自对应的像素差值之间的对应关系;
    将基于同一参考运动强度信息确定的滤波系数表征值所在的对应关系组成滤波系数描述信息,得到各个参考运动强度信息各自对应的滤波系数描述信息;及
    所述获取与所述运动强度匹配的滤波系数描述信息,包括:
    从所述多个参考运动强度信息中确定与所述运动强度匹配的目标运动强度信息,将所述目标运动强度信息对应的滤波系数描述信息确定为与所述运动强度匹配的滤波系数描述信息。
  3. 根据权利要求2所述的方法,其特征在于,所述分别基于每一个参考运动强度信息,确定在像素差值分布范围内的多个像素差值下的滤波系数表征值,包括:
    分别基于每一个参考运动强度信息,确定像素差值分布范围内的多个像素差值下的目标滤波器系数;
    将各个目标滤波器系数与各自对应的像素差值相乘,得到各个像素差值下的滤波系数表征值;及
    所述基于所述目标滤波系数表征值,确定所述待处理图像对应的目标降噪图像,包括:
    基于所述像素点的像素值和所述参考图像中对应位置像素点的像素值之间的大小关系,确定所述像素点的时域滤波结果,基于所述像素点的时域滤波结果确定所述待处理图像对应的目标降噪图像。
  4. 根据权利要求3所述的方法,其特征在于,所述基于所述像素点的像素值和所述参考图像中对应位置像素点的像素值之间的大小关系,确定所述像素点的时域滤波结果,包括:
    当所述像素点的像素值大于或等于所述参考图像中对应位置像素点的像素值时,将所述像素点的像素值减去所述目标滤波系数表征值,得到所述像素点的时域滤波结果;及
    当所述像素点的像素值小于或等于所述参考图像中对应位置像素点的像素值时,将所述像素点的像素值加上所述目标滤波系数表征值,得到所述像素点的时域滤波结果。
  5. 根据权利要求2至4中任意一项所述的方法,其特征在于,所述确定多个参考运动强度信息包括:
    将运动强度分布范围划分为多个运动强度区间,将每一个运动强度区间作为一个参考运动强度信息;及
    所述从所述多个参考运动强度信息中确定与所述运动强度匹配的目标运动强度信息,将所述目标运动强度信息对应的滤波系数描述信息确定为与所述运动强度匹配的滤波系数描述信息,包括:
    从所述多个运动强度区间中确定所述运动强度所属的目标区间,将所述目标区间对应的滤波系数描述信息确定为与所述运动强度匹配的滤波系数描述信息。
  6. 根据权利要求1所述的方法,其特征在于,在所述根据所述参考图像确定所述待处理图像中待处理块的运动强度之前,所述方法还包括:
    将所述待处理图像中的三原色通道数据转换为亮度通道数据、色度通道数据和浓度通道数据并提取其中的亮度通道数据,将所述参考图像中的三原色通道数据转换为亮度通道数据、色度通道数据和浓度通道数据并提取其中的亮度通道数据;
    所述根据所述参考图像确定所述待处理图像中待处理块的运动强度,包括:
    根据所述参考图像的亮度通道数据以及所述待处理块的亮度通道数据,确定所述待处理块的运动强度;
    所述获取所述待处理块中的像素点和所述参考图像中对应位置像素点之间的目标像素差值,包括:
    获取所述待处理块中的像素点和所述参考图像中对应位置像素点在亮度通道数据下的像素差值,得到目标像素差值;及
    所述基于所述目标滤波系数表征值,确定所述待处理图像对应的目标降噪图像,包括:
    基于所述目标滤波系数表征值,确定对所述待处理图像的亮度通道数据进行降噪处理得到的目标降噪图像。
  7. 根据权利要求6所述的方法,其特征在于,所述基于所述目标滤波系数表征值,确定对所述待处理图像的亮度通道数据进行降噪处理得到的目标降噪图像,包括:
    基于所述目标滤波系数表征值,确定对所述待处理图像中的亮度通道数据进行时域滤波得到的中间处理数据;
    基于所述中间处理数据进行空域降噪,得到所述待处理图像对应的目标亮度数据;及将所述目标亮度数据和所述待处理图像的色度通道数据、浓度通道数据进行组合并转换为三原色通道数据,得到目标降噪图像。
  8. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    基于所述待处理块的亮度通道数据确定所述待处理块的亮度表征值;
    在所述亮度表征值小于或等于预设亮度阈值的情况下,进入所述根据所述参考图像的亮度通道数据以及所述待处理块的亮度通道数据,确定所述待处理块的运动强度;及
    在所述亮度表征值大于预设亮度阈值的情况下,将所述待处理块的亮度通道数据作为中间处理数据,并进入所述基于所述中间处理数据进行空域降噪,得到所述待处理图像对应的目标亮度数据的步骤。
  9. 根据权利要求1所述的方法,其特征在于,所述根据所述参考图像确定所述待处理图像中待处理块的运动强度,包括:
    确定所述待处理块相对于所述参考图像的差异度,以及所述待处理块相对于所述参考图像的噪声强度;及
    基于所述差异度和所述噪声强度,确定所述待处理块的运动强度;所述运动强度和所述差异度成正相关,所述运动强度和所述噪声强度成负相关。
  10. 根据权利要求9所述的方法,其特征在于,所述确定所述待处理块相对于所述参考图像的差异度,以及所述待处理块相对于所述参考图像的噪声强度包括:
    获取所述待处理块中每一个像素点,相对于所述参考图像中对应位置像素点的像素差值;
    基于各个像素点对应的像素差值，确定所述待处理块中的噪声像素点和运动像素点；其中，所述运动像素点为像素差值大于预设差异阈值的像素点，所述噪声像素点为像素差值小于或等于预设差异阈值的像素点；及
    统计各个噪声像素点的像素差值得到噪声强度,统计各个运动像素点的像素差值得到差异度。
  11. 根据权利要求1所述的方法,其特征在于,所述基于所述目标滤波系数表征值,确定所述待处理图像对应的目标降噪图像,包括:
    基于所述目标滤波系数表征值,确定对所述待处理图像进行时域滤波得到的中间处理图像,将所述中间处理图像分别作为输入图像和引导图像;
    对所述输入图像进行下采样得到第一采样图像,对所述引导图像进行下采样,得到第二采样图像;
    基于所述第二采样图像对所述第一采样图像进行引导滤波,得到目标图像;及
    按照所述输入图像的尺寸对所述目标图像进行上采样,得到尺寸和所述输入图像一致的目标降噪图像。
  12. 根据权利要求1至11中任意一项所述的方法,其特征在于,所述获取待处理图像和参考图像,包括:
    确定待降噪的目标视频;
    将所述目标视频中的视频帧作为待处理图像,从所述待处理图像对应的前向视频帧中确定目标视频帧;及
    获取所述目标视频帧对应的目标降噪图像,将所述目标视频帧对应的目标降噪图像确定为所述待处理图像对应的参考图像。
  13. 一种滤波数据处理方法,由计算机设备执行,所述方法包括:
    确定多个参考运动强度信息,并确定像素差值分布范围;
    分别基于每一个参考运动强度信息,确定在像素差值分布范围内的多个像素差值下的滤波系数表征值;
    建立各个滤波系数表征值和各自对应的像素差值之间的对应关系;及
    将基于同一参考运动强度信息确定的滤波系数表征值所在的对应关系组成滤波系数描述信息,得到各个参考运动强度信息各自对应的滤波系数描述信息;
    其中,所述滤波系数描述信息用于对待处理图像进行时域滤波。
  14. 根据权利要求13所述的方法,其特征在于,所述分别基于每一个参考运动强度信息,确定在像素差值分布范围内的多个像素差值下的滤波系数表征值,包括:
    分别基于每一个参考运动强度信息,确定像素差值分布范围内的多个像素差值下的目标滤波系数;及
    将各个目标滤波系数与各自对应的像素差值相乘,得到各个像素差值下的滤波系数表征值。
  15. 根据权利要求13至14任意一项所述的方法,其特征在于,所述方法还包括:
    获取待处理图像和参考图像;
    根据所述参考图像确定所述待处理图像中待处理块的运动强度;所述待处理块是对所述待处理图像进行划分得到的;
    获取与所述运动强度匹配的滤波系数描述信息,所述滤波系数描述信息用于描述像素差值与滤波系数表征值之间的对应关系,所述滤波系数表征值与所述像素差值成负相关,所述滤波系数表征值的变化程度与所述运动强度成正相关;及
    获取所述待处理块中的像素点和所述参考图像中对应位置像素点之间的目标像素差值,基于所述滤波系数描述信息确定与所述目标像素差值存在对应关系的目标滤波系数表征值;
    基于所述目标滤波系数表征值,确定所述待处理图像对应的目标降噪图像。
  16. 一种图像降噪装置,其特征在于,所述装置包括:
    图像获取模块,用于获取待处理图像和参考图像;
    运动强度确定模块,用于根据所述参考图像确定所述待处理图像中待处理块的运动强度;所述待处理块是对所述待处理图像进行划分得到的;
    描述信息获取模块,用于获取与所述运动强度匹配的滤波系数描述信息,所述滤波系数描述信息用于描述像素差值与滤波系数表征值之间的对应关系;其中,所述滤波系数表征值与所述像素差值成负相关,所述滤波系数表征值的变化程度与所述运动强度成正相关;
    滤波系数确定模块,用于获取所述待处理块中的像素点和所述参考图像中对应位置像素点之间的目标像素差值,基于所述滤波系数描述信息确定与所述目标像素差值存在对应关系的目标滤波系数表征值;及
    降噪图像确定模块,用于基于所述目标滤波系数表征值,确定所述待处理图像对应的目标降噪图像。
  17. 一种滤波数据处理装置,其特征在于,所述装置包括:
    参考运动强度确定模块,用于确定多个参考运动强度信息,并确定像素差值分布范围;
    表征值确定模块,用于分别基于每一个参考运动强度信息,确定在像素差值分布范围内的多个像素差值下的滤波系数表征值;
    对应关系建立模块,用于建立各个滤波系数表征值和各自对应的像素差值之间的对应关系;及
    描述信息确定模块,用于将基于同一参考运动强度信息确定的滤波系数表征值所在的对应关系组成滤波系数描述信息,得到各个参考运动强度信息各自对应的滤波系数描述信息;其中,所述滤波系数描述信息用于对待处理图像进行时域滤波。
  18. 一种计算机设备,包括存储器和处理器,所述存储器存储有计算机程序,其特征在于,所述处理器执行所述计算机程序时实现权利要求1至12或者13至15中任一项所述的方法的步骤。
  19. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现权利要求1至12或者13至15中任一项所述的方法的步骤。
  20. 一种计算机程序产品,包括计算机程序,其特征在于,该计算机程序被处理器执行时实现权利要求1至12或者13至15中任一项所述的方法的步骤。
PCT/CN2023/084674 2022-05-27 2023-03-29 图像降噪、滤波数据处理方法、装置和计算机设备 WO2023226584A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210590036.4 2022-05-27
CN202210590036.4A CN115131229A (zh) 2022-05-27 2022-05-27 图像降噪、滤波数据处理方法、装置和计算机设备

Publications (1)

Publication Number Publication Date
WO2023226584A1 true WO2023226584A1 (zh) 2023-11-30

Family

ID=83378442

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/084674 WO2023226584A1 (zh) 2022-05-27 2023-03-29 图像降噪、滤波数据处理方法、装置和计算机设备

Country Status (2)

Country Link
CN (1) CN115131229A (zh)
WO (1) WO2023226584A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115131229A (zh) * 2022-05-27 2022-09-30 腾讯科技(深圳)有限公司 图像降噪、滤波数据处理方法、装置和计算机设备
CN116205810B (zh) * 2023-02-13 2024-03-19 爱芯元智半导体(上海)有限公司 视频降噪方法、装置和电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020080882A1 (en) * 2000-12-21 2002-06-27 Matsushita Electric Industrial Co., Ltd. Noise reducing apparatus and noise reducing method
CN1901620A (zh) * 2005-07-19 2007-01-24 中兴通讯股份有限公司 一种基于运动检测和自适应滤波的视频图像降噪方法
CN102238316A (zh) * 2010-04-29 2011-11-09 北京科迪讯通科技有限公司 一种3d数字视频图像的自适应实时降噪方案
CN111652814A (zh) * 2020-05-26 2020-09-11 浙江大华技术股份有限公司 一种视频图像的去噪方法、装置、电子设备及存储介质
CN113612996A (zh) * 2021-07-30 2021-11-05 百果园技术(新加坡)有限公司 一种基于时域滤波的视频降噪的方法及装置
CN115131229A (zh) * 2022-05-27 2022-09-30 腾讯科技(深圳)有限公司 图像降噪、滤波数据处理方法、装置和计算机设备

Also Published As

Publication number Publication date
CN115131229A (zh) 2022-09-30

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23810659

Country of ref document: EP

Kind code of ref document: A1