CN117135329A - Image processing method, device, display equipment and storage medium - Google Patents


Publication number
CN117135329A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211558732.3A
Other languages
Chinese (zh)
Inventor
田其冲
谢岸煌
谢仁礼
吴有肇
Current Assignee
Shenzhen TCL New Technology Co Ltd
Original Assignee
Shenzhen TCL New Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen TCL New Technology Co Ltd filed Critical Shenzhen TCL New Technology Co Ltd
Priority to CN202211558732.3A
Publication of CN117135329A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/64: Circuits for processing colour signals
    • H04N 9/646: Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/14: Picture signal circuitry for video frequency region
    • H04N 5/21: Circuitry for suppressing or minimising disturbance, e.g. moiré or halo

Abstract

The application provides an image processing method, an image processing device, a display device, and a storage medium. A first image of the current time sequence and first brightness information of the first image are acquired; contrast enhancement processing is performed on the first image to obtain a target image, and second brightness information of the target image is acquired. If the first brightness information is smaller than a first brightness threshold and the brightness difference between the first brightness information and the second brightness information is larger than a second brightness threshold, noise reduction processing is performed on the target image through a preset noise filter to obtain a reference image. Each target pixel point is then classified according to the pixel values of the target image and the reference image at that pixel point to obtain its pixel point type; if the pixel point type is a noise pixel point, the pixel value of the target image at the target pixel point is updated according to the pixel value of the reference image at that pixel point. The method improves the contrast of the first image while suppressing the noise information in the image.

Description

Image processing method, device, display equipment and storage medium
Technical Field
The present application relates to the field of image technologies, and in particular, to an image processing method, an image processing device, a display device, and a storage medium.
Background
Contrast enhancement is one of the core technologies of television display and is widely used in television products. Contrast is a measure of the difference in brightness levels between the bright and dark areas of an image: the higher the contrast, the richer the content gradation in the image and the stronger and more vivid the light-dark and color contrast; conversely, low contrast means less content gradation, weak contrast, and a flat picture. However, increasing the contrast of an image also amplifies the noise in the image, which degrades image quality.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image processing method, apparatus, display device, and storage medium that reduce the noise information in an image and suppress the side effects of contrast enhancement.
In a first aspect, the present application provides an image processing method, the method comprising:
acquiring a first image of a current time sequence and first brightness information of the first image;
performing contrast enhancement processing on the first image to obtain a target image, and acquiring second brightness information of the target image;
if the first brightness information is smaller than a first brightness threshold and a brightness difference between the first brightness information and the second brightness information is larger than a second brightness threshold, performing noise reduction processing on the target image through a preset noise filter to obtain a reference image;
classifying the target pixel points according to the pixel values of the target image and the reference image on the target pixel points to obtain the pixel point types of the target pixel points;
if the pixel point type of the target pixel point is the noise pixel point, updating the pixel value of the target image on the target pixel point according to the pixel value of the reference image on the target pixel point.
In some embodiments of the present application, the step of performing contrast enhancement processing on the first image to obtain the target image includes:
acquiring a brightness histogram of a first image;
acquiring a target brightness scene type of the first image according to the brightness histogram of the first image and the first brightness information;
and carrying out contrast enhancement processing on the first image based on a dynamic contrast curve corresponding to the target brightness scene type to obtain a target image.
In some embodiments of the present application, before the step of obtaining the reference image, performing noise reduction processing on the target image by using a preset noise filter, the method further includes:
acquiring Gaussian filter parameters corresponding to the target image according to the first brightness information, the second brightness information and the second brightness threshold;
and constructing a Gaussian filter by taking the Gaussian filter parameter as a standard deviation to obtain a noise filter corresponding to the target image.
In some embodiments of the present application, the step of performing contrast enhancement processing on the first image to obtain the target image includes:
acquiring a second image of the previous time sequence and third brightness information of the second image;
if the brightness difference between the first brightness information and the third brightness information is smaller than a third brightness threshold, acquiring a dynamic contrast curve corresponding to the second image;
and carrying out contrast enhancement processing on the first image based on the dynamic contrast curve to obtain a target image.
In some embodiments of the present application, before the step of obtaining the reference image, performing noise reduction processing on the target image by using a preset noise filter, the method further includes:
obtaining a low-pass filter coefficient corresponding to the second image;
and constructing a noise filter corresponding to the target image based on the low-pass filter coefficient.
In some embodiments of the present application, the step of classifying the target pixel according to the pixel values of the target image and the reference image on the target pixel to obtain the pixel type of the target pixel includes:
acquiring a first pixel value on a target pixel point in the target image;
acquiring a second pixel value on a target pixel point in a reference image;
calculating a pixel difference between the first pixel value and the second pixel value;
and if the pixel difference value is in the preset pixel value area range, determining a target pixel point in the target image as a noise pixel point.
In some embodiments of the present application, the endpoint values of the pixel value area range include a first pixel threshold and a second pixel threshold, wherein the first pixel threshold is smaller than the second pixel threshold;
after the step of calculating the pixel difference between the first pixel value and the second pixel value, the method further comprises:
if the pixel difference value is smaller than the first pixel threshold value, determining the target pixel point as a pixel point of a gentle region;
if the pixel difference value is larger than the second pixel threshold value, determining the target pixel point as a pixel point of the edge area;
and if the pixel difference value is larger than or equal to the first pixel threshold and smaller than or equal to the second pixel threshold, determining the target pixel point as a noise pixel point.
In a second aspect, the present application provides an image processing apparatus comprising:
the image acquisition module is used for acquiring a first image of the current time sequence and first brightness information of the first image;
the image enhancement module is used for carrying out contrast enhancement processing on the first image to obtain a target image and acquiring second brightness information of the target image;
the image noise reduction module is used for carrying out noise reduction processing on the target image through a preset noise filter to obtain a reference image when the first brightness information is smaller than a first brightness threshold value and the brightness difference value between the first brightness information and the second brightness information is larger than a second brightness threshold value;
the pixel point classification module is used for classifying the target pixel points according to the pixel values of the target image and the reference image on the target pixel points to obtain the pixel point types of the target pixel points;
and the pixel point correction module is used for updating the pixel value of the target image on the target pixel point according to the pixel value of the reference image on the target pixel point when the pixel point type of the target pixel point is the noise pixel point.
In a third aspect, the present application also provides a display apparatus comprising:
one or more processors;
a memory; and
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to implement the image processing method.
In a fourth aspect, the present application also provides a computer readable storage medium having stored thereon a computer program to be loaded by a processor for performing the steps of the image processing method.
According to the image processing method, device, display equipment, and storage medium described above, a first image of the current time sequence and first brightness information of the first image are acquired; contrast enhancement processing is performed on the first image to obtain a target image, and second brightness information of the target image is acquired. If the first brightness information is smaller than the first brightness threshold and the brightness difference between the first brightness information and the second brightness information is larger than the second brightness threshold, noise reduction processing is performed on the target image through a preset noise filter to obtain a reference image. The target pixel points are then classified according to the pixel values of the target image and the reference image at each target pixel point to obtain the pixel point type; if the pixel point type is a noise pixel point, the pixel value of the target image at that pixel point is updated according to the pixel value of the reference image. In this way, after contrast enhancement processing produces the target image, noise reduction processing is performed on the target image to obtain a reference image, and the pixel values of noise pixel points in the target image are replaced with the corresponding pixel values of the reference image, so that noise information in the image is suppressed while the image contrast is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an image processing system according to an embodiment of the present application;
FIG. 2 is a flow chart of an image processing method according to an embodiment of the application;
FIG. 3 is a schematic diagram of images corresponding to different brightness scene types and dynamic contrast curves according to an embodiment of the present application;
FIG. 4 is a flowchart of the object size recognition model acquisition step in an embodiment of the present application;
fig. 5 is a schematic diagram of the structure of an image processing apparatus in the embodiment of the present application;
fig. 6 is a schematic diagram of a computer device in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present application, the word "for example" is used to mean "serving as an example, instance, or illustration. Any embodiment described as "for example" in this disclosure is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known structures and processes have not been described in detail so as not to obscure the description of the application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
Contrast is a measure of the difference in brightness levels between bright and dark areas in an image. A high-contrast image shows strong light-dark contrast and vivid color contrast; conversely, a low-contrast image shows weak contrast and a flat picture. Contrast enhancement is therefore one of the core technologies of television display and is widely used in television products. However, when an image depicts a dark scene, increasing its contrast also enhances the noise in the image: in a relatively dark picture (i.e., a dark scene) the noise may go unnoticed, but after the contrast of the image is increased, the noise is amplified and becomes clearly perceptible. To address this technical problem, the present application provides an image processing method that, after the contrast of an image is increased, judges whether the image scene corresponding to the image requires noise reduction processing, thereby suppressing the noise enhancement caused by the contrast improvement.
The image processing method provided by the embodiments of the present application can be applied to the image processing system shown in FIG. 1. The image processing system includes a terminal 110 and a server 120, where the terminal 110 may be a computing device with a display, such as a television, a mobile phone, a tablet computer, or a notebook computer. The server 120 may be a stand-alone server or a server cluster, including but not limited to a computer, a network host, a single network server, a set of multiple network servers, or a cloud server composed of multiple servers, where a cloud server consists of a large number of computers or web servers based on cloud computing.
It will be appreciated by those skilled in the art that the application environment shown in FIG. 1 is merely one application scenario of the present application and does not limit its application scenarios; other application environments may include more or fewer computer devices than shown in FIG. 1. For example, only one server 120 is shown in FIG. 1, but the image processing system may also include one or more other servers, which is not limited herein. In addition, as shown in FIG. 1, the image processing system may also include a memory for storing data, such as image data.
It should be further noted that, the schematic view of the image processing system shown in fig. 1 is only an example, and the image processing system and the scene described in the embodiment of the present application are for more clearly describing the technical solution of the embodiment of the present application, and do not constitute a limitation on the technical solution provided by the embodiment of the present application, and those skilled in the art can know that, with the evolution of the image processing system and the appearance of a new service scene, the technical solution provided by the embodiment of the present application is equally applicable to similar technical problems.
Referring to fig. 2, an embodiment of the present application provides an image processing method, mainly for the terminal 110 in fig. 1, which includes steps S210 to S250, and is specifically as follows:
Step S210, acquiring a first image of a current time sequence and first brightness information of the first image.
The first image refers to unprocessed image information acquired by the terminal; more specifically, the first image may be image data of a certain time-series frame in the video data. The first luminance information may be an average pixel luminance value (average pixel level, APL) in the first image.
Specifically, the terminal acquires the first image of the current time sequence and calculates the average pixel luminance value in the first image based on its pixel data, which may be denoted as cAPL.
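The APL computation described above can be sketched as follows. This is a minimal illustration, assuming the frame's luma channel is available as an 8-bit array; the function and variable names are hypothetical, not from the patent.

```python
import numpy as np

def average_pixel_level(luma: np.ndarray) -> float:
    """Mean luminance (APL) over all pixels of a frame's luma channel."""
    return float(luma.mean())

# cAPL for a hypothetical 4x4 luma patch of the current frame
frame = np.array([[10, 20, 30, 40]] * 4, dtype=np.uint8)
cAPL = average_pixel_level(frame)  # 25.0
```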
Step S220, contrast enhancement processing is performed on the first image to obtain a target image, and second brightness information of the target image is obtained.
The target image is image data obtained after the contrast of the first image is enhanced, and the second brightness information is average pixel brightness value in the target image.
After the first image is acquired, the first image may be subjected to a contrast enhancement process by using a dynamic contrast curve to acquire a corresponding target image, and an average pixel brightness value in the target image is calculated based on pixel data in the target image, which may be denoted as ccAPL. The dynamic contrast curve refers to a function curve for adjusting the contrast of an image, and may be, for example, a gamma curve.
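Since the text gives a gamma curve as one example of a dynamic contrast curve, the enhancement step can be sketched as a per-pixel lookup table. The LUT implementation and the gamma value 0.8 are illustrative assumptions, not the patent's specific curve.

```python
import numpy as np

def apply_gamma_curve(luma: np.ndarray, gamma: float) -> np.ndarray:
    """Apply a gamma-style dynamic contrast curve via a 256-entry LUT."""
    lut = np.round(255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return lut[luma]

frame = np.array([[0, 64, 128, 255]], dtype=np.uint8)
# gamma < 1 lifts mid-tones, a typical choice for brightening a dark scene
target = apply_gamma_curve(frame, gamma=0.8)
ccAPL = float(target.mean())  # second brightness information of the target image
```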
Wherein the dynamic contrast curve may be determined based on a brightness scene type to which the first image belongs, and in one embodiment, the step of performing contrast enhancement processing on the first image to obtain the target image includes: acquiring a brightness histogram of a first image; acquiring a target brightness scene type of the first image according to the brightness histogram of the first image and the first brightness information; and carrying out contrast enhancement processing on the first image based on a dynamic contrast curve corresponding to the target brightness scene type to obtain a target image.
Wherein, in the luminance histogram, the horizontal axis represents the luminance value in the first image, for example gradually transitioning from full black on the left to full white on the right, and the vertical axis represents the relative number of pixels of the first image within each luminance range.
The brightness scene type identifies the brightness level of an image. For example, the brightness scene types may comprise a dark scene, a medium-bright scene, and a bright scene, whose corresponding image frames are successively brighter; as another example, these three types may be further subdivided to define nine brightness scene types. Referring to FIG. 3, FIG. 3 shows schematic views of a bright scene type, a medium-bright scene type, and a dark scene type, together with the dynamic contrast curves corresponding to the different brightness scenes. It can be understood that the darker the image frame corresponding to a brightness scene type, the larger the curvature of its dynamic contrast curve, i.e. the stronger the contrast enhancement effect.
Specifically, dynamic contrast curves corresponding to different brightness scene types can be preset; after the luminance histogram and the first luminance information of the first image are obtained, the first image can be classified according to the luminance histogram and the first luminance information of the first image so as to determine the luminance scene type of the first image; for example, in the luminance histogram of the first image, most of the pixels are distributed over the luminance interval [0,30], and the cAPL is smaller than 50, the target luminance scene type of the first image is considered as a dark scene. And finally, carrying out contrast enhancement processing on the first image based on a dynamic contrast curve corresponding to the target brightness scene type of the first image to obtain a target image.
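The dark-scene test described above can be sketched as a small classifier. The [0, 30] interval and the APL < 50 rule come from the text's example; the majority fraction 0.5 and the medium/bright split at APL 150 are assumptions added for illustration.

```python
import numpy as np

def classify_luminance_scene(luma: np.ndarray) -> str:
    """Toy scene classifier from the luminance histogram and APL."""
    hist, _ = np.histogram(luma, bins=256, range=(0, 256))
    apl = float(luma.mean())
    # Fraction of pixels in the dark interval [0, 30] (bins are 1 level wide)
    dark_fraction = hist[:31].sum() / luma.size
    if dark_fraction > 0.5 and apl < 50:
        return "dark"
    return "medium" if apl < 150 else "bright"

night_frame = np.full((8, 8), 12, dtype=np.uint8)
scene = classify_luminance_scene(night_frame)  # "dark"
```

The terminal would then select the preset dynamic contrast curve keyed by the returned scene type.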
Further, the dynamic contrast curve of the image under the previous time sequence can be used as the dynamic contrast curve of the image under the current time sequence; specifically, in one embodiment, the step of performing contrast enhancement processing on the first image to obtain the target image includes: acquiring a second image of the previous time sequence and third brightness information of the second image; if the brightness difference value between the first brightness value information and the third brightness value information is smaller than the third brightness threshold value, acquiring a dynamic contrast curve corresponding to the second image; and carrying out contrast enhancement processing on the first image based on the dynamic contrast curve to obtain a target image.
The second image is the image data of the previous time sequence corresponding to the first image; for example, if the first image is the image data at time t in the video stream, the second image is the image data at time (t-1) in the video stream. The third luminance information may be the average pixel luminance value in the second image. Specifically, the terminal acquires the second image of the previous time sequence and calculates the average pixel luminance value in the second image based on its pixel data, which may be denoted as pAPL.
After the third luminance information of the second image is obtained, a luminance difference value between the first luminance information of the first image and the third luminance information of the second image may be calculated, and in particular, the luminance difference value may be represented by the following formula (1):
sAPL=|cAPL-pAPL| (1)
where sAPL represents the luminance difference value, cAPL represents the first luminance information of the first image, and pAPL represents the third luminance information of the second image.
The third brightness threshold is used to judge whether the brightness information of two images is similar. When the brightness difference between the first brightness information and the third brightness information is larger than or equal to the third brightness threshold, the brightness information of the first image and the second image is considered dissimilar. When the brightness difference is smaller than the third brightness threshold, the brightness information of the first image and the second image is considered similar; in that case the dynamic contrast curve corresponding to the second image can be used directly to perform contrast enhancement processing on the first image, without recalculating a dynamic contrast curve for the first image, which improves the efficiency of obtaining the target image.
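The curve-reuse decision built on formula (1) can be sketched as below. The function name and the concrete value of the third brightness threshold are assumptions for illustration.

```python
def reuse_previous_curve(cAPL: float, pAPL: float, th3: float) -> bool:
    """Formula (1): sAPL = |cAPL - pAPL|.

    Reuse the previous frame's dynamic contrast curve only when the two
    frames' brightness is similar (sAPL below the third threshold TH3).
    """
    sAPL = abs(cAPL - pAPL)
    return sAPL < th3

# Hypothetical values: consecutive frames with close APL reuse the curve
same_scene = reuse_previous_curve(cAPL=40.0, pAPL=42.0, th3=5.0)   # True
scene_cut = reuse_previous_curve(cAPL=40.0, pAPL=90.0, th3=5.0)    # False
```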
In step S230, if the first luminance information is smaller than the first luminance threshold and the luminance difference between the first luminance information and the second luminance information is larger than the second luminance threshold, the noise reduction processing is performed on the target image through the preset noise filter, so as to obtain the reference image.
The first brightness threshold and the second brightness threshold are used to judge whether the target image needs filtering processing. Specifically, when the image frame is dark (for example, a dark scene), noise in the image may go unnoticed, but after the contrast of the image is increased, the noise is also enhanced. Since the APL range of a dark scene is [0,30], the first brightness threshold and the second brightness threshold may be set according to the length of this range; for example, TH1 is 30, and TH2 may take the value TH2=(30-0)/5=6. It can be understood that when the first luminance information is smaller than the first luminance threshold and the luminance difference between the first luminance information and the second luminance information is larger than the second luminance threshold, the target image is considered to need filtering, and a preset noise filter can be obtained to perform noise reduction on the target image; when the first luminance information is greater than or equal to the first luminance threshold, or the luminance difference between the first luminance information and the second luminance information is less than or equal to the second luminance threshold, the target image is considered not to need filtering and can be output as the final image data.
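The gating condition of step S230 can be sketched as below, using the example values TH1=30 and TH2=6 from the text. The text speaks of "the luminance difference" between the first and second luminance information; using the signed increase ccAPL - cAPL is an assumption here (contrast enhancement of a dark scene typically raises the APL; an absolute difference would also fit the wording).

```python
def needs_noise_reduction(cAPL: float, ccAPL: float,
                          th1: float = 30.0, th2: float = 6.0) -> bool:
    """Filter only dark frames (cAPL < TH1) whose brightness was raised
    noticeably by contrast enhancement (ccAPL - cAPL > TH2)."""
    return cAPL < th1 and (ccAPL - cAPL) > th2

# Dark frame brightened a lot by enhancement -> apply the noise filter
assert needs_noise_reduction(cAPL=20.0, ccAPL=30.0)
# Frame not dark, or barely changed -> output the target image as-is
assert not needs_noise_reduction(cAPL=40.0, ccAPL=50.0)
assert not needs_noise_reduction(cAPL=20.0, ccAPL=25.0)
```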
The noise filter is used to denoise the target image; noise reduction processing is performed on the target image through the noise filter to obtain the reference image.
Specifically, the noise filter may be acquired based on the first luminance information of the first image and the second luminance information of the target image; in one embodiment, before the step of performing noise reduction processing on the target image by using a preset noise filter to obtain the reference image, the method further includes: acquiring Gaussian filter parameters corresponding to the target image according to the first brightness information, the second brightness information and the second brightness threshold; and constructing a Gaussian filter by taking the Gaussian filter parameter as a standard deviation to obtain a noise filter corresponding to the target image.
The method for acquiring the Gaussian filter parameters is shown in the following formula (2):
where σ denotes the Gaussian filter parameter, ccAPL denotes the second luminance information of the target image, cAPL denotes the first luminance information of the first image, and TH2 denotes the second luminance threshold. It can be understood that the larger the Gaussian filter parameter, the larger the difference between the picture of the first image and the picture of the target image, i.e. the larger the difference between the pictures before and after contrast enhancement, the more and the stronger the noise that may be present in the target image, and the stronger the noise reduction effect required of the noise filter.
After the Gaussian filter parameter is obtained, a 3×3 Gaussian filter with the Gaussian filter parameter as its standard deviation can be constructed as the noise filter corresponding to the target image. Taking a Gaussian filter parameter equal to 0.5 as an example, the constructed (normalized) Gaussian filter is shown in the following formula (3):

G = [[0.0113, 0.0838, 0.0113], [0.0838, 0.6193, 0.0838], [0.0113, 0.0838, 0.0113]] (3)
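Construction of the 3×3 Gaussian kernel with σ = 0.5 can be sketched as follows; the kernel weights follow directly from the Gaussian function after normalization.

```python
import numpy as np

def gaussian_kernel_3x3(sigma: float) -> np.ndarray:
    """Normalized 3x3 Gaussian kernel with the given standard deviation."""
    ax = np.array([-1.0, 0.0, 1.0])
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()  # weights sum to 1

k = gaussian_kernel_3x3(0.5)
# center weight ~0.6193, edge weights ~0.0838, corner weights ~0.0113
```

Convolving the target image with this kernel yields the reference image; a larger σ spreads the weights outward and smooths more strongly, matching the relationship described above.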
further, when the first image at the current timing and the second image at the previous timing are images with luminance information close to each other, the noise filter of the second image at the previous timing may be used as the noise filter of the first image at the current timing. Specifically, in one embodiment, before the step of performing noise reduction processing on the target image by using a preset noise filter to obtain the reference image, the method further includes: obtaining a low-pass filter coefficient corresponding to the second image; and constructing a noise filter corresponding to the target image based on the low-pass filter coefficient.
Specifically, the second image of the previous time sequence and the third brightness information of the second image can be acquired, and whether the first image and the second image have similar brightness information is judged based on the brightness difference between the first brightness information of the first image and the third brightness information of the second image. If the first image of the current time sequence and the second image of the previous time sequence have similar brightness information, the low-pass filter coefficient corresponding to the second image is acquired, and the noise filter corresponding to the target image is then constructed based on the low-pass filter coefficient.
Step S240, classifying the target pixel according to the pixel values of the target image and the reference image on the target pixel to obtain the pixel type of the target pixel.
The pixel points on the target image are determined in turn as the target pixel point, and each target pixel point can be classified through the difference between the pixel values of the target image and the reference image at that pixel point, so as to determine whether the target pixel point is a noise pixel point.
In one embodiment, the step of classifying the target pixel according to the pixel values of the target image and the reference image on the target pixel to obtain the pixel type of the target pixel includes: acquiring a first pixel value on a target pixel point in a target image; acquiring a second pixel value on a target pixel point in a reference image; calculating a pixel difference between the first pixel value and the second pixel value; and if the pixel difference value is in the preset pixel value area range, determining a target pixel point in the target image as a noise pixel point.
Specifically, a pixel value area range is set; after obtaining the pixel difference value between the first pixel value and the second pixel value, judging whether the pixel difference value falls into a pixel value area range, and if the pixel difference value falls into the pixel value area range, determining the target pixel point as a noise pixel point.
Further, in one embodiment, the endpoint values of the pixel value area range are a first pixel threshold value and a second pixel threshold value, where the first pixel threshold value is smaller than the second pixel threshold value, and after obtaining the pixel difference value between the first pixel value and the second pixel value, the target pixel point may be classified according to the pixel difference value, the first pixel threshold value and the second pixel threshold value, which is specifically as follows:
if the pixel difference value between the first pixel value and the second pixel value is smaller than the first pixel threshold value, the target pixel point is considered to be the pixel point of the gentle region;
if the pixel difference value between the first pixel value and the second pixel value is larger than the second pixel threshold value, the target pixel point is considered to be the pixel point of the edge area;
and if the pixel difference value between the first pixel value and the second pixel value is larger than or equal to the first pixel threshold value and smaller than or equal to the second pixel threshold value, the target pixel point is considered to be a noise pixel point.
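The three-way classification above can be sketched as a single comparison chain; the concrete threshold values here are placeholders, since the text only requires that the first pixel threshold be smaller than the second:

```python
# Hypothetical threshold values; the text only requires TH4 < TH5.
FIRST_PIXEL_THRESHOLD = 2    # below this: pixel of the gentle region
SECOND_PIXEL_THRESHOLD = 10  # above this: pixel of the edge area

def classify_pixel(first_value, second_value):
    """Classify a target pixel by the absolute difference between the
    target-image pixel value and the reference-image pixel value."""
    diff = abs(int(first_value) - int(second_value))
    if diff < FIRST_PIXEL_THRESHOLD:
        return "gentle"
    if diff > SECOND_PIXEL_THRESHOLD:
        return "edge"
    return "noise"  # FIRST_PIXEL_THRESHOLD <= diff <= SECOND_PIXEL_THRESHOLD
```

Only the middle band is treated as noise: very small differences mean the filter barely changed the pixel (flat area), and very large differences mean the filter smoothed a real edge that should be preserved.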
Step S250, if the pixel type of the target pixel is noise pixel, updating the pixel value of the target image on the target pixel according to the pixel value of the reference image on the target pixel.
After the target pixel point is determined to be the noise pixel point, the pixel value of the target pixel point in the target image can be modified to be the pixel value of the reference image on the target pixel point, so that the noise information is deleted.
In the image processing method, a first image of the current time sequence and first brightness information of the first image are obtained; contrast enhancement processing is performed on the first image to obtain a target image, and second brightness information of the target image is obtained. If the first brightness information is smaller than a first brightness threshold and the brightness difference between the first brightness information and the second brightness information is larger than a second brightness threshold, noise reduction processing is performed on the target image through a preset noise filter to obtain a reference image. The target pixel points are then classified according to the pixel values of the target image and the reference image at the target pixel points to obtain the pixel point type of each target pixel point, and if the pixel point type of a target pixel point is the noise pixel point, the pixel value of the target image at that pixel point is updated according to the pixel value of the reference image at the same point. In this way, after the contrast enhancement processing produces the target image, noise reduction processing is performed on the target image to obtain the reference image, and the pixel values of the noise pixel points in the target image are replaced by the corresponding pixel values of the reference image, so that the noise introduced by contrast enhancement is suppressed while the image contrast is improved.
The above image processing method is further described below, specifically, the image processing method includes the steps of:
step S401, acquiring a first image of a current time sequence and first brightness information of the first image.
Specifically, the terminal acquires a first image of the current timing, and calculates an average pixel luminance value in the first image based on pixel data in the first image, which may be denoted as cAPL.
Step S402, acquiring the second image of the previous time sequence and the third brightness information of the second image.
Specifically, the terminal acquires the second image of the previous time sequence, and calculates the average pixel brightness value in the second image based on the pixel data in the second image, which may be denoted as pAPL.
Step S403, judging whether the brightness difference between the first brightness information and the third brightness information is smaller than the third brightness threshold; if so, executing step S404; if the brightness difference between the first brightness information and the third brightness information is greater than or equal to the third brightness threshold, executing step S407.
Specifically, the absolute value sAPL of the difference between the first luminance information cAPL of the first image and the third luminance information pAPL of the second image is calculated, that is, sAPL = |cAPL - pAPL|. If sAPL is smaller than the third luminance threshold, the first image and the second image are picture-continuous images with close luminance information, and the relevant parameters corresponding to the second image can be applied to the first image, where the relevant parameters include the dynamic contrast curve, the low-pass filter coefficient, and the like. If sAPL is greater than or equal to the third luminance threshold, the first image and the second image are not picture-continuous, the luminance information difference between them is large, and the dynamic contrast curve and low-pass filter coefficient corresponding to the first image need to be recalculated in the subsequent steps.
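The reuse decision in step S403 reduces to a single comparison; a minimal sketch, where the third brightness threshold is a placeholder parameter:

```python
def can_reuse_previous_parameters(c_apl, p_apl, third_threshold):
    """Return True when the current frame's average luminance (cAPL) is close
    enough to the previous frame's (pAPL) that the previous frame's dynamic
    contrast curve and low-pass filter coefficient can be reused."""
    s_apl = abs(c_apl - p_apl)  # sAPL = |cAPL - pAPL|
    return s_apl < third_threshold
```

Reusing the previous frame's parameters for picture-continuous frames avoids recomputing the dynamic contrast curve and the Gaussian filter for every frame of a video stream.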
Step S404, a dynamic contrast curve and a low-pass filter coefficient corresponding to the second image are obtained.
Step S405, performing contrast enhancement processing on the first image based on the dynamic contrast curve to obtain a target image, and acquiring second brightness information of the target image.
Step S406, constructing a noise filter corresponding to the target image based on the low-pass filter coefficient.
If the brightness information between the first image and the second image is similar, the related parameters corresponding to the second image can be used for acting on the first image, namely, the dynamic contrast curve corresponding to the second image is used for carrying out contrast enhancement processing on the first image, and meanwhile, a noise filter for carrying out noise reduction processing on the target image is constructed based on the low-pass filter coefficient corresponding to the second image.
Step S407, a luminance histogram of the first image is acquired.
In step S408, a target luminance scene type of the first image is obtained according to the luminance histogram of the first image and the first luminance information.
Step S409, performing contrast enhancement processing on the first image based on the dynamic contrast curve corresponding to the target brightness scene type to obtain a target image, and obtaining second brightness information of the target image.
If the brightness information between the first image and the second image is not similar, a dynamic contrast curve acting on the first image can be determined according to the target brightness scene type of the first image.
After the first image is contrast-enhanced based on the dynamic contrast curve, a target image may be obtained, the target image may be denoted as img_1, and the second luminance information of the target image may be denoted as ccAPL.
In step S410, if the first luminance information is smaller than the first luminance threshold and the luminance difference between the first luminance information and the second luminance information is larger than the second luminance threshold, the gaussian filter parameters corresponding to the target image are obtained according to the first luminance information, the second luminance information and the second luminance threshold.
Specifically, if the condition cAPL < TH1 and |ccAPL - cAPL| > TH2 is satisfied, a low-pass filter is required to perform low-pass filtering on the target image to realize the noise reduction processing. The low-pass filter may be a Gaussian filter, and the method for obtaining the Gaussian filter parameter of the Gaussian filter is shown in the following formula (4):
where σ denotes the Gaussian filter parameter, ccAPL denotes the second luminance information of the target image, cAPL denotes the first luminance information of the first image, and TH2 denotes the second luminance threshold. It can be understood that the larger the Gaussian filter parameter, the larger the difference between the picture of the first image and the picture of the target image, that is, the larger the difference between the pictures before and after contrast enhancement; in that case more and stronger noise may be present in the target image, and a noise filter with a stronger noise reduction effect is required.
And S411, constructing a Gaussian filter by taking the Gaussian filter parameter as a standard deviation to obtain a noise filter corresponding to the target image.
After the Gaussian filter parameter is obtained, a 3×3 Gaussian filter with the Gaussian filter parameter as its standard deviation can be constructed as the noise filter corresponding to the target image. Taking the Gaussian filter parameter equal to 0.5 as an example, the constructed Gaussian filter is shown in the following formula (5):
in step S412, the noise reduction process is performed on the target image through a preset noise filter, so as to obtain a reference image.
Wherein the reference image after the noise reduction process may be denoted as img_2.
In step S413, the target pixel is classified according to the pixel values of the target image and the reference image on the target pixel, so as to obtain the pixel type of the target pixel.
After the target image img_1 and the reference image img_2 are determined, the difference image before and after the low-pass filtering process, i.e., img_diff = |img_2 - img_1|, may be acquired.
Further, two pixel thresholds are set, denoted as a first pixel threshold TH4 and a second pixel threshold TH5, where TH4 < TH5. If the pixel value corresponding to the pixel point img[i, j] (i.e. the pixel point of the ith row and the jth column) in the difference image is smaller than the first pixel threshold TH4, i.e. img_diff[i, j] < TH4, the pixel point img[i, j] is considered to be a pixel point of the gentle region; if the pixel value corresponding to the pixel point img[i, j] in the difference image is larger than the second pixel threshold TH5, i.e. img_diff[i, j] > TH5, the pixel point img[i, j] is considered to be a pixel point of the edge area; if the pixel value corresponding to the pixel point img[i, j] in the difference image is larger than or equal to the first pixel threshold TH4 and smaller than or equal to the second pixel threshold TH5, i.e. TH4 <= img_diff[i, j] <= TH5, the pixel point img[i, j] is considered to be a noise pixel point.
In step S414, if the pixel type of the target pixel is noise pixel, the pixel value of the target image on the target pixel is updated according to the pixel value of the reference image on the target pixel.
Specifically, the noise pixel points satisfying the condition TH4 <= img_diff[i, j] <= TH5 are obtained, and the pixel value of each such noise pixel point in the target image is modified to the pixel value of the reference image at the corresponding pixel point; that is, the pixel value of each pixel point in the target image satisfying TH4 <= img_diff[i, j] <= TH5 is modified to img_2[i, j], while the pixel values of the other pixel points remain img_1[i, j].
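The selective replacement of step S414 can be written as one vectorized masking operation; a sketch assuming 8-bit single-channel images and placeholder threshold values:

```python
import numpy as np

def suppress_noise_pixels(img_1, img_2, th4, th5):
    """Replace only the noise-classified pixels of the contrast-enhanced target
    image img_1 with the corresponding pixels of the denoised reference image
    img_2; pixels whose difference lies in [th4, th5] are treated as noise."""
    img_diff = np.abs(img_2.astype(np.int32) - img_1.astype(np.int32))
    noise_mask = (img_diff >= th4) & (img_diff <= th5)
    out = img_1.copy()
    out[noise_mask] = img_2[noise_mask]  # other pixels keep img_1[i, j]
    return out
```

Casting to a signed type before subtracting avoids the wrap-around that unsigned 8-bit subtraction would produce, and the boolean mask leaves gentle-region and edge-region pixels untouched so that edges sharpened by contrast enhancement are preserved.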
Further, the value of cAPL corresponding to the first image, the dynamic contrast curve, and the low-pass filter coefficient (or Gaussian filter coefficient) are recorded for processing of the corresponding image at the next time.
In order to better implement the image processing method provided by the embodiment of the present application, on the basis of the image processing method provided by the embodiment of the present application, an image processing apparatus is further provided in the embodiment of the present application, as shown in fig. 5, an image processing apparatus 500 includes:
an image obtaining module 510, configured to obtain a first image at a current time sequence and first luminance information of the first image;
the image enhancement module 520 is configured to perform contrast enhancement processing on the first image to obtain a target image, and obtain second brightness information of the target image;
The image noise reduction module 530 is configured to perform noise reduction processing on the target image through a preset noise filter to obtain a reference image when the first luminance information is less than the first luminance threshold and the luminance difference between the first luminance information and the second luminance information is greater than the second luminance threshold;
the pixel point classification module 540 is configured to classify the target pixel point according to the pixel values of the target image and the reference image on the target pixel point, so as to obtain a pixel point type of the target pixel point;
and the pixel point correction module 550 is configured to update the pixel value of the target image on the target pixel point according to the pixel value of the reference image on the target pixel point when the pixel point type of the target pixel point is the noise pixel point.
In some embodiments of the present application, the image enhancement module is configured to obtain a luminance histogram of the first image; acquiring a target brightness scene type of the first image according to the brightness histogram of the first image and the first brightness information; and carrying out contrast enhancement processing on the first image based on a dynamic contrast curve corresponding to the target brightness scene type to obtain a target image.
In some embodiments of the present application, the image noise reduction module is further configured to obtain a gaussian filter parameter corresponding to the target image according to the first luminance information, the second luminance information, and the second luminance threshold; and constructing a Gaussian filter by taking the Gaussian filter parameter as a standard deviation to obtain a noise filter corresponding to the target image.
In some embodiments of the present application, the image enhancement module is configured to obtain the second image of the previous time sequence and third luminance information of the second image; if the brightness difference value between the first brightness information and the third brightness information is smaller than the third brightness threshold value, acquire a dynamic contrast curve corresponding to the second image; and perform contrast enhancement processing on the first image based on the dynamic contrast curve to obtain a target image.
In some embodiments of the present application, the image noise reduction module is further configured to obtain a low-pass filter coefficient corresponding to the second image; and constructing a noise filter corresponding to the target image based on the low-pass filter coefficient.
In some embodiments of the present application, a pixel classification module is configured to obtain a first pixel value on a target pixel in a target image; acquiring a second pixel value on a target pixel point in a reference image; calculating a pixel difference between the first pixel value and the second pixel value; and if the pixel difference value is in the preset pixel value area range, determining a target pixel point in the target image as a noise pixel point.
In some embodiments of the present application, the end point value of the pixel value area range includes a first pixel threshold value and a second pixel threshold value, wherein the first pixel threshold value is smaller than the second pixel threshold value; the pixel point classification module is used for determining the target pixel point as the pixel point of the gentle region if the pixel difference value is smaller than the first pixel threshold value; if the pixel difference value is larger than the second pixel threshold value, determining the target pixel point as a pixel point of the edge area; and if the pixel difference value is larger than or equal to the first pixel threshold value and smaller than or equal to the second pixel threshold value, the target pixel point is a noise pixel point.
In some embodiments of the application, the image processing apparatus 500 may be implemented in the form of a computer program which is executable on a computer device as shown in fig. 6. The memory of the computer device may store various program modules constituting the image processing apparatus 500, such as the image acquisition module 510, the image enhancement module 520, the image noise reduction module 530, the pixel classification module 540, and the pixel correction module 550 shown in fig. 5. The computer program constituted by the respective program modules causes the processor to execute the steps in the image processing method of the respective embodiments of the present application described in the present specification.
For example, the computer device shown in fig. 6 may perform step S210 through the image acquisition module 510 in the image processing apparatus 500 shown in fig. 5. The computer device may perform step S220 through the image enhancement module 520. The computer device may perform step S230 through the image denoising module 530. The computer device may perform step S240 through the pixel classification module 540. The computer device may execute step S250 through the pixel correction module 550. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external computer device through a network connection. The computer program is executed by a processor to implement an image processing method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 6 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In some embodiments of the present application, a display device is provided that includes one or more processors; a memory; and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to perform the steps of the image processing method described above. The steps of the image processing method here may be the steps in the image processing methods of the respective embodiments described above.
In some embodiments of the present application, a computer-readable storage medium is provided, in which a computer program is stored, the computer program being loaded by a processor, so that the processor performs the steps of the above-mentioned image processing method. The steps of the image processing method here may be the steps in the image processing methods of the respective embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above-described embodiment methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein can include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, or the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory. By way of illustration, and not limitation, RAM can take many forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing has described in detail the methods, apparatuses, computer devices and storage medium for image processing according to the embodiments of the present application, and specific examples have been provided herein to illustrate the principles and embodiments of the present application, and the above description of the embodiments is only for aiding in the understanding of the methods and core ideas of the present application; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in light of the ideas of the present application, the present description should not be construed as limiting the present application.

Claims (10)

1. An image processing method, the method comprising:
acquiring a first image of a current time sequence and first brightness information of the first image;
performing contrast enhancement processing on the first image to obtain a target image, and acquiring second brightness information of the target image;
if the first brightness information is smaller than a first brightness threshold value and the brightness difference value between the first brightness information and the second brightness information is larger than a second brightness threshold value, denoising the target image through a preset noise filter to obtain a reference image;
classifying the target pixel points according to the pixel values of the target image and the reference image on the target pixel points to obtain the pixel point types of the target pixel points;
And if the pixel point type of the target pixel point is a noise pixel point, updating the pixel value of the target image on the target pixel point according to the pixel value of the reference image on the target pixel point.
2. The method of claim 1, wherein the step of contrast enhancement processing the first image to obtain a target image comprises:
acquiring a brightness histogram of a first image;
acquiring a target brightness scene type of the first image according to the brightness histogram of the first image and the first brightness information;
and carrying out contrast enhancement processing on the first image based on the dynamic contrast curve corresponding to the target brightness scene type to obtain a target image.
3. The method according to claim 1, wherein before the step of obtaining the reference image by performing noise reduction processing on the target image through a preset noise filter, the method further comprises:
acquiring Gaussian filter parameters corresponding to the target image according to the first brightness information, the second brightness information and the second brightness threshold;
and constructing a Gaussian filter by taking the Gaussian filter parameter as a standard deviation to obtain a noise filter corresponding to the target image.
4. The method of claim 1, wherein the step of contrast enhancement processing the first image to obtain a target image comprises:
acquiring a second image of the previous time sequence and third brightness information of the second image;
if the brightness difference value between the first brightness information and the third brightness information is smaller than a third brightness threshold value, acquiring a dynamic contrast curve corresponding to the second image;
and carrying out contrast enhancement processing on the first image based on the dynamic contrast curve to obtain a target image.
5. The method according to claim 4, wherein before the step of obtaining the reference image by performing noise reduction processing on the target image through a preset noise filter, the method further comprises:
acquiring a low-pass filter coefficient corresponding to the second image;
and constructing a noise filter corresponding to the target image based on the low-pass filter coefficient.
6. The method according to claim 1, wherein the step of classifying the target pixel according to the pixel values of the target image and the reference image on the target pixel to obtain the pixel type of the target pixel includes:
Acquiring a first pixel value on a target pixel point in a target image;
acquiring a second pixel value on a target pixel point in a reference image;
calculating a pixel difference between the first pixel value and the second pixel value;
and if the pixel difference value is in the preset pixel value area range, determining a target pixel point in the target image as a noise pixel point.
7. The method of claim 6, wherein the endpoint value of the range of pixel values comprises a first pixel threshold and a second pixel threshold, wherein the first pixel threshold is less than the second pixel threshold;
after the step of calculating the pixel difference between the first pixel value and the second pixel value, the method further includes:
if the pixel difference value is smaller than the first pixel threshold value, determining the target pixel point as a pixel point of a gentle region;
if the pixel difference value is larger than the second pixel threshold value, determining the target pixel point as a pixel point of an edge area;
and if the pixel difference value is larger than or equal to the first pixel threshold value and smaller than or equal to the second pixel threshold value, the target pixel point is a noise pixel point.
8. An image processing apparatus, characterized in that the apparatus comprises:
The image acquisition module is used for acquiring a first image of the current time sequence and first brightness information of the first image;
the image enhancement module is used for carrying out contrast enhancement processing on the first image to obtain a target image and acquiring second brightness information of the target image;
the image noise reduction module is used for carrying out noise reduction processing on the target image through a preset noise filter to obtain a reference image when the first brightness information is smaller than a first brightness threshold value and the brightness difference value between the first brightness information and the second brightness information is larger than a second brightness threshold value;
the pixel point classification module is used for classifying the target pixel points according to the pixel values of the target image and the reference image on the target pixel points to obtain the pixel point types of the target pixel points;
and the pixel point correction module is used for updating the pixel value of the target image on the target pixel point according to the pixel value of the reference image on the target pixel point when the pixel point type of the target pixel point is the noise pixel point.
9. A display device, the display device comprising:
one or more processors;
A memory; and
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to implement the image processing method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, the computer program being loaded by a processor to perform the steps of the image processing method of any one of claims 1 to 7.
CN202211558732.3A 2022-12-06 2022-12-06 Image processing method, device, display equipment and storage medium Pending CN117135329A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211558732.3A CN117135329A (en) 2022-12-06 2022-12-06 Image processing method, device, display equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211558732.3A CN117135329A (en) 2022-12-06 2022-12-06 Image processing method, device, display equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117135329A true CN117135329A (en) 2023-11-28

Family

ID=88861585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211558732.3A Pending CN117135329A (en) 2022-12-06 2022-12-06 Image processing method, device, display equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117135329A (en)

Similar Documents

Publication Publication Date Title
CN108335279B (en) Image fusion and HDR imaging
CN108694705B (en) Multi-frame image registration and fusion denoising method
CN109801240B (en) Image enhancement method and image enhancement device
CN112351195B (en) Image processing method, device and electronic system
US20220270266A1 (en) Foreground image acquisition method, foreground image acquisition apparatus, and electronic device
CN110866486A (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN111445487B (en) Image segmentation method, device, computer equipment and storage medium
CN115082350A (en) Stroboscopic image processing method and device, electronic device and readable storage medium
CN114429476A (en) Image processing method, image processing apparatus, computer device, and storage medium
CN111738944A (en) Image contrast enhancement method and device, storage medium and smart television
Liba et al. Sky optimization: Semantically aware image processing of skies in low-light photography
CN114998122A (en) Low-illumination image enhancement method
CN112218005B (en) Video editing method based on artificial intelligence
CN111539975B (en) Method, device, equipment and storage medium for detecting moving object
CN110136085B (en) Image noise reduction method and device
CN115278104B (en) Image brightness adjustment method and device, electronic equipment and storage medium
CN117135329A (en) Image processing method, device, display equipment and storage medium
CN116645527A (en) Image recognition method, system, electronic device and storage medium
CN116485645A (en) Image stitching method, device, equipment and storage medium
CN115239653A (en) Multi-split-screen-supporting black screen detection method and device, electronic equipment and readable storage medium
CN113438386B (en) Dynamic and static judgment method and device applied to video processing
CN112435188B (en) JND prediction method and device based on direction weight, computer equipment and storage medium
CN114245003A (en) Exposure control method, electronic device, and storage medium
CN111859022A (en) Cover generation method, electronic device and computer-readable storage medium
DE102018103652A1 (en) TILE REUSE IN PICTURE PRODUCTION

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination