CN111798393A - Image processing method and device, electronic device and storage medium - Google Patents

Image processing method and device, electronic device and storage medium

Info

Publication number
CN111798393A
CN111798393A
Authority
CN
China
Prior art keywords
image
pixel
pixel point
denoised
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010615414.0A
Other languages
Chinese (zh)
Inventor
Wang Dong (王东)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TetrasAI Technology Co Ltd
Original Assignee
Shenzhen TetrasAI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TetrasAI Technology Co Ltd filed Critical Shenzhen TetrasAI Technology Co Ltd
Priority to CN202010615414.0A priority Critical patent/CN111798393A/en
Publication of CN111798393A publication Critical patent/CN111798393A/en
Withdrawn legal-status Critical Current

Classifications

    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics

Abstract

The application discloses an image processing method and device, an electronic device and a storage medium. The method comprises the following steps: acquiring a first image to be denoised, wherein the first image to be denoised comprises first-class pixel points; performing downsampling processing on the first image to be denoised to obtain a first image, wherein the first image is a continuous image, the first image comprises the first-class pixel points, and the ratio of the resolution of the first image to the resolution of the first image to be denoised is greater than a first threshold; and performing noise reduction processing on the first image to obtain a second image.

Description

Image processing method and device, electronic device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
By performing noise reduction processing on an image, the quality of the image can be improved. Because a RAW-format image has not undergone any processing, it carries more accurate and abundant information than an image derived from it. Therefore, to improve the noise reduction effect, current methods perform the noise reduction processing directly on the RAW image. However, the noise reduction effect of these methods is still unsatisfactory.
Disclosure of Invention
The application provides an image processing method and device, an electronic device and a storage medium.
In a first aspect, an image processing method is provided, the method comprising:
acquiring a first image to be denoised, wherein the first image to be denoised comprises first-class pixel points;
performing downsampling processing on the first image to be denoised to obtain a first image, wherein the first image is a continuous image, the first image comprises the first-class pixel points, and the ratio of the resolution of the first image to the resolution of the first image to be denoised is greater than a first threshold;
and carrying out noise reduction processing on the first image to obtain a second image.
In this aspect, the first image is obtained by performing the first downsampling processing on the first image to be denoised, so that a discontinuous image is converted into a continuous image on which noise reduction processing can be performed. Because the ratio of the resolution of the first image to the resolution of the first image to be denoised is greater than 0.25, the image processing device can improve the noise reduction effect on the first channel of the first image to be denoised by performing noise reduction processing on the first image.
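As a concrete illustration of denoising the continuous first image, the sketch below applies a plain box filter to a stand-in image; the box filter, the image size and the noise statistics are all illustrative assumptions, not the noise-reduction method claimed by the application.

```python
import numpy as np

def box_denoise(img, k=3):
    # Plain k-by-k box filter: a stand-in for the unspecified
    # noise reduction applied to the continuous first image.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    H, W = img.shape
    out = np.zeros((H, W), dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + H, dx:dx + W]
    return out / (k * k)

rng = np.random.default_rng(0)
first_image = rng.normal(128.0, 20.0, (16, 16))  # stand-in continuous image
second_image = box_denoise(first_image)          # the "second image" of this aspect
```

Averaging each pixel with its neighbours shrinks the noise variance, which is the effect the noise reduction step relies on; any real denoiser could replace `box_denoise` here.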
With reference to any embodiment of the present application, the downsampling the first image to be noise-reduced to obtain a first image includes:
rotating the first image to be denoised by a first angle to obtain a third image, wherein the first angle is an odd multiple of 45 degrees;
magnifying the coordinate axis scale of a first pixel coordinate system by a factor of n to obtain a second pixel coordinate system, wherein the first pixel coordinate system is the pixel coordinate system of the third image;
and determining the pixel value of each pixel point in the second pixel coordinate system according to the pixel value of the pixel point in the third image to obtain the first image.
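A minimal sketch of this rotate-and-rescale downsampling, under the assumption that the first-class pixels sit on a checkerboard (quincunx), i.e. at positions where i + j is even; the function name and the index formulas are illustrative, not taken from the application:

```python
import numpy as np

def rotate45_downsample(mosaic):
    # Map each checkerboard (first-class) pixel (i, j) to rotated,
    # rescaled coordinates: original diagonal neighbours become
    # axis-aligned neighbours, so the result is a continuous image
    # (a diamond with empty corners, marked NaN here).
    H, W = mosaic.shape
    side = (H + W - 2) // 2 + 1
    out = np.full((side, side), np.nan)
    for i in range(H):
        for j in range(W):
            if (i + j) % 2 == 0:          # first-class pixel position
                u = (i + j) // 2          # rotated, rescaled row index
                v = (j - i + H - 1) // 2  # rotated, rescaled column index
                out[u, v] = mosaic[i, j]
    return out

demo = np.arange(16, dtype=float).reshape(4, 4)
small = rotate45_downsample(demo)
# Half of the original pixels survive, so the resolution ratio is 0.5 > 0.25.
```

Note how the main diagonal of `demo` (values 0, 5, 10, 15) lands in a single column of `small`: diagonal adjacency in the mosaic becomes vertical adjacency after the 45-degree remap.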
In combination with any embodiment of the present application, the first type of pixel belongs to a first channel, the first to-be-denoised image further includes a second channel different from the first channel, and the method further includes:
extracting the second channel in the first image to be denoised to obtain a fourth image;
performing upsampling processing on the second image to obtain a fifth image, wherein the size of the fifth image is the same as that of the first image to be denoised;
and respectively taking the fourth image and the fifth image as a channel, and combining the fourth image and the fifth image to obtain a sixth image.
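The extract-and-merge round trip of this embodiment can be sketched as below; the checkerboard mask for the first channel and the NaN encoding of empty sites are illustrative assumptions:

```python
import numpy as np

def split_channels(mosaic):
    # First channel: checkerboard (e.g. green) sites; second channel:
    # the remaining (e.g. red/blue) sites. Empty sites are NaN.
    idx = np.add.outer(np.arange(mosaic.shape[0]), np.arange(mosaic.shape[1]))
    first_mask = (idx % 2) == 0
    return (np.where(first_mask, mosaic, np.nan),
            np.where(first_mask, np.nan, mosaic))

def merge_channels(first_plane, second_plane):
    # Recombine: each plane contributes the sites the other leaves empty.
    return np.where(np.isnan(first_plane), second_plane, first_plane)

mosaic = np.arange(16, dtype=float).reshape(4, 4)
roundtrip = merge_channels(*split_channels(mosaic))
```

Because the two masks are complementary, merging the planes reconstructs the original mosaic exactly; in the claimed method the first plane would additionally pass through the downsample/denoise/upsample chain before merging.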
With reference to any embodiment of the present application, the performing upsampling processing on the second image to obtain a fifth image includes:
rotating the second image by a second angle to obtain a seventh image, wherein the second angle is coterminal with a third angle (the two angles share the same terminal side), and the third angle is the opposite number of the first angle;
reducing the coordinate axis scale of a third pixel coordinate system by a factor of n to obtain a fourth pixel coordinate system, wherein the third pixel coordinate system is the pixel coordinate system of the seventh image;
and determining the pixel value of each pixel point in the fourth pixel coordinate system according to the pixel value of the pixel point in the seventh image to obtain the fifth image.
With reference to any embodiment of the present application, the first image to be noise-reduced includes: the first pixel point, the second pixel point, the third pixel point and the fourth pixel point;
the coordinate of the first pixel point is (i, j), the coordinate of the second pixel point is (i +1, j), the coordinate of the third pixel point is (i, j +1), the coordinate of the fourth pixel point is (i +1, j +1), wherein i and j are positive integers;
under the condition that the first pixel point is the first-class pixel point, the second pixel point and the third pixel point are not the first-class pixel point, and the fourth pixel point is the first-class pixel point;
and under the condition that the first pixel point is not the first-class pixel point, the second pixel point and the third pixel point are both the first-class pixel point, and the fourth pixel point is not the first-class pixel point.
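The 2-by-2 condition above describes exactly a checkerboard (quincunx) layout: in every 2-by-2 block the first-class pixels occupy one diagonal, and only that diagonal. A small sanity check, with an assumed parity mask standing in for the real sensor layout:

```python
import numpy as np

def is_quincunx(mask):
    # True if in every 2x2 block exactly one diagonal pair is first-class.
    a, d = mask[:-1, :-1], mask[1:, 1:]   # (i, j) and (i+1, j+1)
    b, c = mask[1:, :-1], mask[:-1, 1:]   # (i+1, j) and (i, j+1)
    main_diag = a & d & ~b & ~c
    anti_diag = b & c & ~a & ~d
    return bool(np.all(main_diag | anti_diag))

checker = (np.add.outer(np.arange(6), np.arange(6)) % 2) == 0
solid = np.ones((6, 6), dtype=bool)
```

`checker` satisfies the claimed condition while `solid` (every pixel first-class) does not, matching the two mutually exclusive cases stated above.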
In combination with any embodiment of the present application, the arrangement manner of the pixel points in the first image to be denoised is a bayer array.
In a second aspect, another image processing method is provided, and the method includes:
acquiring a second image to be denoised, wherein the second image to be denoised comprises second-class pixel points;
extracting a third channel from the second image to be denoised to obtain an eighth image, wherein the third channel is the channel containing the largest number of pixel points in the second image to be denoised;
performing downsampling processing on the eighth image to obtain a ninth image, wherein the ninth image is a continuous image and comprises the second type of pixel points, and the ratio of the resolution of the ninth image to the resolution of the second image to be denoised is greater than a second threshold;
and carrying out noise reduction processing on the ninth image to obtain a tenth image.
In this aspect, the eighth image is subjected to the second downsampling process to obtain the ninth image, so that a discontinuous image is converted into a continuous image on which noise reduction processing can be performed. Because the ratio of the resolution of the ninth image to the resolution of the second image to be denoised is greater than 0.25, the image processing device can improve the noise reduction effect on the third channel of the second image to be denoised by performing noise reduction processing on the ninth image.
With reference to any one of the embodiments of the present application, the performing a second downsampling process on the eighth image to obtain a ninth image includes:
rotating the eighth image by a fourth angle to obtain an eleventh image, wherein the fourth angle is an odd multiple of 45 degrees;
magnifying the coordinate axis scale of a fifth pixel coordinate system by a factor of m to obtain a sixth pixel coordinate system, wherein the fifth pixel coordinate system is the pixel coordinate system of the eleventh image;
and determining the pixel value of each pixel point in the sixth pixel coordinate system according to the pixel value of the pixel point in the eleventh image to obtain the ninth image.
With reference to any one of the embodiments of the present application, the downsampling the eighth image to obtain a ninth image includes:
constructing a twelfth image, wherein the twelfth image comprises the second-type pixel points in the second image to be denoised;
and reducing the pixel values in the twelfth image by a factor of s to obtain the ninth image.
With reference to any embodiment of the present application, a diagonal line of the second image to be denoised includes a first line segment;
the constructing the twelfth image comprises:
arranging at least one second-type pixel point whose center belongs to the same first diagonal line into one row of pixel points in ascending order of abscissa to construct a thirteenth image, wherein the first diagonal line comprises: a straight line passing through the first line segment and straight lines parallel to the first line segment;
sequencing the rows in the thirteenth image to obtain the twelfth image; or,
arranging at least one second-type pixel point whose center belongs to the same first diagonal line into one column of pixel points in ascending order of abscissa to construct a fourteenth image, wherein the first diagonal line comprises: a straight line passing through the first line segment and straight lines parallel to the first line segment;
and sequencing the columns in the fourteenth image to obtain the twelfth image.
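A rough sketch of the row variant of this construction, assuming the second-type pixels lie on the checkerboard sites (i + j even), so that the relevant diagonals are the lines i - j = constant; the grouping key and the row ordering are illustrative choices, not taken from the application:

```python
import numpy as np

def diagonal_pack_rows(mosaic):
    # Group the checkerboard pixels by diagonal (i - j = const), order each
    # group by ascending column (abscissa), and stack the groups as rows.
    H, W = mosaic.shape
    groups = {}
    for i in range(H):
        for j in range(W):
            if (i + j) % 2 == 0:          # second-type pixel position
                groups.setdefault(i - j, []).append((j, mosaic[i, j]))
    # One row per diagonal; border diagonals hold fewer pixels (ragged).
    return [[v for _, v in sorted(groups[d])] for d in sorted(groups)]

demo = np.arange(16).reshape(4, 4)
packed = diagonal_pack_rows(demo)
```

On the 4-by-4 `demo`, the three populated diagonals become three rows: the short anti-corner diagonals at the ends and the full main diagonal (0, 5, 10, 15) in between, which is exactly the diagonal-to-row rearrangement the claim describes.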
With reference to any embodiment of the present application, the sorting rows in the thirteenth image to obtain the twelfth image includes:
determining a first mean value of the ordinates of the pixel points in each row of the thirteenth image, and obtaining a first index according to the first mean value, wherein the first mean value is positively or negatively correlated with the first index;
and arranging the rows in the thirteenth image in descending order of the first index to obtain the twelfth image.
In combination with any embodiment of the present application, the diagonal line of the second image to be denoised further includes a second line segment different from the first line segment;
the sorting the rows in the thirteenth image to obtain the twelfth image comprises:
arranging the rows in the thirteenth image according to a first order to obtain the twelfth image, wherein the first order is a descending order or an ascending order of the ordinates of the pixel points in the rows, and the first row under the first order contains a pixel point whose center belongs to a first straight line;
under the condition that the second line segment passes through the center of the second type pixel point, the first straight line is a straight line passing through the second line segment;
and under the condition that the second line segment does not pass through the center of any second-type pixel point, the first straight line is, among the straight lines that are parallel to the second line segment and pass through the center of a second-type pixel point, the straight line closest to the second line segment.
With reference to any embodiment of the present application, the sorting the columns in the fourteenth image to obtain the twelfth image includes:
determining a second mean value of the ordinates of the pixel points in each column of the fourteenth image, and obtaining a second index according to the second mean value, wherein the second mean value is positively or negatively correlated with the second index;
and arranging the columns in the fourteenth image according to the descending order of the second index to obtain the twelfth image.
In combination with any embodiment of the present application, the diagonal line of the second image to be denoised further includes a second line segment different from the first line segment;
the sorting the columns in the fourteenth image to obtain the twelfth image comprises:
arranging the columns in the fourteenth image according to a second order to obtain the twelfth image, wherein the second order is a descending order or an ascending order of the ordinates of the pixel points in the columns, and the first column under the second order contains a pixel point whose center belongs to a second straight line;
under the condition that the second line segment passes through the center of the second type pixel point, the second straight line is a straight line passing through the second line segment;
and under the condition that the second line segment does not pass through the center of any second-type pixel point, the second straight line is, among the straight lines that are parallel to the second line segment and pass through the center of a second-type pixel point, the straight line closest to the second line segment.
With reference to any embodiment of the present application, the second image to be noise-reduced further includes a fourth channel different from the third channel, and the method further includes:
extracting the fourth channel in the second image to be denoised to obtain a fifteenth image;
performing upsampling processing on the tenth image to obtain a sixteenth image, wherein the size of the sixteenth image is the same as that of the second image to be denoised;
and respectively taking the fifteenth image and the sixteenth image as a channel, and combining the fifteenth image and the sixteenth image to obtain a seventeenth image.
In combination with any embodiment of the present application, in the case that the rotation-based second downsampling embodiment is used, the performing upsampling processing on the tenth image to obtain a sixteenth image includes:
rotating the tenth image by a fifth angle to obtain an eighteenth image, wherein the fifth angle is coterminal with a sixth angle (the two angles share the same terminal side), and the sixth angle is the opposite number of the fourth angle;
reducing the coordinate axis scale of a seventh pixel coordinate system by a factor of m to obtain an eighth pixel coordinate system, wherein the seventh pixel coordinate system is the pixel coordinate system of the eighteenth image;
and determining the pixel value of each pixel point in the eighth pixel coordinate system according to the pixel values of the pixel points in the eighteenth image to obtain the sixteenth image.
In combination with any embodiment of the present application, the second image to be noise-reduced includes: a fifth pixel point, a sixth pixel point, a seventh pixel point and an eighth pixel point;
the coordinates of the fifth pixel point are (p, q), the coordinates of the sixth pixel point are (p +1, q), the coordinates of the seventh pixel point are (p, q +1), the coordinates of the eighth pixel point are (p +1, q +1), and both p and q are positive integers;
under the condition that the fifth pixel point is the second-type pixel point, the sixth pixel point and the seventh pixel point are not the second-type pixel point, and the eighth pixel point is the second-type pixel point;
and under the condition that the fifth pixel point is not the second-type pixel point, the sixth pixel point and the seventh pixel point are both the second-type pixel point, and the eighth pixel point is not the second-type pixel point.
In combination with any embodiment of the present application, the arrangement manner of the pixel points in the second to-be-denoised image is a bayer array.
In a third aspect, there is provided an image processing apparatus, the apparatus comprising:
the first acquiring unit is used for acquiring a first image to be denoised, wherein the first image to be denoised comprises first-class pixel points;
the first processing unit is used for performing downsampling processing on the first image to be denoised to obtain a first image, wherein the first image is a continuous image, the first image comprises the first-class pixel points, and the ratio of the resolution of the first image to the resolution of the first image to be denoised is greater than a first threshold;
and the second processing unit is used for carrying out noise reduction processing on the first image to obtain a second image.
In a fourth aspect, there is provided another image processing apparatus, including:
the second acquiring unit is used for acquiring a second image to be denoised, wherein the second image to be denoised comprises second-type pixel points;
the second extraction unit is used for extracting a third channel from the second image to be denoised to obtain an eighth image, wherein the third channel is the channel containing the largest number of pixel points in the second image to be denoised;
a fourth processing unit, configured to perform downsampling on the eighth image to obtain a ninth image, where the ninth image is a continuous image, the ninth image includes the second type of pixel points, and a ratio of a resolution of the ninth image to a resolution of the second to-be-denoised image is greater than a second threshold;
and the fifth processing unit is used for carrying out noise reduction processing on the ninth image to obtain a tenth image.
In a fifth aspect, a processor is provided, which is configured to perform the method of the first aspect and any one of the possible implementations thereof.
In a sixth aspect, an electronic device is provided, comprising: a processor, transmitting means, input means, output means, and a memory for storing computer program code comprising computer instructions, which, when executed by the processor, cause the electronic device to perform the method of the first aspect and any one of its possible implementations.
In a seventh aspect, a computer-readable storage medium is provided, in which a computer program is stored, the computer program comprising program instructions that, if executed by a processor, cause the processor to perform the method according to the first aspect and any one of the possible implementation manners thereof.
In an eighth aspect, there is provided a computer program product comprising a computer program or instructions which, when run on a computer, causes the computer to perform the method of the first aspect and any of its possible implementations.
In a ninth aspect, a processor is provided for performing the method of the second aspect and any one of its possible implementations.
In a tenth aspect, there is provided an electronic device comprising: a processor, transmitting means, input means, output means, and a memory for storing computer program code comprising computer instructions, which, when executed by the processor, cause the electronic device to perform the method of the second aspect and any one of its possible implementations.
In an eleventh aspect, there is provided a computer-readable storage medium having stored therein a computer program comprising program instructions which, if executed by a processor, cause the processor to perform the method of the second aspect and any one of its possible implementations.
In a twelfth aspect, a computer program product is provided, which comprises a computer program or instructions, which, if run on a computer, causes the computer to perform the method of the second aspect and any possible implementation thereof.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
Fig. 1 is a RAW image provided in an embodiment of the present application;
fig. 2 is another RAW image provided in the embodiment of the present application;
fig. 3a is a RAW image according to an embodiment of the present application;
fig. 3b is an image obtained by performing 0.5-fold down-sampling on the RAW image shown in fig. 3a according to an embodiment of the present application;
fig. 4 is a schematic diagram of a pixel coordinate system according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 6 is a schematic diagram of performing a first downsampling process on a first image to be denoised according to an embodiment of the present disclosure;
FIG. 7 is a first image provided by an embodiment of the present application;
FIG. 8a is a schematic diagram of a diagonal array according to an embodiment of the present application;
FIG. 8b is a schematic diagram of another diagonal array provided in an embodiment of the present application;
FIG. 9a is a schematic diagram of another diagonal array provided in an embodiment of the present application;
FIG. 9b is a schematic diagram of another diagonal array provided in an embodiment of the present application;
fig. 10 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 11 is a first image to be denoised according to an embodiment of the present application;
FIG. 12 is a third image provided by an embodiment of the present application;
fig. 13 is a schematic diagram of a scale of an enlarged first pixel coordinate system according to an embodiment of the present disclosure;
fig. 14 is a first image obtained based on the coordinate system shown in fig. 13 according to an embodiment of the present disclosure;
fig. 15 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 16a is a schematic diagram of a first image to be denoised according to an embodiment of the present application;
fig. 16b is a schematic diagram of a fourth image obtained by extracting a green channel from the first image to be noise-reduced shown in fig. 16a according to an embodiment of the present disclosure;
FIG. 17a is a schematic diagram of an image of a green color channel according to an embodiment of the present disclosure;
fig. 17b is a schematic diagram of an image obtained by upsampling the image shown in fig. 17a according to an embodiment of the present application;
fig. 18 is a schematic flowchart of another image processing method according to an embodiment of the present application;
FIG. 19a is a schematic diagram of a second image according to an embodiment of the present disclosure;
fig. 19b is a schematic diagram of a seventh image obtained by rotating the second image shown in fig. 19a according to an embodiment of the present disclosure;
fig. 19c is a schematic diagram of a fifth image obtained based on the seventh image shown in fig. 19b according to an embodiment of the present disclosure;
fig. 20 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 21a is a schematic diagram of a second image to be denoised according to an embodiment of the present disclosure;
fig. 21b is a schematic diagram of an eighth image obtained by extracting a green channel from the second image to be denoised shown in fig. 21a according to the embodiment of the present application;
fig. 22 is a schematic diagram of performing a second downsampling process on an eighth image according to an embodiment of the present application;
fig. 23 is a ninth image provided in the embodiment of the present application;
FIG. 24a is a schematic diagram of a diagonal array according to an embodiment of the present application;
FIG. 24b is a schematic view of another diagonal array provided by an embodiment of the present application;
FIG. 25a is a schematic view of another diagonal array provided in an embodiment of the present application;
FIG. 25b is a schematic view of another diagonal array provided by an embodiment of the present application;
fig. 26 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 27 is an eighth image provided in the embodiments of the present application;
FIG. 28 is an eleventh image provided in accordance with embodiments of the present application;
FIG. 29 is a schematic diagram of a scale for magnifying a fifth pixel coordinate system according to an embodiment of the present disclosure;
fig. 30 is a ninth image obtained based on the pixel coordinate system shown in fig. 29 according to an embodiment of the present application;
fig. 31 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 32 is another second image to be denoised according to the embodiment of the present application;
fig. 33 is a twelfth image provided in the embodiment of the present application;
FIG. 34 is another twelfth image provided in an embodiment of the present application;
FIG. 35 is another twelfth image provided in an embodiment of the present application;
FIG. 36 is a first intermediate image provided by an embodiment of the present application;
FIG. 37 is another first intermediate image provided in an embodiment of the present application;
fig. 38a is another second image to be denoised according to the embodiment of the present application;
FIG. 38b is a thirteenth image provided in accordance with embodiments of the present application;
fig. 39a is a twelfth image provided in the present application;
FIG. 39b is a twelfth image provided in accordance with embodiments of the present application;
fig. 40a is a diagram of another second image to be denoised according to an embodiment of the present application;
FIG. 40b is a fourteenth image according to an embodiment of the present disclosure;
FIG. 41a is a twelfth image according to an embodiment of the present disclosure;
FIG. 41b is a twelfth image according to an embodiment of the present disclosure;
fig. 42 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 43a is a schematic diagram of a second image to be denoised according to an embodiment of the present disclosure;
fig. 43b is a schematic diagram of a fifteenth image obtained by extracting a green channel from the second image to be noise-reduced shown in fig. 43a according to an embodiment of the present application;
FIG. 44a is a schematic diagram of an image of a green color channel according to an embodiment of the present disclosure;
fig. 44b is a schematic diagram of an image obtained by upsampling the image shown in fig. 44a according to an embodiment of the present application;
fig. 45 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 46a is a schematic diagram of a tenth image according to an embodiment of the present application;
FIG. 46b is a schematic diagram of an eighteenth image obtained by rotating the tenth image shown in FIG. 46a according to an embodiment of the present disclosure;
fig. 46c is a schematic diagram of a sixteenth image obtained based on the eighteenth image shown in fig. 46b according to an embodiment of the present application;
fig. 47 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 48 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present application;
fig. 49 is a schematic diagram of a hardware structure of an image processing apparatus according to an embodiment of the present application;
fig. 50 is a schematic diagram of a hardware structure of another image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more, and "at least two" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" may indicate an "or" relationship between the associated objects. "At least one of the following items" or similar expressions refer to any combination of these items, including any combination of a single item or plural items. For example, at least one of a, b, or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may be single or plural. The character "/" may also represent division in a mathematical operation, e.g., a/b means a divided by b, and 6/3 = 2.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
By performing noise reduction processing on an image, the quality of the image can be improved. Because an original (RAW) image has not been processed, the information it carries is more accurate and more abundant than that of an image obtained by processing the RAW image; therefore, performing noise reduction processing on the RAW image is more conducive to improving image quality.
Since human eyes have different sensitivities to different colors, in a case where a RAW image includes at least two color channels, in order to allow the human eyes to obtain better visual perception and more information by observing the RAW image, the color channel to which the human eyes are most sensitive (hereinafter referred to as the sensitive channel) generally includes the most pixel points in the RAW image. For example, the sensitivity of the human eye to green is higher than its sensitivity to red or blue; therefore, in a case where the RAW image includes the three channels R (R here means red), G (G here means green), and B (B here means blue), the number of pixel points included in the G channel is the largest. For another example, the sensitivity of the human eye to yellow is higher than its sensitivity to red or blue; therefore, in a case where the RAW image includes the three channels Y (Y here means yellow), G, and B, the number of pixel points included in the Y channel is the largest.
Since the human eye has the highest sensitivity to the color channel containing the largest number of pixel points among all the color channels of the RAW image, the noise reduction effect on the RAW image mainly depends on the noise reduction effect of the sensitive channel. Therefore, the sensitive channel in the RAW image can be extracted to obtain the sensitive channel image, and the noise reduction processing of the RAW image is realized by performing the noise reduction processing on the sensitive channel image.
To perform noise reduction processing on a pixel point in an image, the information carried by the pixel points adjacent to that pixel point needs to be utilized. In a RAW image, the pixel points are usually arranged in a Bayer pattern, that is, the RAW image includes the three channels red, green, and blue. Since the pixel values of pixel points of different channels represent different meanings, in the process of performing noise reduction processing on the RAW image, the information carried by the pixel points adjacent to a pixel point cannot be fully utilized to reduce the noise of that pixel point, which reduces the effect of the noise reduction processing. In the embodiments of the present application, the noise reduction effect is used to represent the quality of an image obtained by noise reduction processing: a good noise reduction effect represents that the quality of the image obtained by the noise reduction processing is high, and a poor noise reduction effect represents that the quality of the image obtained by the noise reduction processing is low.
For example, in the RAW image shown in fig. 1, the pixel points adjacent to pixel point A22 include: pixel point A11, pixel point A12, pixel point A13, pixel point A21, pixel point A23, pixel point A31, pixel point A32, and pixel point A33. Pixel point A22 is a pixel point of the R channel, but pixel point A11, pixel point A12, pixel point A13, pixel point A21, pixel point A23, pixel point A31, pixel point A32, and pixel point A33 are not pixel points of the R channel. Therefore, in the process of performing noise reduction processing on pixel point A22, the information carried by the pixel points adjacent to pixel point A22 cannot be utilized.
For another example, in the RAW image shown in fig. 2, the pixel points adjacent to pixel point A22 include: pixel point A11, pixel point A12, pixel point A13, pixel point A21, pixel point A23, pixel point A31, pixel point A32, and pixel point A33. Pixel point A22 is a pixel point of the R channel, but pixel point A12, pixel point A21, pixel point A23, and pixel point A32 are not pixel points of the R channel. Therefore, in the process of performing noise reduction processing on pixel point A22, the information carried by pixel point A12, pixel point A21, pixel point A23, and pixel point A32 cannot be utilized.
In the conventional method, downsampling processing is performed on the RAW image so that all the pixel points in the downsampled image belong to the sensitive channel, and noise reduction processing can then be performed on the downsampled RAW image. Since the downsampling processing reduces the resolution of the RAW image, performing noise reduction processing on the image obtained by the downsampling processing worsens the noise reduction effect of the sensitive channel. Specifically, the larger the downsampling magnification is, the better the noise reduction effect is; conversely, the smaller the downsampling magnification is, the worse the noise reduction effect is.
In the embodiments of the present application, the downsampling magnification of a downsampling process is equal to the length of the image after the downsampling processing divided by the length of the image before the downsampling processing, which is also equal to the width of the image after the downsampling processing divided by the width of the image before the downsampling processing. For example, the size of the RAW image shown in fig. 3a is 4 × 4; performing downsampling processing with a magnification of 0.5 on this image yields the image with a size of 2 × 2 shown in fig. 3b. In the image shown in fig. 3b, each pixel point includes two pixel points of the G channel, one pixel point of the B channel, and one pixel point of the R channel. For example: pixel point B11 includes pixel point A11, pixel point A12, pixel point A21, and pixel point A22; pixel point B12 includes pixel point A13, pixel point A14, pixel point A23, and pixel point A24; pixel point B21 includes pixel point A31, pixel point A32, pixel point A41, and pixel point A42; and pixel point B22 includes pixel point A33, pixel point A34, pixel point A43, and pixel point A44.
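As an illustrative sketch (not part of the claimed method), the conventional 0.5× downsampling described above can be realized by averaging each non-overlapping 2 × 2 block of the RAW image, so that every output pixel point merges two G pixel points, one R pixel point, and one B pixel point. The function name and the use of block averaging are assumptions for illustration only.

```python
# Hypothetical sketch: 0.5x downsampling of a Bayer RAW image by averaging
# each non-overlapping 2x2 block, as in the fig. 3a -> fig. 3b example.
import numpy as np

def downsample_half(raw):
    """Average non-overlapping 2x2 blocks; halves both height and width."""
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "image sides must be even"
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A 4x4 image standing in for fig. 3a (the values are illustrative only).
a = np.arange(16, dtype=float).reshape(4, 4)
b = downsample_half(a)   # 2x2 result, standing in for fig. 3b
# Downsampling magnification = (side after) / (side before) = 2 / 4 = 0.5
```

Note that each output value mixes pixel values from different color channels, which is exactly why this conventional scheme caps the magnification at 0.5.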
Obviously, in the conventional image noise reduction method, the maximum value of the downsampling magnification is 0.5. That is, in the conventional image noise reduction method, the best achievable noise reduction effect is obtained by performing downsampling processing with a downsampling magnification of 0.5 on the image to be denoised.
The embodiments of the present application provide an image noise reduction method that can improve the noise reduction effect of the sensitive channel in a RAW image while still performing noise reduction processing on the RAW image.
The execution subject of the embodiments of the present application is an image processing apparatus. Optionally, the image processing apparatus may be one of the following: a mobile phone, a computer, a server, or a tablet computer. The embodiments of the present application will be described below with reference to the drawings.
Before proceeding with the following explanation, the pixel coordinate system in the embodiments of the present application is first defined. The pixel coordinate system is used to represent the positions of pixel points in an image, where the abscissa represents the column number of a pixel point and the ordinate represents its row number. For example, in the image shown in fig. 4, a pixel coordinate system XOY is constructed with the upper left corner of the image as the coordinate origin O, the direction parallel to the rows of the image as the direction of the X axis, and the direction parallel to the columns of the image as the direction of the Y axis. The units of the abscissa and the ordinate are pixel points. For example, in fig. 4, pixel point A11 has coordinates (1, 1), pixel point A23 has coordinates (3, 2), pixel point A42 has coordinates (2, 4), and pixel point A34 has coordinates (4, 3).
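As a small illustration of the coordinate convention just defined (the helper name is hypothetical), a 1-indexed pixel coordinate (x, y) = (column, row) maps to the 0-indexed array element [y − 1, x − 1]:

```python
# Illustrative sketch of the pixel coordinate system defined above:
# the abscissa is the column number and the ordinate is the row number,
# both starting at 1.
import numpy as np

def pixel_at(image, x, y):
    """Return the pixel value at pixel coordinate (x, y) = (column, row)."""
    return image[y - 1, x - 1]

img = np.arange(1, 17).reshape(4, 4)   # rows 1..4, columns 1..4
# Pixel point A23 in fig. 4 has coordinates (3, 2): column 3, row 2.
v = pixel_at(img, 3, 2)
```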
Referring to fig. 5, fig. 5 is a schematic flowchart of an image denoising method according to an embodiment of the present disclosure.
501. Acquire a first image to be denoised.
In the embodiments of the present application, both the first image to be denoised and the second image to be denoised are RAW images, and the number of channels in the first image to be denoised is greater than or equal to 2. For example, the first image to be denoised contains the two channels R and G. For another example, the first image to be denoised contains the three channels R, G, and B. For still another example, the first image to be denoised contains the three channels R, Y, and B.
Because human eyes have different sensitivities to different colors, in the case that the RAW image includes at least two color channels, in order for human eyes to obtain better visual perception and more information by observing the RAW image, the color channel to which the human eyes are most sensitive generally includes the most pixel points in the RAW image. For example, the sensitivity of the human eye to green is higher than its sensitivity to red or blue; therefore, in the case where the RAW image includes the three channels R, G, and B, the G channel includes the largest number of pixel points. For another example, since the sensitivity of the human eye to yellow is higher than its sensitivity to red or blue, when the RAW image includes the three channels Y, G, and B, the number of pixel points included in the Y channel is the largest.
The channel with the largest number of pixel points in the first image to be denoised is called the first channel. For example, in a case where the first image to be denoised includes the three channels R, G, and B, and the G channel is the channel including the largest number of pixel points, the first channel is the G channel.
It should be understood that, in the case where the first image to be denoised includes two channels and the number of pixel points of each channel is equal, the first channel may be either of the channels in the first image to be denoised. For example, if, in the first image to be denoised, the ratio of the number of pixel points of the R channel to the number of pixel points of the G channel is 1:1, the first channel may be the R channel or the G channel.
In the embodiments of the present application, the first image to be denoised includes first-type pixel points, where the first-type pixel points are the pixel points belonging to the first channel. For example, suppose the first channel is the G channel, and the first image to be denoised includes pixel point a, pixel point b, pixel point c, and pixel point d, where pixel point a and pixel point c belong to the G channel. In this case, the first-type pixel points include pixel point a and pixel point c.
In one implementation of acquiring the first image to be denoised, the image processing apparatus receives the first image to be denoised input by a user through an input component. The input component includes: a keyboard, a mouse, a touch screen, a touch pad, an audio input device, and the like.
In another implementation manner of acquiring the first image to be denoised, the image processing apparatus receives the first image to be denoised sent by a first terminal. Optionally, the first terminal may be any one of the following: a mobile phone, a computer, a tablet computer, a server, or a wearable device.
In another implementation manner of acquiring the first image to be denoised, the image processing apparatus may acquire the first image to be denoised through the imaging component. Optionally, the imaging component may be a camera.
502. Perform downsampling processing on the first image to be denoised to obtain a first image.
Before proceeding with the following explanation, continuous images are defined. In the embodiments of the present application, a continuous image is an image in which all pixel points belong to the same channel; for convenience of description, images other than continuous images are hereinafter referred to as discontinuous images. For example, the first image to be denoised shown in fig. 6 is a discontinuous image, and the first image shown in fig. 7 is a continuous image.
It should be understood that a continuous image may contain filling pixel points; for example, the continuous image shown in fig. 7 contains the second filling pixel points. If the pixel points other than the filling pixel points in a continuous image are called channel pixel points, then in the continuous image there is no filling pixel point between any two adjacent channel pixel points.
In the embodiment of the application, the first image is a continuous image and the first image comprises first-type pixel points.
In the embodiments of the present application, the ratio of the resolution of the first image to the resolution of the first image to be denoised is greater than a first threshold. Optionally, the first threshold is 0.25.
If the down-sampling process performed on the first image to be noise-reduced is referred to as a first down-sampling process, the down-sampling magnification of the first down-sampling process is greater than 0.5.
In a possible implementation manner, the first image to be denoised is an image matrix, and the pixel points in the image matrix are all square. The first downsampling window of the first downsampling processing is also square; the center of the first downsampling window coincides with the center of a first-type pixel point, where the center of the first downsampling window is the intersection of its two diagonals and the center of the first-type pixel point is the intersection of its two diagonals. The area of the first downsampling window is larger than that of the first-type pixel point, and the vertices of the first-type pixel point are located on the boundary of the first downsampling window.
The first image to be denoised is divided into at least one pixel point region by at least one first downsampling window. Each pixel point region is taken as one pixel point, and the pixel value of the pixel point corresponding to each pixel point region is determined according to the pixel values in that region, thereby realizing the first downsampling processing of the first image to be denoised.
For example, the first image shown in fig. 7 can be obtained by performing the first downsampling processing on the first image to be denoised shown in fig. 6. Suppose that in the first image to be denoised shown in fig. 6, pixel point B11 has center C1, pixel point G12 has center C2, pixel point B13 has center C3, pixel point G14 has center C4, pixel point G21 has center C5, pixel point R22 has center C6, pixel point G23 has center C7, pixel point R24 has center C8, pixel point B31 has center C9, pixel point G32 has center C10, pixel point B33 has center C11, pixel point G34 has center C12, pixel point G41 has center C13, pixel point R42 has center C14, pixel point G43 has center C15, and pixel point R44 has center C16.
The first downsampling window TC1C6C9 (hereinafter referred to as first downsampling window 1) has center C5; the area of first downsampling window 1 is larger than that of pixel point G21, and the vertices of pixel point G21 are located on the four edges of first downsampling window 1, respectively. The first downsampling window C1AC3C6 (hereinafter referred to as first downsampling window 2) has center C2; the area of first downsampling window 2 is larger than that of pixel point G12, and the vertices of pixel point G12 are located on the four edges of first downsampling window 2, respectively. The first downsampling window QC9C14O (hereinafter referred to as first downsampling window 3) has center C13; the area of first downsampling window 3 is larger than that of pixel point G41, and the vertices of pixel point G41 are located on the four edges of first downsampling window 3, respectively. The first downsampling window C9C6C11C14 (hereinafter referred to as first downsampling window 4) has center C10; the area of first downsampling window 4 is larger than that of pixel point G32, and the vertices of pixel point G32 are located on the four edges of first downsampling window 4, respectively. The first downsampling window C6C3C8C11 (hereinafter referred to as first downsampling window 5) has center C7; the area of first downsampling window 5 is larger than that of pixel point G23, and the vertices of pixel point G23 are located on the four edges of first downsampling window 5, respectively. The first downsampling window C3DFC8 (hereinafter referred to as first downsampling window 6) has center C4; the area of first downsampling window 6 is larger than that of pixel point G14, and the vertices of pixel point G14 are located on the four edges of first downsampling window 6, respectively. The first downsampling window C14C11C16L (hereinafter referred to as first downsampling window 7) has center C15; the area of first downsampling window 7 is larger than that of pixel point G43, and the vertices of pixel point G43 are located on the four edges of first downsampling window 7, respectively. The first downsampling window C11C8IC16 (hereinafter referred to as first downsampling window 8) has center C12; the area of first downsampling window 8 is larger than that of pixel point G34, and the vertices of pixel point G34 are located on the four edges of first downsampling window 8, respectively.
The pixel point region in first downsampling window 1 is taken as pixel point D12 in the first image, and the pixel value of pixel point D12 is determined according to the pixel values in first downsampling window 1. The pixel point region in first downsampling window 2 is taken as pixel point D13 in the first image, and the pixel value of pixel point D13 is determined according to the pixel values in first downsampling window 2. The pixel point region in first downsampling window 3 is taken as pixel point D21 in the first image, and the pixel value of pixel point D21 is determined according to the pixel values in first downsampling window 3. The pixel point region in first downsampling window 4 is taken as pixel point D22 in the first image, and the pixel value of pixel point D22 is determined according to the pixel values in first downsampling window 4. The pixel point region in first downsampling window 5 is taken as pixel point D23 in the first image, and the pixel value of pixel point D23 is determined according to the pixel values in first downsampling window 5. The pixel point region in first downsampling window 6 is taken as pixel point D24 in the first image, and the pixel value of pixel point D24 is determined according to the pixel values in first downsampling window 6. The pixel point region in first downsampling window 7 is taken as pixel point D32 in the first image, and the pixel value of pixel point D32 is determined according to the pixel values in first downsampling window 7. The pixel point region in first downsampling window 8 is taken as pixel point D33 in the first image, and the pixel value of pixel point D33 is determined according to the pixel values in first downsampling window 8.
Optionally, the mean value of the pixel values in each first downsampling window is determined as the pixel value of the pixel point corresponding to that first downsampling window; for example, the mean value of the pixel values in first downsampling window 1 is taken as the pixel value of pixel point D12.
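As an illustrative sketch of the optional mean-based variant just described (the function name and the area-weighting are assumptions): each first downsampling window is a square rotated 45 degrees whose vertices lie at the centers of the four edge-adjacent neighbors of a first-type pixel point, so, for unit-square pixel points, the window covers the whole central pixel point (area 1) plus a quarter of each of its four neighbors (area 4 × 1/4 = 1), giving the area-weighted mean (G + 0.25 × (up + down + left + right)) / 2.

```python
# Hypothetical sketch: area-weighted mean over a diagonal first downsampling
# window centred on an interior first-type (G) pixel point.
import numpy as np

def diagonal_window_mean(raw, r, c):
    """Mean of the pixel values covered by the diagonal window centred on
    the pixel point at row r, column c (0-indexed, interior only)."""
    centre = raw[r, c]                       # fully covered (area 1)
    neighbours = (raw[r - 1, c] + raw[r + 1, c]
                  + raw[r, c - 1] + raw[r, c + 1])  # a quarter of each
    return (centre + 0.25 * neighbours) / 2.0       # total window area = 2

raw = np.arange(16, dtype=float).reshape(4, 4)
m = diagonal_window_mean(raw, 2, 1)   # e.g. a window like the one around G32
```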
It should be understood that in fig. 6, the following pixel point regions are all first filling pixel points: triangle region ABW, triangle region DEC, triangle region FGE, triangle region IJH, triangle region LMK, triangle region PQN, triangle region RSQ, and triangle region UVT. The pixel values of the first filling pixel points are all a first value; optionally, the first value is 0. In fig. 9, the following pixel points are all second filling pixel points: pixel point D11, pixel point D14, pixel point D31, and pixel point D34. The pixel values of the second filling pixel points are used to represent the degree of green brightness, that is, the second filling pixel points are pixel points of the G channel, and the pixel values of the second filling pixel points are all a second value. Optionally, the second value is 0.
As can be seen from fig. 6, in the first image to be denoised, there is a non-G-channel pixel point between the pixel points of any two G channels. Because the information carried by the pixel points of the G channel differs from that carried by non-G-channel pixel points, directly performing image noise reduction processing on the first image to be denoised reduces the noise reduction effect.
As can be seen from fig. 7, in the first image obtained by performing the first downsampling processing on the first image to be denoised, all the pixel points except the second filling pixel points are pixel points of the G channel. Similarly, in the second image obtained by performing the first downsampling processing on the second image to be denoised, all the pixel points except the second filling pixel points are pixel points of the G channel. Because the second filling pixel points are also pixel points of the G channel, performing image noise reduction processing on the first image can improve the noise reduction effect.
503. Perform noise reduction processing on the first image to obtain a second image.
In the embodiments of the present application, the noise reduction processing may be implemented by any method capable of realizing image noise reduction, including one of the following: mean filtering, bilateral filtering, Gaussian filtering, and order-statistic filtering.
By performing noise reduction processing on the first image, noise reduction of the first channel in the first image to be denoised can be realized, and the second image is obtained.
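As a hedged sketch of this noise reduction step using one of the listed methods (mean filtering; the patent leaves the exact filter open, and the function name is an assumption):

```python
# Illustrative sketch: a 3x3 mean filter with edge replication, standing in
# for the noise reduction processing applied to the first image.
import numpy as np

def mean_filter(image, k=3):
    """k x k mean filter with replicated borders."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

noisy = np.array([[1., 1., 1.], [1., 10., 1.], [1., 1., 1.]])
second_image = mean_filter(noisy)   # the impulse at the centre is attenuated
```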
In the embodiments of the present application, the first image is obtained by performing the first downsampling processing on the first image to be denoised, so that a discontinuous image is converted into a continuous image on which noise reduction processing can then be performed. Because the ratio of the resolution of the first image to the resolution of the first image to be denoised is greater than 0.25, the image processing apparatus can improve the noise reduction effect of the first channel of the first image to be denoised by performing noise reduction processing on the first image.
In addition, in the process of performing the first downsampling processing on the first image to be denoised, the image processing apparatus determines the pixel values in the first image according to the pixel values in each first downsampling window. Since each first downsampling window includes not only a first-type pixel point but also pixel points other than first-type pixel points (hereinafter referred to as non-first-type pixel points), the pixel values in the first image are determined according to both the pixel values of the first-type pixel points and the pixel values of the non-first-type pixel points. Therefore, performing the first downsampling processing on the first image to be denoised can suppress the noise in the first-type pixel points while obtaining the first image. In this way, a second image with less noise can be obtained by performing noise reduction processing on the first image, thereby improving the noise reduction effect on the first image to be denoised.
As an alternative embodiment, the pixel points in the first image to be denoised are arranged in a diagonal array, where the meaning of a diagonal array is as follows:
Assume that the first image to be denoised includes a first pixel point, a second pixel point, a third pixel point, and a fourth pixel point, whose coordinates are (i, j), (i+1, j), (i, j+1), and (i+1, j+1), respectively, where i and j are positive integers. If the first pixel point is a first-type pixel point, then neither the second pixel point nor the third pixel point is a first-type pixel point, and the fourth pixel point is a first-type pixel point; if the first pixel point is not a first-type pixel point, then both the second pixel point and the third pixel point are first-type pixel points, and the fourth pixel point is not a first-type pixel point.
For example, as shown in fig. 7a, in the case that the first pixel is the first-type pixel, neither the second pixel nor the third pixel is the first-type pixel, and the fourth pixel is the first-type pixel. As shown in fig. 7b, in the case that the first pixel is not the first-type pixel, the second pixel and the third pixel are both the first-type pixel, and the fourth pixel is not the first-type pixel.
As can be seen from fig. 7a and 7b, in the case where the pixels are arranged in a diagonal array, the arrangement of the pixels in the image is as shown in fig. 8a or as shown in fig. 8 b.
Optionally, the arrangement mode of the pixel points in the first image to be denoised is a bayer array.
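The diagonal-array condition above can be checked mechanically: within every 2 × 2 block, the two first-type pixel points must sit on one diagonal. The following sketch (function name and mask representation are assumptions) encodes exactly that definition:

```python
# Illustrative check of the diagonal-array property defined above, on a
# boolean mask where mask[y, x] is True iff the pixel point at column x+1,
# row y+1 is a first-type pixel point.
import numpy as np

def is_diagonal_array(mask):
    h, w = mask.shape
    for y in range(h - 1):
        for x in range(w - 1):
            a, b = mask[y, x], mask[y, x + 1]       # first, second
            c, d = mask[y + 1, x], mask[y + 1, x + 1]  # third, fourth
            # first/fourth agree, second/third agree, diagonals differ
            if not (a == d and b == c and a != b):
                return False
    return True

# G positions of a Bayer pattern form a checkerboard, i.e. a diagonal array.
bayer_g = np.indices((4, 4)).sum(axis=0) % 2 == 0
```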
Referring to fig. 10, fig. 10 is a flowchart illustrating a method for implementing step 502 according to an embodiment of the present disclosure.
1001. Rotate the first image to be denoised by a first angle to obtain a third image.
In the embodiments of the present application, the first angle is an odd multiple of 45 degrees. Assume that the first angle is J1; then J1 satisfies the following formula:

J1 = r1 × 45 degrees … formula (1)

where r1 is an odd number.
For example, assume that a rotation angle obtained by rotating the first image to be denoised clockwise is positive and a rotation angle obtained by rotating it counterclockwise is negative. When r1 = 1, the first angle is 45 degrees, and the first image to be denoised is rotated 45 degrees clockwise to obtain the third image. When r1 = -1, the first angle is -45 degrees, and the first image to be denoised is rotated 45 degrees counterclockwise to obtain the third image. When r1 = 3, the first angle is 135 degrees, and the first image to be denoised is rotated 135 degrees clockwise to obtain the third image. When r1 = -5, the first angle is -225 degrees, and the first image to be denoised is rotated 225 degrees counterclockwise to obtain the third image.
For another example, assume that a rotation angle obtained by rotating the first image to be denoised counterclockwise is positive and a rotation angle obtained by rotating it clockwise is negative. When r1 = 1, the first angle is 45 degrees, and the first image to be denoised is rotated 45 degrees counterclockwise to obtain the third image. When r1 = -1, the first angle is -45 degrees, and the first image to be denoised is rotated 45 degrees clockwise to obtain the third image. When r1 = 3, the first angle is 135 degrees, and the first image to be denoised is rotated 135 degrees counterclockwise to obtain the third image. When r1 = -5, the first angle is -225 degrees, and the first image to be denoised is rotated 225 degrees clockwise to obtain the third image.
In a possible implementation manner, rotating the first image to be denoised by the first angle may mean rotating it by the first angle around the origin of its pixel coordinate system. For example, if the pixel coordinate system of the first image to be denoised is xoy and the origin of the pixel coordinate system is o, the first image to be denoised is rotated by the first angle around o to obtain the third image.
In another possible implementation manner, the first image to be noise-reduced is rotated by a first angle, which may be that the first image to be noise-reduced is rotated by the first angle around the center of the first image to be noise-reduced, where the center of the first image to be noise-reduced is an intersection of two diagonal lines of the first image to be noise-reduced. For example, the third image shown in fig. 12 can be obtained by rotating the first image to be noise-reduced shown in fig. 11 by 45 degrees around the center of the first image to be noise-reduced.
In yet another possible implementation manner, rotating the first image to be denoised by the first angle may mean rotating it by the first angle around a coordinate axis of its pixel coordinate system. For example, if the pixel coordinate system of the first image to be denoised is xoy and the abscissa axis of the pixel coordinate system is ox, the first image to be denoised is rotated by the first angle around ox to obtain the third image. For another example, if the ordinate axis of the pixel coordinate system is oy, the first image to be denoised is rotated by the first angle around oy to obtain the third image.
On the premise that the rotation angle is the first angle, the method for rotating the first image to be denoised is not limited. Similarly, on the premise that the rotation angle is the second angle, the method for rotating the second image to be denoised is not limited.
1002. Enlarge the coordinate axis scales of the first pixel coordinate system by a factor of n to obtain a second pixel coordinate system.
In the embodiment of the present application, the first pixel coordinate system is a pixel coordinate system of the third image.
In the embodiments of the present application, n is a positive number. Optionally, n = √2.
A second pixel coordinate system is obtained by enlarging the abscissa axis scale and the ordinate axis scale of the first pixel coordinate system by a factor of n.
For example, assuming that n = √2, the second pixel coordinate system shown in fig. 13 is obtained by enlarging the abscissa axis scale and the ordinate axis scale of the first pixel coordinate system (i.e., xoy) shown in fig. 12.
1003. Determine the pixel value of each pixel point in the second pixel coordinate system according to the pixel values of the pixel points in the third image to obtain the first image.
Because the scale of a pixel coordinate system takes the pixel point as its unit, that is, one scale division of the pixel coordinate system is the side length of one pixel point, when the scale of the pixel coordinate system of an image is changed, the area covered by each pixel point in the image changes correspondingly. The image processing apparatus determines the pixel value of each pixel point under the second pixel coordinate system according to the pixel values of the pixel points in the third image to obtain the first image. Optionally, the image processing apparatus takes the mean value of the pixel values in the area covered by each pixel point under the second pixel coordinate system as the pixel value of the corresponding pixel point in the first image.
For example, the image processing apparatus determines the pixel value of each pixel point in the second pixel coordinate system (i.e., xoy) according to the pixel values of the pixel points in the third image shown in fig. 13, so as to obtain the first image shown in fig. 14. In fig. 13, the following pixel regions are all third filling pixels: triangle area ABW, triangle area DEC, triangle area GHF, triangle area HIJ, triangle area KLM, triangle area NPQ, triangle area RST, and triangle area TUV. The pixel values in the third filling pixel point regions are all the third value; optionally, the third value is 0. In fig. 14, the following pixel points are all fourth filling pixel points: pixel point D11, pixel point D14, pixel point D31, and pixel point D34. The pixel value of a fourth filling pixel point is used for representing the degree of green brightness, that is, the fourth filling pixel points are pixel points of the G channel. The pixel values of the fourth filling pixel points are all the fourth value; optionally, the fourth value is 0.
The third image is obtained by rotating the first image to be denoised. The first image is obtained by adjusting the coordinate axis scale of the pixel coordinate system of the third image, so that the discontinuous image can be converted into the continuous image, and the effects of reducing the data processing amount and improving the processing speed can be achieved.
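As an illustrative sketch only (not the claimed implementation), the averaging rule of step 1003 can be simulated in Python. The function name is an assumption, and an integer scale factor is used to keep the sketch simple, whereas the embodiment uses n = √2:

```python
import numpy as np

def rescale_axes_by_mean(img, n):
    """Enlarge the pixel-coordinate axis scale by an integer factor n:
    each pixel of the output covers an n x n region of the input grid,
    and its value is the mean of the pixel values in that region
    (the optional averaging rule of step 1003)."""
    h, w = img.shape
    assert h % n == 0 and w % n == 0
    return img.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

# Hypothetical 4x4 "third image"; enlarging the axis scale by n = 2
# yields a 2x2 "first image" of block means.
third = np.arange(16, dtype=float).reshape(4, 4)
first = rescale_axes_by_mean(third, 2)
```

Each output pixel is the mean over the area it now covers, matching the statement that changing the coordinate scale changes the area covered by a pixel point.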
As an alternative embodiment, after obtaining the second image, the image processing apparatus executes the method shown in the flowchart of fig. 15.
1501. And extracting a second channel in the first image to be denoised to obtain a fourth image.
In the embodiment of the present application, the second channel is a channel different from the first channel in the first image to be denoised. For example, the first image to be noise-reduced contains R, G two channels. In the case where the first channel is a G channel, the R channel is a second channel.
And extracting a second channel in the first image to be denoised to obtain a fourth image. The size of the fourth image is the same as the size of the first image to be noise-reduced. In the fourth image, the pixel values of the pixel points of the second channel are the same as those of the pixel points of the second channel in the first image to be denoised, the pixel points except the pixel points of the second channel are all fifth filling pixel points, and the pixel values of the fifth filling pixel points are all fifth values. Optionally, the fifth value is 0.
For example, the first image to be noise-reduced shown in fig. 16a contains R and G channels, and the G channel in the first image to be noise-reduced is extracted, resulting in the fourth image shown in fig. 16b. The pixel value of pixel point G12 in the first image to be denoised is the same as the pixel value of pixel point G12 in the fourth image, the pixel value of pixel point G14 in the first image to be denoised is the same as the pixel value of pixel point G14 in the fourth image, ..., and the pixel value of pixel point G44 in the first image to be denoised is the same as the pixel value of pixel point G44 in the fourth image. In the fourth image, the pixel values of pixel points N11, N13, N22, N24, N31, N33, N42, and N44 are all 0.
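The channel extraction of step 1501 can be sketched as follows (an illustrative sketch; the function name, the fill value of 0, and the mask layout are assumptions — the mask marks which pixel points belong to the extracted channel):

```python
import numpy as np

def extract_channel(img, channel_mask, fill=0):
    """Keep the pixel values of one channel and replace every other
    pixel point with a filling pixel point (value `fill`); the output
    has the same size as the input, as in step 1501."""
    out = np.full_like(img, fill)
    out[channel_mask] = img[channel_mask]
    return out

img = np.array([[1.0, 2.0], [3.0, 4.0]])
r_mask = np.array([[True, False], [False, False]])  # hypothetical channel layout
fourth = extract_channel(img, r_mask)
```

Only the masked pixel survives; all other positions become filling pixel points with value 0.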
1502. And performing upsampling processing on the third image to obtain a fifth image.
In the embodiment of the present application, the up-sampling magnification of the up-sampling process equals the length of the image after the up-sampling process divided by the length of the image before the up-sampling process, which in turn equals the width of the image after the up-sampling process divided by the width of the image before the up-sampling process. For example, the size of the RAW image shown in fig. 17a is 2 × 2, and the image with the size of 4 × 4 shown in fig. 17b can be obtained by up-sampling it by 2 times. In fig. 17b, the following pixel points are all sixth filling pixel points: N12, N14, N21, N22, N23, N24, N32, N33, N41, N42, N43, and N44. The pixel values of the sixth filling pixel points are all the sixth value; optionally, the sixth value is 0.
The first upsampling process may be implemented by one of the following: bilinear interpolation, nearest-neighbor interpolation, higher-order interpolation, or deconvolution. The specific implementation of the upsampling process is not limited in the present application.
In the embodiment of the present application, the up-sampling magnification of the first up-sampling process and the down-sampling magnification of the first down-sampling process are reciprocal. For example, when the down-sampling magnification of the first down-sampling process is a, the up-sampling magnification of the first up-sampling process is 1/a. Therefore, by performing the first upsampling process on the third image, the size of the third image can be enlarged to the size of the first image to be denoised, resulting in a fifth image.
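Nearest-neighbor interpolation, one of the listed options, can be sketched as follows (an illustrative sketch; the function name and the integer magnification k are assumptions, with k playing the role of the reciprocal of a prior downsampling magnification 1/k):

```python
import numpy as np

def upsample_nearest(img, k):
    """Nearest-neighbour upsampling by an integer magnification k:
    output length = k * input length and output width = k * input width,
    consistent with the magnification definition above."""
    return np.repeat(np.repeat(img, k, axis=0), k, axis=1)

third = np.array([[1, 2], [3, 4]])
fifth = upsample_nearest(third, 2)  # restores the size before a 1/2 downsampling
```

The 2 × 2 input becomes 4 × 4, each value repeated over a 2 × 2 block.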
1503. And combining the fourth image and the fifth image by taking the fourth image and the fifth image as one channel respectively to obtain a sixth image.
And combining the fourth image and the fifth image to obtain a sixth image comprising two channels.
Since the human eye is more sensitive to the information contained in the first channel than the information contained in the second channel, the noise reduction effect of the first image to be noise-reduced depends mainly on the noise reduction effect of the first channel in the first image to be noise-reduced. By combining the fourth image and the fifth image, the noise reduction of the first image to be noise reduced can be realized.
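The channel combination of step 1503 can be sketched with hypothetical arrays standing in for the fourth image (extracted second channel) and the fifth image (denoised, upsampled first channel):

```python
import numpy as np

# Hypothetical stand-ins: the two images must have the same size
# before they can be combined as channels.
fourth = np.zeros((4, 4))  # extracted second channel
fifth = np.ones((4, 4))    # denoised, upsampled first channel

# Step 1503: treat each image as one channel and combine them
# into a sixth image comprising two channels.
sixth = np.stack([fifth, fourth], axis=-1)
```

The result is a two-channel image in which the first channel carries the denoised data.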
Based on the technical scheme provided by the embodiment of the application, performing the noise reduction processing on the first image improves the noise reduction effect, and therefore the noise reduction effect on the first image to be denoised can be improved.
Referring to fig. 18, fig. 18 is a flowchart illustrating a method for implementing step 1502 when the first image is obtained through steps 1001 to 1003 according to an embodiment of the present disclosure.
1801. And rotating the second image by a second angle to obtain a seventh image.
In the embodiment of the present application, the second angle is an angle coterminal with the third angle (the two angles share the same terminal edge), and the third angle and the first angle are opposite numbers. For example, assuming that the first angle is 45 degrees, the third angle is -45 degrees and the second angle is any angle coterminal with -45 degrees.
In one possible implementation, rotating the second image by the second angle may be rotating the second image by the second angle around an origin of a pixel coordinate system of the second image, for example, the pixel coordinate system of the second image is xoy, and the origin of the pixel coordinate system is o. A seventh image is obtained by rotating the second image by a second angle around o.
In another possible implementation, the rotating the second image by the second angle may be rotating the second image by the second angle around a center of the second image, where the center of the second image is an intersection of two diagonal lines of the second image.
In yet another possible implementation, the rotating the second image by the second angle may be rotating the second image by the second angle around a coordinate axis of a pixel coordinate system of the second image. For example, the pixel coordinate system of the second image is xoy, and the abscissa axis of the pixel coordinate system is ox. A seventh image is obtained by rotating the second image by a second angle around ox. For another example, the pixel coordinate system of the second image is xoy, and the ordinate axis of the pixel coordinate system is oy. A seventh image is obtained by rotating the second image by a second angle around oy.
1802. And reducing the coordinate axis scale of the third pixel coordinate system by the factor of n to obtain a fourth pixel coordinate system.
In the embodiment of the present application, the third pixel coordinate system is the pixel coordinate system of the seventh image.
The n in this step is the same as the n in step 1002. The fourth pixel coordinate system is obtained by reducing the abscissa axis scale and the ordinate axis scale of the third pixel coordinate system by n times.
1803. And determining the pixel value of each pixel point in the fourth pixel coordinate system according to the pixel value of the pixel point in the seventh image to obtain the fifth image.
As described above, when the scale of the pixel coordinate system of the image is changed, the area covered by the pixel points in the image is also changed accordingly. And the image processing device determines the pixel value of each pixel point under the fourth pixel coordinate system according to the pixel value of the pixel point in the seventh image to obtain a fifth image.
In a possible implementation manner, the image processing apparatus uses an average value of pixel values in an area covered by each pixel point in the fourth pixel coordinate system as a pixel value of the pixel point.
In other possible implementation manners, the pixel value of each pixel point in the fifth image is determined by:
for a pixel point whose center coincides with the center of a first-type pixel point, the pixel value of the corresponding first-type pixel point is taken as the pixel value of that pixel point;
for a pixel point whose center does not coincide with the center of any first-type pixel point, the pixel value of that pixel point is taken as the seventh value. Optionally, the seventh value is 0.
For example, the second image shown in fig. 19a is rotated 45 degrees counterclockwise, resulting in the seventh image shown in fig. 19b. The coordinate axis scale of the pixel coordinate system of the seventh image shown in fig. 19b is reduced by n times, obtaining the fifth image shown in fig. 19c. In the fifth image shown in fig. 19c, the centers of the following pixel points are not the centers of first-type pixel points: N11, N13, N22, N24, N31, N33, N42, and N44. The pixel values of these pixel points are all the seventh value.
In fig. 19b and fig. 19c, pixel point D13 has the same center as pixel point G12, pixel point D24 has the same center as pixel point G14, pixel point D12 has the same center as pixel point G21, pixel point D23 has the same center as pixel point G23, pixel point D22 has the same center as pixel point G32, pixel point D33 has the same center as pixel point G34, pixel point D21 has the same center as pixel point G41, and pixel point D32 has the same center as pixel point G43. Therefore, the pixel value of pixel point D13 is the same as that of pixel point G12, the pixel value of pixel point D24 is the same as that of pixel point G14, the pixel value of pixel point D12 is the same as that of pixel point G21, the pixel value of pixel point D23 is the same as that of pixel point G23, the pixel value of pixel point D22 is the same as that of pixel point G32, the pixel value of pixel point D33 is the same as that of pixel point G34, the pixel value of pixel point D21 is the same as that of pixel point G41, and the pixel value of pixel point D32 is the same as that of pixel point G43.
According to the embodiment, the seventh image is obtained by rotating the second image, and the fifth image is obtained by adjusting the coordinate axis scale of the pixel coordinate system of the seventh image, so that the effects of reducing the data processing amount and improving the processing speed can be achieved.
It should be understood that in the drawings in the embodiments of the present application, the first image to be noise-reduced includes the three channels R, G, and B, and the first channel is the G channel; in practical applications, however, the first image to be noise-reduced may include channels other than R, G, and B, and the first channel may not be the G channel. The drawings provided in the embodiments of the present application are only examples and should not be construed as limiting the present application.
Referring to fig. 20, fig. 20 is a flowchart illustrating another image denoising method according to an embodiment of the present disclosure.
2001. And acquiring a second image to be denoised.
In the embodiment of the present application, the second image to be noise-reduced is a RAW image, and the number of channels in the second image to be noise-reduced is greater than or equal to 2. For example, the second image to be noise reduced contains R, G two channels. As another example, the second image to be noise reduced contains R, G, B three channels. As another example, the second image to be noise reduced contains R, Y, B three channels.
Because human eyes have different sensitivities to different colors, in the case that the RAW image includes at least two color channels, in order to facilitate human eyes to obtain better visual perception and more information by observing the RAW image, the color channel to which the human eye is most sensitive generally includes the most pixel points in the RAW image. For example, the sensitivity of the human eye to green is higher than its sensitivity to red or to blue; therefore, in the case where the RAW image includes the three channels R, G, and B, the G channel includes the largest number of pixel points. For another example, since the sensitivity of the human eye to yellow is higher than its sensitivity to red or to blue, when the RAW image includes the three channels R, Y, and B, the Y channel includes the largest number of pixel points.
And the channel with the largest number of pixel points contained in the second image to be denoised is called a third channel. For example, in a case where the second image to be noise-reduced includes R, G, B three channels, the G channel is a channel having the largest number of included pixels, and at this time, the third channel is the G channel.
It should be understood that, in the case that the second image to be denoised includes two channels and the number of the pixel points of each channel is equal, the third channel may be either channel of the second image to be denoised. For example, if in the second image to be denoised the ratio of the number of pixel points of the R channel to the number of pixel points of the G channel is 1:1, the third channel may be the R channel or the G channel.
In this embodiment of the application, the second image to be denoised includes second-type pixel points, where the second-type pixel points are the pixel points belonging to the third channel. For example, the third channel is the G channel, and the second image to be denoised includes pixel point a, pixel point b, pixel point c, and pixel point d, where pixel point a and pixel point c belong to the G channel. In this case, the second-type pixel points include pixel point a and pixel point c.
In one implementation of obtaining the second image to be denoised, the image processing apparatus receives the second image to be denoised input by the user through the input component. The above-mentioned input assembly includes: keyboard, mouse, touch screen, touch pad, audio input device, etc.
In another implementation manner of acquiring the second image to be denoised, the image processing apparatus receives the second image to be denoised sent by a first terminal. Optionally, the first terminal may be any one of the following: a cell phone, a computer, a tablet computer, a server, or a wearable device.
In another implementation manner of acquiring the second image to be denoised, the image processing apparatus may acquire the second image to be denoised through the imaging component. Optionally, the imaging component may be a camera.
2002. And extracting a third channel in the second image to be denoised to obtain an eighth image.
And the image processing device extracts a third channel in the second image to be denoised to obtain an eighth image. The size of the eighth image is the same as the size of the second image to be denoised. In the eighth image, the pixel values of the pixel points of the third channel are the same as those of the pixel points of the third channel in the second to-be-denoised image, the pixel points except the pixel points of the third channel are all seventh filling pixel points, and the pixel values of the seventh filling pixel points are all eighth values. Optionally, the eighth value is 0.
For example, the second image to be noise-reduced shown in fig. 21a includes the three channels R, G, and B, and the G channel in the second image to be noise-reduced is extracted, resulting in the eighth image shown in fig. 21b. The pixel value of pixel point G12 in the second image to be denoised is the same as the pixel value of pixel point G12 in the eighth image, the pixel value of pixel point G14 in the second image to be denoised is the same as the pixel value of pixel point G14 in the eighth image, ..., and the pixel value of pixel point G44 in the second image to be denoised is the same as the pixel value of pixel point G44 in the eighth image. In the eighth image, the pixel values of pixel points N11, N13, N22, N24, N31, N33, N42, and N44 are all 0.
It should be understood that the eighth image in the embodiment of the present application is an image matrix, and the shapes of the pixel points in the image matrix are all squares, for example, in the eighth image shown in fig. 21b, the pixel points are all squares. In the second image to be denoised, the shape of the pixel point comprises at least one of the following: circular, diamond, oval, rectangular, square.
2003. And carrying out downsampling processing on the eighth image to obtain a ninth image.
Before proceeding with the following explanation, continuous images are defined. In the embodiment of the present application, a continuous image is an image in which all the pixel points belong to the same channel; for convenience of description, images other than continuous images are hereinafter referred to as discontinuous images. For example, the second image to be noise-reduced shown in fig. 21a is a discontinuous image, and the ninth image shown in fig. 23 is a continuous image.
It should be understood that the continuous image may contain the filling pixel points, and the continuous image shown in fig. 23 contains the ninth filling pixel point. If the pixel points except the filling pixel points in the continuous image are called channel pixel points, no filling pixel point exists between any two adjacent channel pixel points in the continuous image.
In the embodiment of the present application, the ninth image is a continuous image, and the ninth image includes the second type of pixel points.
In the embodiment of the present application, a ratio of the resolution of the ninth image to the resolution of the second image to be denoised is greater than the second threshold. Optionally, the second threshold is 0.25.
If the down-sampling process performed on the second image to be noise-reduced is referred to as a second down-sampling process, the down-sampling magnification of the second down-sampling process is greater than 0.5.
In a possible implementation manner, the second downsampling window used for the second downsampling process is square, and the center of the second downsampling window coincides with the center of a second-type pixel point, where the center of the second downsampling window is the intersection point of its two diagonals and the center of the second-type pixel point is the intersection point of its two diagonals. The area of the second downsampling window is larger than that of the second-type pixel point, and the vertices of the second-type pixel point are located on the boundary of the second downsampling window.
The image processing means divides the eighth image into at least one pixel point region through at least one second downsampling window. And taking each pixel point region as a pixel point, and determining the pixel value of the pixel point corresponding to the pixel point region according to the pixel value in each pixel point region to realize second downsampling processing of the eighth image.
For example, the ninth image shown in fig. 23 can be obtained by performing the second downsampling process on the eighth image shown in fig. 22. Suppose that in the eighth image shown in fig. 22, the center of pixel point N11 is Z1, the center of pixel point G12 is Z2, the center of pixel point N13 is Z3, the center of pixel point G14 is Z4, the center of pixel point G21 is Z5, the center of pixel point N22 is Z6, the center of pixel point G23 is Z7, the center of pixel point N24 is Z8, the center of pixel point N31 is Z9, the center of pixel point G32 is Z10, the center of pixel point N33 is Z11, the center of pixel point G34 is Z12, the center of pixel point G41 is Z13, the center of pixel point N42 is Z14, the center of pixel point G43 is Z15, and the center of pixel point N44 is Z16.
The center of second downsampling window TZ1Z6Z9 (hereinafter referred to as second downsampling window 1) is Z5; the area of second downsampling window 1 is larger than that of pixel point G21, and the vertices of pixel point G21 are located on the four sides of second downsampling window 1. The center of second downsampling window Z1AZ3Z6 (hereinafter referred to as second downsampling window 2) is Z2; the area of second downsampling window 2 is larger than that of pixel point G12, and the vertices of pixel point G12 are located on the four sides of second downsampling window 2. The center of second downsampling window QZ9Z14O (hereinafter referred to as second downsampling window 3) is Z13; the area of second downsampling window 3 is larger than that of pixel point G41, and the vertices of pixel point G41 are located on the four sides of second downsampling window 3. The center of second downsampling window Z9Z6Z11Z14 (hereinafter referred to as second downsampling window 4) is Z10; the area of second downsampling window 4 is larger than that of pixel point G32, and the vertices of pixel point G32 are located on the four sides of second downsampling window 4. The center of second downsampling window Z6Z3Z8Z11 (hereinafter referred to as second downsampling window 5) is Z7; the area of second downsampling window 5 is larger than that of pixel point G23, and the vertices of pixel point G23 are located on the four sides of second downsampling window 5. The center of second downsampling window Z3DFZ8 (hereinafter referred to as second downsampling window 6) is Z4; the area of second downsampling window 6 is larger than that of pixel point G14, and the vertices of pixel point G14 are located on the four sides of second downsampling window 6. The center of second downsampling window Z14Z11Z16L (hereinafter referred to as second downsampling window 7) is Z15; the area of second downsampling window 7 is larger than that of pixel point G43, and the vertices of pixel point G43 are located on the four sides of second downsampling window 7. The center of second downsampling window Z11Z8IZ16 (hereinafter referred to as second downsampling window 8) is Z12; the area of second downsampling window 8 is larger than that of pixel point G34, and the vertices of pixel point G34 are located on the four sides of second downsampling window 8.
The pixel point region in second downsampling window 1 is taken as pixel point D12 in the ninth image, and the pixel value of pixel point D12 is determined according to the pixel values in second downsampling window 1. The pixel point region in second downsampling window 2 is taken as pixel point D13 in the ninth image, and the pixel value of pixel point D13 is determined according to the pixel values in second downsampling window 2. The pixel point region in second downsampling window 3 is taken as pixel point D21 in the ninth image, and the pixel value of pixel point D21 is determined according to the pixel values in second downsampling window 3. The pixel point region in second downsampling window 4 is taken as pixel point D22 in the ninth image, and the pixel value of pixel point D22 is determined according to the pixel values in second downsampling window 4. The pixel point region in second downsampling window 5 is taken as pixel point D23 in the ninth image, and the pixel value of pixel point D23 is determined according to the pixel values in second downsampling window 5. The pixel point region in second downsampling window 6 is taken as pixel point D24 in the ninth image, and the pixel value of pixel point D24 is determined according to the pixel values in second downsampling window 6. The pixel point region in second downsampling window 7 is taken as pixel point D32 in the ninth image, and the pixel value of pixel point D32 is determined according to the pixel values in second downsampling window 7. The pixel point region in second downsampling window 8 is taken as pixel point D33 in the ninth image, and the pixel value of pixel point D33 is determined according to the pixel values in second downsampling window 8.
Optionally, the mean value of the pixel values in each second downsampling window is taken as the pixel value of the pixel point corresponding to that second downsampling window; for example, the mean value of the pixel values in second downsampling window 1 is taken as the pixel value of pixel point D12.
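Under the assumption that each second downsampling window is a diamond of area two pixel areas centered on a second-type pixel point (so that it fully covers the center pixel and a quarter of each of its four edge neighbors, while only touching the corners of the diagonal neighbors), the optional mean rule can be sketched as follows. The function name and the boundary handling are assumptions:

```python
import numpy as np

def diamond_window_mean(img, i, j):
    """Mean pixel value inside a diamond window of area 2 centred on
    pixel (i, j): the centre pixel contributes its full area, and each
    of the four edge neighbours contributes a quarter of its area.
    Out-of-bounds neighbours are simply skipped (a sketch, not the
    claimed boundary handling)."""
    h, w = img.shape
    total = float(img[i, j])                 # centre pixel, fully covered
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        u, v = i + di, j + dj
        if 0 <= u < h and 0 <= v < w:
            total += 0.25 * img[u, v]        # quarter-covered edge neighbour
    return total / 2.0                       # window area is 2 pixel areas
```

A constant image stays constant under this rule, since the covered areas sum to the window area.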
It should be understood that, in fig. 22, the following pixel regions are all eighth filling pixels: triangle region ABW, triangle region DEC, triangle region FGE, triangle region IJH, triangle region LMK, triangle region PQN, triangle region RSQ, and triangle region UVT. The pixel values in the eighth filling pixel point regions are all the ninth value; optionally, the ninth value is 0. In fig. 23, the following pixel points are all ninth filling pixel points: pixel point D11, pixel point D14, pixel point D31, and pixel point D34. The pixel value of a ninth filling pixel point is used for representing the degree of green brightness, that is, the ninth filling pixel points are pixel points of the G channel. The pixel values of the ninth filling pixel points are all the tenth value; optionally, the tenth value is 0.
As can be seen from fig. 22, in the eighth image, a non-G-channel pixel point exists between any two G-channel pixel points. Because the information carried by the pixel points of the G channel is different from the information carried by the pixel points of non-G channels, directly performing image noise reduction on the second image to be denoised reduces the noise reduction effect.
As can be seen from fig. 23, in the ninth image obtained by performing the second downsampling processing on the eighth image, all the pixel points other than the ninth filling pixel points are pixel points of the G channel. Because the ninth filling pixel points are also pixel points of the G channel, performing image denoising processing on the ninth image can improve the denoising effect.
2004. And carrying out noise reduction processing on the ninth image to obtain a tenth image.
In the embodiment of the present application, the noise reduction processing may be implemented by any method capable of implementing image noise reduction. The method comprises one of the following steps: mean filtering, bilateral filtering, gaussian filtering, statistical ordering filtering.
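Mean filtering, one of the listed options, can be sketched as follows (an illustrative sketch; the embodiment does not limit the denoising method, and the function name, kernel size, and edge padding are assumptions):

```python
import numpy as np

def mean_filter(img, k=3):
    """k x k mean filtering with edge padding, so the output keeps the
    input size; one of the listed denoising options (mean filtering)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            # Each output pixel is the mean of its k x k neighbourhood.
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out
```

An isolated noise spike of 9 surrounded by zeros is averaged down to 1 by the 3 × 3 window, while a constant image is left unchanged.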
The ninth image is subjected to noise reduction processing, so that the noise reduction of the third channel in the second image to be denoised is realized, and a tenth image is obtained.
In the embodiment of the present application, the ninth image is obtained by performing the second downsampling process on the eighth image, so that the discontinuous image is converted into the continuous image, and further, the noise reduction process can be performed on the continuous image. Because the ratio of the resolution of the ninth image to the resolution of the second image to be denoised is greater than 0.25, the image processing device can improve the denoising effect of the third channel of the second image to be denoised by denoising the eighth image.
Further, in the embodiment of the present application, in the process of performing the second downsampling processing on the eighth image, the image processing apparatus determines the pixel values in the ninth image according to the pixel values in each second downsampling window. Each second downsampling window contains filling pixel points in addition to the second-type pixel points, and the pixel values in the ninth image are determined according to both the pixel values of the second-type pixel points and the pixel values of the filling pixel points. In this way, the loss of resolution of the second-type pixel points is reduced, and a ninth image with higher resolution is obtained, thereby improving the noise reduction effect on the second image to be denoised.
As an optional implementation manner, the arrangement manner of the pixel points in the second image to be denoised is a diagonal array, where the meaning of the diagonal array can be seen below:
it is assumed that the second image to be noise-reduced includes a fifth pixel point, a sixth pixel point, a seventh pixel point, and an eighth pixel point, with coordinates (p, q), (p+1, q), (p, q+1), and (p+1, q+1) respectively, where p and q are positive integers. In the case that the fifth pixel point is a second-type pixel point, neither the sixth pixel point nor the seventh pixel point is a second-type pixel point, and the eighth pixel point is a second-type pixel point. In the case that the fifth pixel point is not a second-type pixel point, the sixth pixel point and the seventh pixel point are both second-type pixel points, and the eighth pixel point is not a second-type pixel point.
For example, as shown in fig. 24a, in the case where the fifth pixel point is a second-type pixel point, neither the sixth pixel point nor the seventh pixel point is a second-type pixel point, and the eighth pixel point is a second-type pixel point. As shown in fig. 24b, in the case where the fifth pixel point is not a second-type pixel point, the sixth pixel point and the seventh pixel point are both second-type pixel points, and the eighth pixel point is not a second-type pixel point.
As can be seen from fig. 24a and 24b, in the case where the pixels are arranged in a diagonal array, the arrangement of the pixels in the image is as shown in fig. 25a or as shown in fig. 25 b.
Optionally, the arrangement mode of the pixel points in the second image to be denoised is a bayer array.
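The diagonal-array condition described above can be checked programmatically. The following sketch (the function name and the example mask layout are assumptions) verifies that a boolean channel mask forms a checkerboard, i.e. positions (p, q) and (p+1, q+1) agree while (p+1, q) and (p, q+1) take the opposite value:

```python
import numpy as np

def is_diagonal_array(mask):
    """Check that a boolean channel mask is a diagonal (checkerboard)
    array: membership in the channel depends only on the parity of
    row index + column index."""
    h, w = mask.shape
    rows, cols = np.indices((h, w))
    parity = (rows + cols) % 2
    return np.array_equal(mask, parity == 0) or np.array_equal(mask, parity == 1)

# Hypothetical G-pixel layout of a Bayer-like array: G at even-parity positions.
bayer_g = np.indices((4, 4)).sum(axis=0) % 2 == 0
```

A Bayer array's G-pixel mask satisfies the check, while an all-ones mask does not.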
Referring to fig. 26, fig. 26 is a flowchart illustrating a method for implementing step 2003 according to an embodiment of the present disclosure.
2601. And rotating the eighth image by a fourth angle to obtain an eleventh image.
In the embodiment of the present application, the fourth angle is an odd multiple of 45 degrees. Assume that the fourth angle is J2; J2 satisfies the following formula:

J2 = r2 × 45 degrees … formula (2)

wherein r2 is an odd number.
For example, assume that the rotation angle obtained by rotating the eighth image clockwise is positive, and the rotation angle obtained by rotating the eighth image counterclockwise is negative. When r2 = 1, the fourth angle is 45 degrees, and the eighth image is rotated by 45 degrees clockwise to obtain the eleventh image. When r2 = -1, the fourth angle is -45 degrees, and the eighth image is rotated by 45 degrees counterclockwise to obtain the eleventh image. When r2 = 3, the fourth angle is 135 degrees, and the eighth image is rotated by 135 degrees clockwise to obtain the eleventh image. When r2 = -5, the fourth angle is -225 degrees, and the eighth image is rotated by 225 degrees counterclockwise to obtain the eleventh image.
For another example, assume that the rotation angle obtained by rotating the eighth image counterclockwise is positive, and the rotation angle obtained by rotating the eighth image clockwise is negative. When r2 = 1, the fourth angle is 45 degrees, and the eighth image is rotated by 45 degrees counterclockwise to obtain the eleventh image. When r2 = -1, the fourth angle is -45 degrees, and the eighth image is rotated by 45 degrees clockwise to obtain the eleventh image. When r2 = 3, the fourth angle is 135 degrees, and the eighth image is rotated by 135 degrees counterclockwise to obtain the eleventh image. When r2 = -5, the fourth angle is -225 degrees, and the eighth image is rotated by 225 degrees clockwise to obtain the eleventh image.
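Formula (2) can be sketched as follows; the function name `fourth_angle` is hypothetical, and the oddness check on r2 reflects the requirement that the fourth angle be an odd multiple of 45 degrees:

```python
def fourth_angle(r2):
    """Fourth angle J2 = r2 * 45 degrees, where r2 is an odd integer,
    so that J2 is always an odd multiple of 45 degrees (formula (2))."""
    if r2 % 2 == 0:
        raise ValueError("r2 must be an odd number")
    return r2 * 45

# Under the clockwise-positive sign convention of the first example:
assert fourth_angle(1) == 45     # rotate 45 degrees clockwise
assert fourth_angle(-1) == -45   # rotate 45 degrees counterclockwise
assert fourth_angle(3) == 135
assert fourth_angle(-5) == -225
```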
In a possible implementation manner, the image processing apparatus rotates the eighth image by a fourth angle, which may be that the eighth image is rotated by the fourth angle around an origin of a pixel coordinate system of the second image to be denoised, for example, the pixel coordinate system of the eighth image is xoy, and the origin of the pixel coordinate system is o. An eleventh image is obtained by rotating the eighth image by a fourth angle around o.
In another possible implementation manner, the image processing apparatus rotates the eighth image by a fourth angle, which may be that the eighth image is rotated by the fourth angle around the center of the eighth image, where the center of the eighth image is an intersection of two diagonal lines of the eighth image. For example, the eleventh image shown in fig. 28 can be obtained by rotating the eighth image shown in fig. 27 by 45 degrees around the center of the eighth image.
In yet another possible implementation manner, the image processing apparatus rotates the eighth image by the fourth angle, which may be rotating the eighth image by the fourth angle around a coordinate axis of the pixel coordinate system of the eighth image. For example, the pixel coordinate system of the eighth image is xoy, and the abscissa axis of the pixel coordinate system is ox; an eleventh image is obtained by rotating the eighth image by the fourth angle around ox. For another example, the pixel coordinate system of the eighth image is xoy, and the ordinate axis of the pixel coordinate system is oy; an eleventh image is obtained by rotating the eighth image by the fourth angle around oy.
On the premise that the rotation angle is the fourth angle, the method for rotating the eighth image is not limited in the present application.
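Each in-plane rotation variant above reduces to rotating every pixel center about a chosen point. A generic coordinate-rotation sketch (not the apparatus's actual implementation; the counterclockwise-positive sign convention is an assumption):

```python
import math

def rotate_point(x, y, cx, cy, degrees):
    """Rotate point (x, y) by `degrees` (counterclockwise positive)
    around the center (cx, cy)."""
    t = math.radians(degrees)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(t) - dy * math.sin(t),
            cy + dx * math.sin(t) + dy * math.cos(t))

# Rotating by an odd multiple of 45 degrees turns a diagonal neighbour
# into an axis-aligned one, which is the point of step 2601:
x, y = rotate_point(1.0, 1.0, 0.0, 0.0, -45)   # (1, 1) lands on the x axis
assert abs(x - math.sqrt(2)) < 1e-9 and abs(y) < 1e-9
```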
2602. Magnify the coordinate axis scale of the fifth pixel coordinate system by m times to obtain a sixth pixel coordinate system.
In the embodiment of the present application, the fifth pixel coordinate system is the pixel coordinate system of the eleventh image, and the ninth pixel coordinate system is the pixel coordinate system of the seventeenth image.
In the embodiment of the present application, m is a positive number, and optionally,
Figure BDA0002563585550000201
the image processing apparatus obtains a sixth pixel coordinate system by enlarging both the abscissa axis scale and the ordinate axis scale of the fifth pixel coordinate system by m times.
For example, suppose
Figure BDA0002563585550000202
The sixth pixel coordinate system shown in fig. 29 is obtained by enlarging the abscissa axis scale and the ordinate axis scale of the fifth pixel coordinate system (i.e., xoy) shown in fig. 28 by m times.
Similarly, the image processing apparatus magnifies both the abscissa axis scale and the ordinate axis scale of the ninth pixel coordinate system by m times, and obtains the tenth pixel coordinate system.
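Since one scale division of a pixel coordinate system is the side length of one pixel point, enlarging both axis scales by m times gives a fixed point coordinates divided by m. A minimal sketch under that reading (the helper name is hypothetical, and m is left as a parameter because the embodiment's specific value of m appears only in the formula image above):

```python
def rescale_coordinates(x, y, m):
    """Enlarging both axis scales of a pixel coordinate system by m times
    means the unit of the axes (one pixel side length) becomes m times
    longer, so a fixed point's coordinates are divided by m."""
    if m <= 0:
        raise ValueError("m must be a positive number")
    return x / m, y / m

assert rescale_coordinates(4.0, 2.0, 2.0) == (2.0, 1.0)
```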
2603. And determining the pixel value of each pixel point in the sixth pixel coordinate system according to the pixel value of the pixel point in the eleventh image to obtain the ninth image.
Because the scale of a pixel coordinate system takes a pixel point as its unit, that is, one scale division of the pixel coordinate system is the side length of one pixel point, when the scale of the pixel coordinate system of an image is changed, the area covered by each pixel point in the image changes accordingly. The image processing apparatus determines the pixel value of each pixel point under the sixth pixel coordinate system according to the pixel values of the pixel points in the eleventh image, to obtain the ninth image. Optionally, the image processing apparatus uses the average value of the pixel values in the area covered by each pixel point under the sixth pixel coordinate system as the pixel value of that pixel point in the ninth image.
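The "average value of the pixel values in the covered area" step can be illustrated with a simplified stand-in in which each output pixel covers a k-by-k block of input pixels (after a 45-degree rotation the real covered regions are not axis-aligned blocks, so this is only an analogy; the function name is hypothetical):

```python
def block_mean(image, k):
    """Assign each output pixel the mean of the k-by-k area it covers in
    the input image. Assumes the image height and width are multiples of k."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h, k):
        row = []
        for j in range(0, w, k):
            vals = [image[i + di][j + dj] for di in range(k) for dj in range(k)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

img = [[1, 3],
       [5, 7]]
assert block_mean(img, 2) == [[4.0]]   # (1 + 3 + 5 + 7) / 4
```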
For example, the image processing apparatus determines the pixel values of the respective pixel points in the sixth pixel coordinate system (i.e., xoy) according to the pixel values of the pixel points in the eleventh image shown in fig. 29, and may obtain the ninth image shown in fig. 30. In fig. 29, the following regions are all tenth filling pixel point regions: triangle area ABW, triangle area DEC, triangle area GHF, triangle area HIJ, triangle area KLM, triangle area NPQ, triangle area RST, triangle area TUV. The pixel values in the tenth filling pixel point regions are all the eleventh value. Optionally, the eleventh value is 0. In fig. 30, the following pixel points are all eleventh filling pixel points: pixel point D11, pixel point D14, pixel point D31, pixel point D34. The pixel value of an eleventh filling pixel point is used for representing the brightness degree of green, that is, the eleventh filling pixel points are pixel points of the G channel. The pixel values of the eleventh filling pixel points are all the tenth value. Optionally, the tenth value is 0.
The present embodiment obtains the eleventh image by rotating the eighth image. The ninth image is obtained by adjusting the coordinate axis scale of the pixel coordinate system of the eleventh image, so that the discontinuous image can be converted into the continuous image, and the effects of reducing the data processing amount and improving the processing speed can be achieved.
Referring to fig. 31, fig. 31 is a schematic flowchart illustrating another implementation method of step 2003 provided in this embodiment of the present application.
3101. A twelfth image is constructed.
In the embodiment of the present application, the twelfth image includes the second type of pixel points in the second to-be-denoised image. For example, assume that the third channel is a G channel. The second image to be denoised comprises: the device comprises a pixel point a, a pixel point b, a pixel point c and a pixel point d, wherein the pixel point a and the pixel point c belong to a G channel. The twelfth image includes: pixel a and pixel c.
The twelfth image may only include the second-type pixel points in the second image to be denoised, or may further include pixel points other than the second-type pixel points in the second image to be denoised. The size of the twelfth image is not limited in the present application.
For example, in the second image to be denoised shown in fig. 32, the second-type pixel points include: pixel point G12, pixel point G14, pixel point G21, pixel point G23, pixel point G32, pixel point G34, pixel point G41, pixel point G43. Based on the second-type pixel points in the second image to be denoised shown in fig. 32, the image processing apparatus may construct a twelfth image as shown in fig. 33, may also construct a twelfth image as shown in fig. 34, and may also construct a twelfth image as shown in fig. 35. In the twelfth image shown in fig. 35, the following pixel points are twelfth filling pixel points: pixel points P1, P2, P3 and P4, and the pixel value of each twelfth filling pixel point is the thirteenth value. Optionally, the thirteenth value is 0.
3102. The ninth image is obtained by reducing the pixel values in the twelfth image by s times (i.e., dividing them by s).
Since the pixel values in the eighth image are changed by performing the second down-sampling processing on the eighth image, the pixel values in the eighth image differ from the corresponding pixel values in the ninth image. Because the pixel values in the ninth image are determined according to the pixel values within the second down-sampling windows, and in any one second down-sampling window the area occupied by the second-type pixel points and the area occupied by the fifth filling pixel points are the same, the ratio of a pixel value in the eighth image to the corresponding pixel value in the ninth image is determinate.
For example (example 1), assume that the pixel values in the ninth image are the mean of the pixel values within the second down-sampling window. Taking fig. 24 and fig. 25 as an example, the pixel value of pixel point D12 is the mean of the pixel values within the second down-sampling window 1, and the pixel value of pixel point D21 is the mean of the pixel values within the second down-sampling window 3. Further, assume that the pixel value of pixel point G21 shown in fig. 24 is x1 and the pixel value of pixel point G41 is x2. In the case where the eighth value is 0, that is, the pixel value of the seventh pixel point is 0, in the ninth image shown in fig. 25 the pixel value of pixel point D12 is x1/2 and the pixel value of pixel point D21 is x2/2; at this time, the pixel value of pixel point G21 / the pixel value of pixel point D12 = the pixel value of pixel point G41 / the pixel value of pixel point D21 = 2, that is, the ratio of a pixel value in the eighth image to the corresponding pixel value in the ninth image is 2. In the case where the eighth value is 1, that is, the pixel value of the seventh pixel point is 1, in the ninth image shown in fig. 25 the pixel value of pixel point D12 is (x1+1)/2 and the pixel value of pixel point D21 is (x2+1)/2; at this time, the pixel value of pixel point G21 / the pixel value of pixel point D12 = 2x1/(x1+1), and the pixel value of pixel point G41 / the pixel value of pixel point D21 = 2x2/(x2+1), that is, the ratio of a pixel value in the eighth image to the corresponding pixel value in the ninth image depends on the pixel value in the eighth image.
In the embodiment of the present application, s is used to characterize the ratio between a pixel value in the eighth image and the corresponding pixel value in the ninth image. Continuing example 1: in the case where the eighth value is 0, s is 2; in the case where the eighth value is 1, s is 2x/(x+1), where x is the pixel value in the eighth image. The specific value of s can be adjusted according to actual requirements, and is not limited in the present application.
By reducing the pixel values in the twelfth image by s times, the image processing apparatus can obtain a first intermediate image containing the pixel points of the ninth image, and use this first intermediate image as the ninth image.
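Step 3102 itself is a per-pixel division. A minimal sketch (the function name is hypothetical; s = 2 corresponds to the eighth value being 0, as in example 1):

```python
def reduce_by_s(twelfth, s):
    """Reduce every pixel value of the twelfth image by s times (i.e.
    divide by s) to obtain the first intermediate image used as the
    ninth image."""
    return [[v / s for v in row] for row in twelfth]

twelfth = [[4, 6],
           [8, 2]]
assert reduce_by_s(twelfth, 2) == [[2.0, 3.0],
                                   [4.0, 1.0]]
```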
For example, the second image to be denoised shown in fig. 21a is the same as the second image to be denoised shown in fig. 32. If the third channel of the second image to be denoised is extracted, the eighth image shown in fig. 21b is obtained, and performing the second down-sampling processing on the eighth image (see fig. 22) yields the ninth image shown in fig. 23. If the twelfth image shown in fig. 33 is constructed according to the second-type pixel points in the second image to be denoised, and the pixel values in that twelfth image are reduced by s times, the first intermediate image shown in fig. 36 can be obtained. If the twelfth image shown in fig. 34 is constructed according to the second-type pixel points in the second image to be denoised, and the pixel values in that twelfth image are reduced by s times, the first intermediate image shown in fig. 37 can be obtained. Obviously, the position of a pixel point in the first intermediate image shown in fig. 36 or in the first intermediate image shown in fig. 37 may differ from its position in the ninth image shown in fig. 23 (it should be understood that the ninth image shown in fig. 23 is obtained based on the eighth image shown in fig. 22, and the eighth image shown in fig. 22 is obtained by extracting the G channel of the second image to be denoised shown in fig. 32, which is why fig. 23 is compared with fig. 36 and fig. 37 here). For example: the position of pixel point D12 in the ninth image shown in fig. 23 is (1, 3) and the position of pixel point D33 is (3, 2); in the first intermediate image shown in fig. 36, the position of pixel point D12 is (1, 2) and the position of pixel point D33 is (2, 3); in the first intermediate image shown in fig. 37, the position of pixel point D12 is (1, 2) and the position of pixel point D33 is (4, 1).
In the case where the first intermediate image shown in fig. 36 is taken as the ninth image, in the twelfth image shown in fig. 33 the pixel point corresponding to pixel point D12 is pixel point G21; that is, pixel point G21 in the second image to be denoised is the pixel point corresponding to pixel point D12. In the case where the first intermediate image shown in fig. 36 is taken as the ninth image, in the twelfth image shown in fig. 33 the pixel point corresponding to pixel point D33 is pixel point G34; that is, pixel point G34 in the second image to be denoised is the pixel point corresponding to pixel point D33. In the case where the first intermediate image shown in fig. 37 is taken as the ninth image, in the twelfth image shown in fig. 34 the pixel point corresponding to pixel point D12 is pixel point G21; that is, pixel point G21 in the second image to be denoised is the pixel point corresponding to pixel point D12. In the case where the first intermediate image shown in fig. 37 is taken as the ninth image, in the twelfth image shown in fig. 34 the pixel point corresponding to pixel point D33 is pixel point G34; that is, pixel point G34 in the second image to be denoised is the pixel point corresponding to pixel point D33.
As an alternative implementation, step 3101 performed by the image processing apparatus may include one of the following steps:
31. Arrange at least one second-type pixel point whose center belongs to the same first diagonal straight line, in ascending order of abscissa, into one row of pixel points of the image to construct a thirteenth image, and sort the rows in the thirteenth image to obtain the twelfth image.
In this embodiment of the application, the diagonal line of the second image to be denoised includes a first line segment. The first diagonal straight lines include: the straight line passing through the first line segment, and the straight lines parallel to the first line segment. For example, the two diagonal lines of the second image to be denoised are line segment AC and line segment BD (in another example, line segment EG and line segment FH). The first diagonal straight lines then include: the straight line passing through AC and the straight lines parallel to AC; or the first diagonal straight lines include: the straight line passing through BD and the straight lines parallel to BD.
In the embodiment of the present application, the second type of pixel points include second type of pixel points in the second to-be-denoised image. For example, the second image to be noise-reduced includes: pixel a, pixel b, pixel c and pixel d. Under the condition that the third channel is a G channel, the second type of pixel points comprise: pixel a and pixel c.
Because correlation exists between adjacent pixel points, the twelfth image keeps the positional relationship between the second-type pixel points in the eighth image, which can improve the accuracy of image noise reduction. Because a rotation angle exists between the second image to be denoised and the ninth image, or between the pixel coordinate system of the second image to be denoised and the pixel coordinate system of the ninth image, and the rotation angle is an odd multiple of 45 degrees, at least one second-type pixel point whose center belongs to the same first diagonal straight line is arranged, in ascending order of abscissa, into one row of pixel points of the image to construct the thirteenth image, and the rows in the thirteenth image are sorted; this keeps the positional relationship between the second-type pixel points in the eighth image and yields the twelfth image.
For example (example 2), in the second image to be denoised shown in fig. 38a, the second-type pixel points include: pixel point G12, pixel point G14, pixel point G21, pixel point G23, pixel point G32, pixel point G34, pixel point G41, pixel point G43, and the two diagonal lines of the second image to be denoised are line segment OG and line segment DJ. Suppose line segment OG is the first line segment; then the first diagonal straight lines include: straight line CE, straight line AF, straight line OG, straight line LH, straight line KI. Since the only second-type pixel point whose center lies on straight line CE is pixel point G14, pixel point G14 is taken as one row of pixel points of the image (hereinafter referred to as the CE row pixel points). The second-type pixel points whose centers lie on straight line AF include: pixel point G12, pixel point G23, pixel point G34; pixel points G12, G23 and G34 are arranged, in ascending order of abscissa, into one row of pixel points of the image (hereinafter referred to as the AF row pixel points). The second-type pixel points whose centers lie on straight line LH include: pixel point G21, pixel point G32, pixel point G43; pixel points G21, G32 and G43 are arranged, in ascending order of abscissa, into one row of pixel points of the image (hereinafter referred to as the LH row pixel points). Since the only second-type pixel point whose center lies on straight line KI is pixel point G41, pixel point G41 is taken as one row of pixel points of the image (hereinafter referred to as the KI row pixel points). A thirteenth image shown in fig. 38b is constructed based on the CE row pixel points, the AF row pixel points, the LH row pixel points and the KI row pixel points. In the thirteenth image shown in fig. 38b, pixel point P1, pixel point P2, pixel point P3 and pixel point P4 are all thirteenth filling pixel points, and the pixel value of each thirteenth filling pixel point is the fourteenth value. Optionally, the fourteenth value is 0.
It should be understood that, in the thirteenth image shown in fig. 38b, the arrangement sequence of the CE line pixels, the AF line pixels, the LH line pixels, and the KI line pixels is only an example, and should not be limited to the present application. In practical application, the arrangement sequence of the CE line pixel points, the AF line pixel points, the LH line pixel points, and the KI line pixel points may be any sequence.
Sorting the rows in the thirteenth image shown in fig. 38b may result in the twelfth image shown in fig. 39a or the twelfth image shown in fig. 39 b.
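Step 31 can be sketched as grouping the second-type pixel points by diagonal and ordering each group by ascending abscissa. In the sketch below, pixel centers on the same first diagonal straight line share a constant col − row; the coordinate convention (row, col starting at 1) and all names are assumptions for illustration:

```python
def build_thirteenth_image_rows(points):
    """Group second-type pixel points whose centers lie on the same first
    diagonal straight line (constant col - row) into one row each, each
    row ordered by ascending abscissa. The order of the rows themselves
    is arbitrary at this stage (it is fixed later by sorting)."""
    groups = {}
    for name, (row, col) in sorted(points.items()):
        groups.setdefault(col - row, []).append((name, col))
    return [[name for name, _ in sorted(g, key=lambda t: t[1])]
            for g in groups.values()]

# Example 2: second-type pixel points G_rc at (row r, column c)
pts = {"G12": (1, 2), "G14": (1, 4), "G21": (2, 1), "G23": (2, 3),
       "G32": (3, 2), "G34": (3, 4), "G41": (4, 1), "G43": (4, 3)}
rows = build_thirteenth_image_rows(pts)
# CE, AF, LH and KI rows, in some order:
assert sorted(map(tuple, rows)) == [("G12", "G23", "G34"), ("G14",),
                                    ("G21", "G32", "G43"), ("G41",)]
```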
In an implementation manner of sorting rows in the thirteenth image, a first mean value of a vertical coordinate of each row of pixel points in the thirteenth image is determined, and a first index is obtained according to the first mean value. And arranging the rows in the thirteenth image according to the descending order of the first index to obtain a twelfth image.
The first mean value refers to the mean value of the ordinates of all pixel points in each row of pixel points of the thirteenth image. The first index is obtained according to the first mean value, and the first index is positively or negatively correlated with the first mean value.
Assume that the first mean value is A1 and the first index is t1. In one implementation manner of obtaining the first index according to the first mean value, A1 and t1 satisfy the following formula:

t1 = a × A1 … formula (3)

wherein a is a non-zero real number.
In another implementation manner of obtaining the first index according to the first mean value, A1 and t1 satisfy the following formula:

t1 = a × A1 + b … formula (4)

wherein a is a non-zero real number and b is a real number.
In yet another implementation manner of obtaining the first index according to the first mean value, A1 and t1 satisfy the following formula:

Figure BDA0002563585550000241

wherein a is a non-zero real number.
Continuing example 2: the CE row pixel points include pixel point G14, so the mean value of the ordinates of the CE row pixel points is the ordinate of pixel point G14, that is, the first mean value of the CE row pixel points is 1. The AF row pixel points include pixel point G12, pixel point G23 and pixel point G34; the mean value of the ordinate of pixel point G12, the ordinate of pixel point G23 and the ordinate of pixel point G34 is 2, that is, the first mean value of the AF row pixel points is 2. The LH row pixel points include pixel point G21, pixel point G32 and pixel point G43; the mean value of the ordinate of pixel point G21, the ordinate of pixel point G32 and the ordinate of pixel point G43 is 3, that is, the first mean value of the LH row pixel points is 3. The KI row pixel points include pixel point G41, so the mean value of the ordinates of the KI row pixel points is the ordinate of pixel point G41, that is, the first mean value of the KI row pixel points is 4. Suppose the first mean value is positively correlated with the first index; since the first mean value of the CE row pixel points < the first mean value of the AF row pixel points < the first mean value of the LH row pixel points < the first mean value of the KI row pixel points, the first index of the CE row pixel points < the first index of the AF row pixel points < the first index of the LH row pixel points < the first index of the KI row pixel points, and the twelfth image shown in fig. 39a can be obtained by arranging the rows in the thirteenth image in descending order of the first index. Suppose instead the first mean value is negatively correlated with the first index; then the first index of the CE row pixel points > the first index of the AF row pixel points > the first index of the LH row pixel points > the first index of the KI row pixel points, and the twelfth image shown in fig. 39b can be obtained by arranging the rows in the thirteenth image in descending order of the first index.
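Formulas (3) and (4) and the resulting row ordering can be sketched as follows; `first_index` is a hypothetical helper, and a and b are the parameters of the formulas (with b = 0, formula (4) reduces to formula (3)):

```python
def first_index(mean, a=1.0, b=0.0):
    """First index t1 = a * A1 + b (formula (4)). a > 0 gives positive
    correlation with the first mean value, a < 0 negative correlation."""
    if a == 0:
        raise ValueError("a must be a non-zero real number")
    return a * mean + b

means = [1, 2, 3, 4]   # first means of the CE, AF, LH and KI rows (example 2)
pos = sorted(means, key=lambda m: first_index(m, a=2.0), reverse=True)
neg = sorted(means, key=lambda m: first_index(m, a=-1.0), reverse=True)
assert pos == [4, 3, 2, 1]   # descending first index, positive correlation
assert neg == [1, 2, 3, 4]   # descending first index, negative correlation
```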
In another implementation of ordering the rows in the thirteenth image, the rows in the thirteenth image are arranged in the first order, resulting in the twelfth image described above.
In this implementation manner, the diagonal line of the second image to be denoised further includes a second line segment, and the second line segment is different from the first line segment. In the case that the second line segment passes through the centers of second-type pixel points, the first straight line is the straight line on which the second line segment lies; in the case that the second line segment does not pass through the centers of second-type pixel points, the first straight line is, among the straight lines parallel to the second line segment and passing through the centers of second-type pixel points, the straight line closest to the second line segment.
For example, assume that in the second image to be denoised shown in fig. 38a, line segment JD is the second line segment. Since line segment JD passes through the centers of second-type pixel points, the first straight line is straight line JD.
For another example, assume that in the second image to be denoised shown in fig. 38a, line segment OG is the second line segment. Line segment OG does not pass through the centers of second-type pixel points, so the first straight line is, among the straight lines parallel to OG and passing through the centers of second-type pixel points, the straight line closest to OG. The straight lines parallel to OG and passing through the centers of second-type pixel points include: straight line CE, straight line AF, straight line LH, straight line KI, among which the straight lines closest to OG include: straight line AF and straight line LH. Therefore, the first straight line is straight line AF or straight line LH.
A pixel point whose center belongs to the first straight line is referred to as a first index pixel point; then each row of pixel points in the thirteenth image includes one first index pixel point. Taking the descending order of the ordinates of the first index pixel points as the first order, or taking the ascending order of the ordinates of the first index pixel points as the first order, the rows in the thirteenth image are arranged according to the first order to obtain the twelfth image.
Continuing example 2: in the second image to be denoised shown in fig. 38a, line segment JD is the second line segment. Since line segment JD passes through the centers of second-type pixel points, the first straight line is straight line JD. The pixel points whose centers belong to the first straight line include: pixel point G14, pixel point G23, pixel point G32, pixel point G41; that is, the first index pixel points include: pixel point G14, pixel point G23, pixel point G32, pixel point G41. Suppose the first order is the descending order of the ordinates of the first index pixel points; since the ordinate of pixel point G14 < the ordinate of pixel point G23 < the ordinate of pixel point G32 < the ordinate of pixel point G41, arranging the rows in the thirteenth image in the first order yields the image shown in fig. 39a. Suppose the first order is the ascending order of the ordinates of the first index pixel points; since the ordinate of pixel point G14 < the ordinate of pixel point G23 < the ordinate of pixel point G32 < the ordinate of pixel point G41, arranging the rows in the thirteenth image in the first order yields the image shown in fig. 39b.
32. Arrange at least one second-type pixel point whose center belongs to the same first diagonal straight line, in ascending order of abscissa, into one column of pixel points of the image to construct a fourteenth image, and sort the columns in the fourteenth image to obtain the twelfth image.
In this step, the meaning of the first diagonal line and the meaning of the second type of pixel point can be referred to in step 31, and will not be described herein again.
Because correlation exists between adjacent pixel points, the twelfth image keeps the positional relationship between the second-type pixel points in the eighth image, which can improve the accuracy of image noise reduction. Because a rotation angle exists between the second image to be denoised and the ninth image, or between the pixel coordinate system of the second image to be denoised and the pixel coordinate system of the ninth image, and the rotation angle is an odd multiple of 45 degrees, at least one second-type pixel point whose center belongs to the same first diagonal straight line is arranged, in ascending order of abscissa, into one column of pixel points of the image to construct the fourteenth image, and the columns in the fourteenth image are sorted; this keeps the positional relationship between the second-type pixel points in the eighth image and yields the twelfth image.
For example (example 3), in the second image to be denoised shown in fig. 40a, the second-type pixel points include: pixel point G12, pixel point G14, pixel point G21, pixel point G23, pixel point G32, pixel point G34, pixel point G41, pixel point G43, and the two diagonal lines of the second image to be denoised are line segment OG and line segment DJ. Suppose line segment OG is the second line segment; then the first diagonal straight lines include: straight line CE, straight line AF, straight line OG, straight line LH, straight line KI. Since the only second-type pixel point whose center lies on straight line CE is pixel point G14, pixel point G14 is taken as one column of pixel points of the image (hereinafter referred to as the CE column pixel points). The second-type pixel points whose centers lie on straight line AF include: pixel point G12, pixel point G23, pixel point G34; pixel points G12, G23 and G34 are arranged, in ascending order of abscissa, into one column of pixel points of the image (hereinafter referred to as the AF column pixel points). The second-type pixel points whose centers lie on straight line LH include: pixel point G21, pixel point G32, pixel point G43; pixel points G21, G32 and G43 are arranged, in ascending order of abscissa, into one column of pixel points of the image (hereinafter referred to as the LH column pixel points). Since the only second-type pixel point whose center lies on straight line KI is pixel point G41, pixel point G41 is taken as one column of pixel points of the image (hereinafter referred to as the KI column pixel points). A fourteenth image shown in fig. 40b is constructed based on the CE column pixel points, the AF column pixel points, the LH column pixel points and the KI column pixel points. In the fourteenth image shown in fig. 40b, pixel point P1, pixel point P2, pixel point P3 and pixel point P4 are all fourteenth filling pixel points, and the pixel value of each fourteenth filling pixel point is the fifteenth value. Optionally, the fifteenth value is 0. It should be understood that, in the fourteenth image shown in fig. 40b, the arrangement order of the CE column pixel points, the AF column pixel points, the LH column pixel points and the KI column pixel points is only an example and should not be construed as a limitation of the present application. In practical application, the arrangement order of the CE column pixel points, the AF column pixel points, the LH column pixel points and the KI column pixel points may be any order.
Sorting the columns in the fourteenth image shown in fig. 40b may result in the twelfth image shown in fig. 41a or the twelfth image shown in fig. 41 b.
In an implementation manner of sorting the columns in the fourteenth image, a second mean value of the ordinate of each column of pixel points in the fourteenth image is determined, and a second index is obtained according to the second mean value. And arranging the columns in the fourteenth image according to the descending order of the second index to obtain a twelfth image.
The second mean value refers to the mean value of the ordinates of all pixel points in each column of pixel points of the fourteenth image. The second index is obtained according to the second mean value, and the second index is positively or negatively correlated with the second mean value.
Assume that the second mean value is A2 and the second index is t2. In one implementation manner of obtaining the second index according to the second mean value, A2 and t2 satisfy the following formula:

t2 = d × A2 … formula (6)

wherein d is a non-zero real number.
In another implementation manner of obtaining the second index according to the second average value, A3、t3Satisfies the following formula:
t2=d×A2+ l … formula (7)
Wherein d is a non-0 real number and l is a real number.
In one implementation of obtaining the second index according to the second mean value, A3、t3Satisfies the following formula:
Figure BDA0002563585550000261
wherein d is a non-0 real number.
Continuing with example 3, the CE column of pixel points includes pixel point G14, so the mean value of the ordinates of the CE column of pixel points is the ordinate of pixel point G14, i.e. the second mean value of the CE column of pixel points is 1. The AF column of pixel points includes: pixel point G12, pixel point G23 and pixel point G34; the mean value of the ordinate of pixel point G12, the ordinate of pixel point G23 and the ordinate of pixel point G34 is determined to be 2, i.e. the second mean value of the AF column of pixel points is 2. The LH column of pixel points includes: pixel point G21, pixel point G32 and pixel point G43; the mean value of the ordinate of pixel point G21, the ordinate of pixel point G32 and the ordinate of pixel point G43 is determined to be 3, i.e. the second mean value of the LH column of pixel points is 3. The KI column of pixel points includes pixel point G41, so the mean value of the ordinates of the KI column of pixel points is the ordinate of pixel point G41, i.e. the second mean value of the KI column of pixel points is 4. Suppose that the second mean value is positively correlated with the second index. Since the second mean value of the CE column of pixel points < the second mean value of the AF column of pixel points < the second mean value of the LH column of pixel points < the second mean value of the KI column of pixel points, the second index of the CE column of pixel points < the second index of the AF column of pixel points < the second index of the LH column of pixel points < the second index of the KI column of pixel points. The twelfth image shown in fig. 41a can be obtained by arranging the columns in the fourteenth image in order of the second index from large to small.
Suppose that the second mean value is negatively correlated with the second index. Since the second mean value of the CE column of pixel points < the second mean value of the AF column of pixel points < the second mean value of the LH column of pixel points < the second mean value of the KI column of pixel points, the second index of the CE column of pixel points > the second index of the AF column of pixel points > the second index of the LH column of pixel points > the second index of the KI column of pixel points. The twelfth image shown in fig. 41b can be obtained by arranging the columns in the fourteenth image in order of the second index from large to small.
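The mean-based ordering can be sketched as follows. The column contents and the choice d = 2 are illustrative; the ordinate lists are stand-ins chosen so that the second means come out as 1, 2, 3 and 4, mirroring example 3.

```python
import numpy as np

# Toy version of the mean-based ordering: each column of the fourteenth
# image is represented by the ordinates of its real (non-filling) pixel
# points.
columns = {"CE": [1], "AF": [1, 2, 3], "LH": [2, 3, 4], "KI": [4]}

d = 2.0                                              # any non-zero real number
second_mean = {k: float(np.mean(v)) for k, v in columns.items()}
second_index = {k: d * m for k, m in second_mean.items()}    # formula (6)

# With d > 0 the index is positively correlated with the mean, so sorting
# by index from large to small puts the largest-mean column first.
order = sorted(columns, key=lambda k: second_index[k], reverse=True)
```

Choosing d < 0 in formula (6) flips the correlation and hence reverses the resulting column order, which is the negatively-correlated case described above.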
In another implementation of ordering columns in the fourteenth image, the columns in the fourteenth image are arranged in the second order, resulting in the twelfth image described above.
In this implementation, the diagonal lines of the second image to be denoised further include a second line segment different from the first line segment. In the case where the second line segment passes through the centers of second-type pixel points, the second straight line is the straight line of the second line segment; in the case where the second line segment does not pass through the centers of second-type pixel points, the second straight line is, among the straight lines that pass through the centers of second-type pixel points, the straight line that is parallel to the second line segment and closest to the second line segment.
For example, assume that in the second image to be denoised shown in fig. 40a, the line segment JD is the second line segment. Since the line segment JD passes through the centers of second-type pixel points, the second straight line is the straight line JD.
For another example, assume that in the second image to be denoised shown in fig. 40a, the line segment OG is the second line segment. The line segment OG does not pass through the centers of second-type pixel points, so the second straight line is, among the straight lines that pass through the centers of second-type pixel points, the straight line that is parallel to OG and closest to OG. The straight lines that are parallel to OG and pass through centers of second-type pixel points include: straight line CE, straight line AF, straight line LH and straight line KI, among which the straight lines closest to OG are: straight line AF and straight line LH. Thus, the second straight line is either the straight line AF or the straight line LH.
A pixel point whose center belongs to the second straight line is called a second index pixel point; each column of pixel points in the fourteenth image then includes one second index pixel point. Taking the order of the ordinates of the second index pixel points from large to small as the second order, or taking the order of the ordinates of the second index pixel points from small to large as the second order, the columns in the fourteenth image are arranged in the second order to obtain the twelfth image.
Continuing with example 3, in the second image to be denoised shown in fig. 40a, the line segment JD is the second line segment. Since the line segment JD passes through the centers of second-type pixel points, the second straight line is the straight line JD. The pixel points whose centers belong to the second straight line include: pixel point G14, pixel point G23, pixel point G32 and pixel point G41, i.e. the second index pixel points include: pixel point G14, pixel point G23, pixel point G32 and pixel point G41. Suppose that the second order is the order of the ordinates of the second index pixel points from large to small; since the ordinate of pixel point G14 < the ordinate of pixel point G23 < the ordinate of pixel point G32 < the ordinate of pixel point G41, arranging the columns in the fourteenth image in the second order yields the image shown in fig. 41a. Suppose that the second order is the order of the ordinates of the second index pixel points from small to large; since the ordinate of pixel point G14 < the ordinate of pixel point G23 < the ordinate of pixel point G32 < the ordinate of pixel point G41, arranging the columns in the fourteenth image in the second order yields the image shown in fig. 41b.
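The index-pixel ordering can be sketched as follows; the ordinate values are illustrative stand-ins for the ordinates of G14, G23, G32 and G41 in the example.

```python
# Toy version of the index-pixel ordering: each column of the fourteenth
# image holds exactly one second index pixel point (center on line JD in
# example 3); pair each column with that pixel's ordinate.
index_ordinate = {"CE": 1, "AF": 2, "LH": 3, "KI": 4}    # G14, G23, G32, G41

# Second order with ordinates from large to small (the fig. 41a
# arrangement in the example) and from small to large (fig. 41b).
order_desc = sorted(index_ordinate, key=index_ordinate.get, reverse=True)
order_asc = sorted(index_ordinate, key=index_ordinate.get)
```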
As an alternative embodiment, after obtaining the tenth image, the image processing apparatus executes the method shown in the flowchart of fig. 42.
4201. Extract a fourth channel in the second image to be denoised to obtain a fifteenth image.
In this embodiment, the fourth channel is a channel in the second image to be denoised that is different from the third channel. For example, the second image to be denoised contains an R channel and a G channel; in the case where the third channel is the G channel, the R channel is a fourth channel.
The fourth channel in the second image to be denoised is extracted to obtain the fifteenth image. The size of the fifteenth image is the same as the size of the second image to be denoised. In the fifteenth image, the pixel values of the pixel points of the fourth channel are the same as the pixel values of the pixel points of the fourth channel in the second image to be denoised, the pixel points other than the pixel points of the fourth channel are all fifteenth filling pixel points, and the pixel values of the fifteenth filling pixel points are all a sixteenth value. Optionally, the sixteenth value is 0.
For example, the second image to be denoised shown in fig. 43a includes R, G and B channels; the G channel in the second image to be denoised is extracted, resulting in the fifteenth image shown in fig. 43b. The pixel value of pixel point G12 in the second image to be denoised is the same as the pixel value of pixel point G12 in the fifteenth image, the pixel value of pixel point G14 in the second image to be denoised is the same as the pixel value of pixel point G14 in the fifteenth image, …, and the pixel value of pixel point G44 in the second image to be denoised is the same as the pixel value of pixel point G44 in the fifteenth image. In the fifteenth image, the pixel value of pixel point N11, the pixel value of pixel point N13, the pixel value of pixel point N22, the pixel value of pixel point N24, the pixel value of pixel point N31, the pixel value of pixel point N33, the pixel value of pixel point N42 and the pixel value of pixel point N44 are all 0.
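A minimal sketch of this kind of channel extraction follows. The RGGB mosaic layout, the 4×4 grid and the helper name `extract_channel` are assumptions for illustration, not fixed by the text; only the keep-one-channel, zero-fill-the-rest behavior comes from the description above.

```python
import numpy as np

# Keep the selected channel's pixel values and write the fill value
# (0, the optional sixteenth value) everywhere else; the output has the
# same size as the input mosaic.
def extract_channel(raw: np.ndarray, channel: str) -> np.ndarray:
    r, c = np.indices(raw.shape)
    masks = {                        # pixel positions in an assumed RGGB mosaic
        "R": (r % 2 == 0) & (c % 2 == 0),
        "G": (r + c) % 2 == 1,
        "B": (r % 2 == 1) & (c % 2 == 1),
    }
    out = np.zeros_like(raw)         # filling pixel points get 0
    out[masks[channel]] = raw[masks[channel]]
    return out

raw = np.arange(1, 17).reshape(4, 4)
g_plane = extract_channel(raw, "G")  # same size, non-G pixels filled with 0
```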
4202. And performing upsampling processing on the tenth image to obtain a sixteenth image.
In the embodiments of the present application, the up-sampling magnification of an up-sampling process is equal to the length of the image after the up-sampling process divided by the length of the image before the up-sampling process, which equals the width of the image after the up-sampling process divided by the width of the image before the up-sampling process. For example, the size of the RAW image shown in fig. 44a is 2 × 2; performing 2-times up-sampling processing on this image yields the image with size 4 × 4 shown in fig. 44b. In fig. 44b, the following pixel points are sixth filling pixel points: pixel point N12, pixel point N14, pixel point N21, pixel point N22, pixel point N23, pixel point N24, pixel point N32, pixel point N33, pixel point N41, pixel point N42, pixel point N43 and pixel point N44. The pixel values of the sixth filling pixel points are all a sixth value. Optionally, the sixth value is 0.
The implementation of the second up-sampling process may be one of the following: bilinear interpolation, nearest-neighbor interpolation, higher-order interpolation or deconvolution; the specific implementation of the up-sampling process is not limited in the present application.
In the embodiments of the present application, the up-sampling magnification of the second up-sampling process and the down-sampling magnification of the second down-sampling process are reciprocals. For example, when the down-sampling magnification of the second down-sampling process is a, the up-sampling magnification of the second up-sampling process is 1/a. Therefore, by performing the second up-sampling process on the tenth image, the size of the tenth image can be enlarged to the size of the second image to be denoised, resulting in the sixteenth image.
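The size relation above can be sketched with nearest-neighbor up-sampling, one of the interpolation choices the text lists. The helper name and the 2×2 input are illustrative assumptions.

```python
import numpy as np

# Nearest-neighbour up-sampling: the magnification equals
# output length / input length on both axes, so a factor-2 up-sampling
# turns a 2x2 image (as in fig. 44a) into a 4x4 image (fig. 44b), and
# applying the reciprocal of a prior down-sampling factor restores the
# original size.
def upsample_nearest(img: np.ndarray, factor: int) -> np.ndarray:
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

small = np.array([[1, 2],
                  [3, 4]])
big = upsample_nearest(small, 2)             # 2x2 -> 4x4
```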
4203. Take the fifteenth image and the sixteenth image as one channel each, and merge the fifteenth image and the sixteenth image to obtain a seventeenth image.
The fifteenth image and the sixteenth image are combined to obtain a seventeenth image including two channels.
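The merge is a simple channel stack; sketched below with placeholder planes (only the shapes matter here).

```python
import numpy as np

# Treat the fifteenth and sixteenth images as one channel each and stack
# them into a two-channel seventeenth image (the two planes must have
# the same size).
fifteenth = np.zeros((4, 4))      # e.g. the extracted fourth-channel plane
sixteenth = np.ones((4, 4))       # e.g. the up-sampled denoised plane
seventeenth = np.stack([fifteenth, sixteenth], axis=-1)
```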
Since the human eye is more sensitive to the information contained in the third channel than to the information contained in the fourth channel, the noise reduction effect on the second image to be denoised mainly depends on the noise reduction effect on the third channel in the second image to be denoised. By merging the fifteenth image and the sixteenth image, noise reduction of the second image to be denoised can be realized.
Since performing noise reduction processing on the ninth image based on the technical solutions provided in the embodiments of the present application can improve the noise reduction effect, the noise reduction effect on the second image to be denoised can be improved.
It should be understood that, in the case where the number of channels included in the second image to be denoised is greater than 2, noise reduction of the second image to be denoised can still be implemented according to steps 4201 to 4203.
Referring to fig. 45, fig. 45 is a flowchart illustrating an implementation method of step 4202 when the ninth image is obtained through steps 2601 to 2603 according to an embodiment of the present application.
4501. And rotating the tenth image by a fifth angle to obtain an eighteenth image.
In the embodiments of the present application, the fifth angle is an angle coterminal with the sixth angle (i.e., an angle with the same terminal side as the sixth angle), and the sixth angle and the fourth angle are opposite numbers. For example, assuming that the fourth angle is 45 degrees, the sixth angle is -45 degrees, and the fifth angle is any angle coterminal with -45 degrees.
In a possible implementation manner, rotating the tenth image by a fifth angle may be rotating the tenth image by the fifth angle around an origin of a pixel coordinate system of the tenth image, for example, the pixel coordinate system of the tenth image is xoy, and the origin of the pixel coordinate system is o. An eighteenth image is obtained by rotating the tenth image by a fifth angle around o.
In another possible implementation manner, the tenth image is rotated by a fifth angle, which may be that the tenth image is rotated by the fifth angle around the center of the tenth image, where the center of the tenth image is the intersection of two diagonal lines of the tenth image.
In yet another possible implementation, the rotating the tenth image by a fifth angle may be rotating the tenth image by the fifth angle around a coordinate axis of a pixel coordinate system of the tenth image. For example, the pixel coordinate system of the tenth image is xoy, and the abscissa axis of the pixel coordinate system is ox. An eighteenth image is obtained by rotating the tenth image by a fifth angle around ox. For another example, the pixel coordinate system of the tenth image is xoy, and the ordinate axis of the pixel coordinate system is oy. An eighteenth image is obtained by rotating the tenth image by a fifth angle around oy.
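The pivot-based rotations above can be sketched as follows; rotating about the coordinate origin or about the image center is just a different choice of pivot. The helper name and sample point are illustrative assumptions.

```python
import numpy as np

# Rotate pixel-center coordinates by `degrees` about `pivot`. A fifth
# angle coterminal with -45 degrees (e.g. 315 degrees = -45 + 360) maps
# every point to the same place as -45 degrees itself.
def rotate_points(points, degrees, pivot=(0.0, 0.0)):
    theta = np.deg2rad(degrees)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    p = np.asarray(points, dtype=float) - pivot
    return p @ rot.T + pivot

pts = np.array([[1.0, 0.0]])
a = rotate_points(pts, -45.0)
b = rotate_points(pts, 315.0)                # coterminal -> same result
```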
4502. And reducing the coordinate axis scale of the seventh pixel coordinate system by m times to obtain an eighth pixel coordinate system.
In the embodiment of the present application, the seventh pixel coordinate system is the pixel coordinate system of the eighteenth image.
M in this step is the same as m in step 2602. And reducing the abscissa axis scale and the ordinate axis scale of the seventh pixel coordinate system by m times to obtain an eighth pixel coordinate system.
4503. Determine the pixel value of each pixel point in the eighth pixel coordinate system according to the pixel values of the pixel points in the eighteenth image to obtain the sixteenth image.
As described above, when the scale of the pixel coordinate system of an image is changed, the area covered by each pixel point in the image changes accordingly. The image processing apparatus determines the pixel value of each pixel point under the eighth pixel coordinate system according to the pixel values of the pixel points in the eighteenth image to obtain the sixteenth image.
In a possible implementation, the image processing apparatus takes the mean value of the pixel values in the area covered by each pixel point under the eighth pixel coordinate system as the pixel value of that pixel point in the sixteenth image.
In other possible implementations, the pixel value of each pixel point in the sixteenth image is determined as follows:
for a pixel point whose center coincides with the center of a second-type pixel point, the pixel value of that second-type pixel point is taken as the pixel value of the pixel point;
for a pixel point whose center does not coincide with the center of any second-type pixel point, the pixel value of the pixel point is taken as an eighteenth value. Optionally, the eighteenth value is 0.
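The assignment rule can be sketched as follows. Modelling pixel centers as exact integer coordinates is a simplification of the geometry of figs. 46a–46c, and the center positions and values are illustrative.

```python
import numpy as np

# A target pixel whose center coincides with the center of a second-type
# pixel point copies that pixel's value; all other target pixels get the
# eighteenth value (0 here).
second_type_centers = {(0, 1): 9, (1, 0): 7, (1, 2): 5}   # center -> value
eighteenth_value = 0

sixteenth = np.full((2, 4), eighteenth_value)
for (r, c), v in second_type_centers.items():
    sixteenth[r, c] = v            # coinciding centers copy the value
```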
For example, the tenth image shown in fig. 46a is rotated 45 degrees counterclockwise, resulting in the eighteenth image shown in fig. 46b. The coordinate axis scale of the pixel coordinate system of the eighteenth image shown in fig. 46b is reduced by m times, and the sixteenth image shown in fig. 46c is obtained. In the sixteenth image shown in fig. 46c, the centers of the following pixel points do not coincide with the centers of second-type pixel points: pixel point N11, pixel point N13, pixel point N22, pixel point N24, pixel point N31, pixel point N33, pixel point N42 and pixel point N44, and the pixel values of these pixel points are all the eighteenth value.
In fig. 46b and fig. 46c, the center of pixel point D13 coincides with the center of pixel point G12, the center of pixel point D24 coincides with the center of pixel point G14, the center of pixel point D12 coincides with the center of pixel point G21, the center of pixel point D23 coincides with the center of pixel point G23, the center of pixel point D22 coincides with the center of pixel point G32, the center of pixel point D33 coincides with the center of pixel point G34, the center of pixel point D21 coincides with the center of pixel point G41, and the center of pixel point D32 coincides with the center of pixel point G43. Therefore, the pixel value of pixel point D13 is the same as the pixel value of pixel point G12, the pixel value of pixel point D24 is the same as the pixel value of pixel point G14, the pixel value of pixel point D12 is the same as the pixel value of pixel point G21, the pixel value of pixel point D23 is the same as the pixel value of pixel point G23, the pixel value of pixel point D22 is the same as the pixel value of pixel point G32, the pixel value of pixel point D33 is the same as the pixel value of pixel point G34, the pixel value of pixel point D21 is the same as the pixel value of pixel point G41, and the pixel value of pixel point D32 is the same as the pixel value of pixel point G43.
Obtaining the eighteenth image by rotating the tenth image, and obtaining the sixteenth image by adjusting the coordinate axis scale of the pixel coordinate system of the eighteenth image, can achieve the effects of reducing the amount of data processing and increasing the processing speed.
It should be understood that, in the drawings of the embodiments of the present application, the second image to be denoised includes R, G and B channels and the third channel is the G channel; in practical applications, however, the second image to be denoised may include channels other than R, G and B, and the third channel may not be the G channel. The drawings provided in the embodiments of the present application are only examples and should not be construed as limiting the present application.
It will be understood by those skilled in the art that, in the above methods of the specific embodiments, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
The method of the embodiments of the present application is set forth above in detail and the apparatus of the embodiments of the present application is provided below.
Referring to fig. 47, fig. 47 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application, in which the apparatus 1 includes: a first acquiring unit 11, a first processing unit 12, a second processing unit 13, wherein:
the first acquiring unit 11 is configured to acquire a first image to be noise-reduced, where the first image to be noise-reduced includes first-type pixel points;
a first processing unit 12, configured to perform downsampling on the first image to be denoised to obtain a first image, where the first image is a continuous image, the first image includes the first-type pixel points, and the ratio of the resolution of the first image to the resolution of the first image to be denoised is greater than a first threshold;
and the second processing unit 13 is configured to perform noise reduction processing on the first image to obtain a second image.
In combination with any embodiment of the present application, the first processing unit 12 is configured to:
rotating the first image to be denoised by a first angle to obtain a third image, wherein the first angle is an odd multiple of 45 degrees;
magnifying the coordinate axis scale of the first pixel coordinate system by n times to obtain a second pixel coordinate system, wherein the first pixel coordinate system is the pixel coordinate system of the third image;
and determining the pixel value of each pixel point in the second pixel coordinate system according to the pixel value of the pixel point in the third image to obtain the first image.
In combination with any embodiment of the present application, the first type of pixel belongs to a first channel, the first to-be-denoised image further includes a second channel different from the first channel, and the apparatus further includes:
a first extracting unit 14, configured to extract the second channel in the first image to be denoised to obtain a fourth image;
a third processing unit 15, configured to perform upsampling processing on the second image to obtain a fifth image, where a size of the fifth image is the same as that of the first image to be noise-reduced;
a first merging unit 16, configured to take the fourth image and the fifth image as one channel respectively, and merge the fourth image and the fifth image to obtain a sixth image.
With reference to any embodiment of the present application, the third processing unit is configured to:
rotating the second image by a second angle to obtain a seventh image, where the second angle is an angle coterminal with a third angle, and the third angle and the first angle are opposite numbers;
reducing the coordinate axis scale of a third pixel coordinate system by the factor of n to obtain a fourth pixel coordinate system, wherein the third pixel coordinate system is the pixel coordinate system of the seventh image;
and determining the pixel value of each pixel point in the fourth pixel coordinate system according to the pixel value of the pixel point in the seventh image to obtain the fifth image.
With reference to any embodiment of the present application, the first image to be noise-reduced includes: the first pixel point, the second pixel point, the third pixel point and the fourth pixel point;
the coordinate of the first pixel point is (i, j), the coordinate of the second pixel point is (i +1, j), the coordinate of the third pixel point is (i, j +1), the coordinate of the fourth pixel point is (i +1, j +1), wherein i and j are positive integers;
under the condition that the first pixel point is the first-class pixel point, the second pixel point and the third pixel point are not the first-class pixel point, and the fourth pixel point is the first-class pixel point;
and under the condition that the first pixel point is not the first-class pixel point, the second pixel point and the third pixel point are both the first-class pixel point, and the fourth pixel point is not the first-class pixel point.
In combination with any embodiment of the present application, the arrangement manner of the pixel points in the first image to be denoised is a bayer array.
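The four-pixel rule stated for the first-type pixel points describes a checkerboard, as produced by the G samples of a Bayer array; a sketch follows. Which parity of (i + j) holds the first-type class is an assumption, exposed here as a parameter.

```python
# (i, j) and (i+1, j+1) always agree on first-type membership, while
# (i+1, j) and (i, j+1) always take the opposite value.
def is_first_type(i: int, j: int, on_even_sum: bool = True) -> bool:
    return ((i + j) % 2 == 0) == on_even_sum

# The rule holds for every (i, j):
ok = all(
    is_first_type(i, j) == is_first_type(i + 1, j + 1)
    and is_first_type(i, j) != is_first_type(i + 1, j)
    and is_first_type(i, j) != is_first_type(i, j + 1)
    for i in range(1, 5) for j in range(1, 5)
)
```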
Referring to fig. 48, fig. 48 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present application, where the apparatus 2 includes: a second obtaining unit 21, a second extracting unit 22, a fourth processing unit 23 and a fifth processing unit 24, wherein:
the second obtaining unit 21 is configured to obtain a second image to be noise-reduced, where the second image to be noise-reduced includes second-type pixel points;
a second extracting unit 22, configured to extract a third channel in the second image to be denoised to obtain an eighth image, where the third channel is the channel with the largest number of pixel points among the channels included in the second image to be denoised;
a fourth processing unit 23, configured to perform downsampling on the eighth image to obtain a ninth image, where the ninth image is a continuous image, the ninth image includes the second type of pixel points, and a ratio of a resolution of the ninth image to a resolution of the second image to be noise-reduced is greater than a second threshold;
and a fifth processing unit 24, configured to perform noise reduction processing on the ninth image to obtain a tenth image.
With reference to any embodiment of the present application, the fourth processing unit 23 is configured to:
rotating the eighth image by a fourth angle to obtain an eleventh image, wherein the fourth angle is an odd multiple of 45 degrees;
magnifying the coordinate axis scale of a fifth pixel coordinate system by m times to obtain a sixth pixel coordinate system, wherein the fifth pixel coordinate system is the pixel coordinate system of the eleventh image;
and determining the pixel value of each pixel point in the sixth pixel coordinate system according to the pixel value of the pixel point in the eleventh image to obtain the ninth image.
With reference to any embodiment of the present application, the fourth processing unit 23 is configured to:
constructing a twelfth image, where the twelfth image includes the second-type pixel points in the second image to be denoised;
and reducing the pixel value in the twelfth image by s times to obtain the ninth image.
With reference to any embodiment of the present application, a diagonal line of the second image to be denoised includes a first line segment;
the fourth processing unit 23 is configured to:
arranging at least one second-type pixel point whose centers belong to the same first diagonal straight line into one row of pixel points of the image in order of abscissa from small to large to construct a thirteenth image, where the first diagonal straight lines include: the straight line of the first line segment and straight lines parallel to the first line segment;
sorting the rows in the thirteenth image to obtain the twelfth image; or,
arranging at least one second-type pixel point whose centers belong to the same first diagonal straight line into one column of pixel points of the image in order of abscissa from small to large to construct a fourteenth image, where the first diagonal straight lines include: the straight line of the first line segment and straight lines parallel to the first line segment;
and sequencing columns in the fourteenth image to obtain the twelfth image.
With reference to any embodiment of the present application, the fourth processing unit 23 is configured to:
determining a first mean value of the ordinate of each row of pixel points in the thirteenth image, and obtaining a first index according to the first mean value, wherein the first mean value and the first index are in positive correlation or negative correlation;
and arranging the rows in the thirteenth image according to the sequence of the first indexes from large to small to obtain the twelfth image.
In combination with any embodiment of the present application, the diagonal line of the second image to be denoised further includes a second line segment different from the first line segment;
the fourth processing unit 23 is configured to:
arranging the rows in the thirteenth image according to a first order to obtain the twelfth image, where the first order is the order of the ordinates of first index pixel points from large to small, or the order of the ordinates of the first index pixel points from small to large, and a first index pixel point is a pixel point whose center belongs to a first straight line;
under the condition that the second line segment passes through the center of the second type pixel point, the first straight line is a straight line passing through the second line segment;
and in the case where the second line segment does not pass through the centers of the second-type pixel points, the first straight line is, among the straight lines that pass through the centers of the second-type pixel points, the straight line that is parallel to the second line segment and closest to the second line segment.
With reference to any embodiment of the present application, the fourth processing unit 23 is configured to:
determining a second mean value of the ordinates of each column of pixel points in the fourteenth image, and obtaining a second index according to the second mean value, where the second mean value and the second index are positively or negatively correlated;
and arranging the columns in the fourteenth image according to the descending order of the second index to obtain the twelfth image.
In combination with any embodiment of the present application, the diagonal line of the second image to be denoised further includes a second line segment different from the first line segment;
the fourth processing unit 23 is configured to:
arranging the columns in the fourteenth image according to a second order to obtain the twelfth image, where the second order is the order of the ordinates of second index pixel points from large to small, or the order of the ordinates of the second index pixel points from small to large, and a second index pixel point is a pixel point whose center belongs to a second straight line;
under the condition that the second line segment passes through the center of the second type pixel point, the second straight line is a straight line passing through the second line segment;
and in the case where the second line segment does not pass through the centers of the second-type pixel points, the second straight line is, among the straight lines that pass through the centers of the second-type pixel points, the straight line that is parallel to the second line segment and closest to the second line segment.
With reference to any embodiment of the present application, the second image to be denoised further includes a fourth channel different from the third channel, and the second extracting unit 22 is further configured to extract the fourth channel in the second image to be denoised to obtain a fifteenth image;
the device 2 further comprises:
a sixth processing unit 25, configured to perform upsampling on the tenth image to obtain a sixteenth image, where a size of the sixteenth image is the same as that of the second image to be noise-reduced;
a second merging unit 26, configured to take the fifteenth image and the sixteenth image as one channel, respectively, and merge the fifteenth image and the sixteenth image to obtain a seventeenth image.
In combination with any embodiment of the present application, the sixth processing unit 25 is configured to:
rotating the tenth image by a fifth angle to obtain an eighteenth image, where the fifth angle is an angle coterminal with a sixth angle, and the sixth angle and the fourth angle are opposite numbers;
reducing the coordinate axis scale of a seventh pixel coordinate system by m times to obtain an eighth pixel coordinate system, wherein the seventh pixel coordinate system is the pixel coordinate system of the eighteenth image;
and determining the pixel value of each pixel point in the eighth pixel coordinate system according to the pixel values of the pixel points in the eighteenth image to obtain the sixteenth image.
In combination with any embodiment of the present application, the second image to be noise-reduced includes: a fifth pixel point, a sixth pixel point, a seventh pixel point and an eighth pixel point;
the coordinates of the fifth pixel point are (p, q), the coordinates of the sixth pixel point are (p +1, q), the coordinates of the seventh pixel point are (p, q +1), the coordinates of the eighth pixel point are (p +1, q +1), and both p and q are positive integers;
under the condition that the fifth pixel point is the second-type pixel point, the sixth pixel point and the seventh pixel point are not the second-type pixel point, and the eighth pixel point is the second-type pixel point;
and under the condition that the fifth pixel point is not the second-type pixel point, the sixth pixel point and the seventh pixel point are both the second-type pixel point, and the eighth pixel point is not the second-type pixel point.
In combination with any embodiment of the present application, the arrangement manner of the pixel points in the second to-be-denoised image is a bayer array.
In some embodiments, the functions of, or the modules included in, the apparatus provided in the embodiments of the present application may be used to execute the methods described in the method embodiments above; for specific implementation, reference may be made to the description of those method embodiments, which, for brevity, is not repeated here.
Fig. 49 is a schematic diagram of a hardware structure of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus 3 includes a processor 31, a memory 32, an input device 33, and an output device 34. The processor 31, the memory 32, the input device 33 and the output device 34 are coupled by a connector, which includes various interfaces, transmission lines or buses, etc., and the embodiment of the present application is not limited thereto. It should be appreciated that in various embodiments of the present application, coupled refers to being interconnected in a particular manner, including being directly connected or indirectly connected through other devices, such as through various interfaces, transmission lines, buses, and the like.
The processor 31 may be one or more Graphics Processing Units (GPUs); in the case that the processor 31 is one GPU, the GPU may be a single-core GPU or a multi-core GPU. Alternatively, the processor 31 may be a processor group composed of a plurality of GPUs coupled to each other through one or more buses. Alternatively, the processor may be another type of processor; the embodiments of the present application are not limited in this respect.
The memory 32 may be used to store computer program instructions, as well as various types of computer program code, including program code for executing aspects of the present application. Optionally, the memory includes, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM), or Compact Disc Read-Only Memory (CD-ROM), and is used for related instructions and data.
The input means 33 are for inputting data and/or signals and the output means 34 are for outputting data and/or signals. The input device 33 and the output device 34 may be separate devices or may be an integral device.
It is understood that, in the embodiment of the present application, the memory 32 may be used to store not only the relevant instructions, but also relevant data, for example, the memory 32 may be used to store the first image to be noise-reduced acquired through the input device 33, or the memory 32 may also be used to store the second image obtained through the processor 31, and the like, and the embodiment of the present application is not limited to the data specifically stored in the memory.
It will be appreciated that fig. 49 shows only a simplified design of an image processing apparatus. In practical applications, the image processing apparatuses may further include other necessary components, including but not limited to any number of input/output devices, processors, memories, etc., and all image processing apparatuses that can implement the embodiments of the present application are within the scope of the present application.
Fig. 50 is a schematic diagram of a hardware structure of another image processing apparatus according to an embodiment of the present application. The image processing apparatus 4 includes a processor 41, a memory 42, an input device 43, and an output device 44. The processor 41, the memory 42, the input device 43 and the output device 44 are coupled by a connector, which includes various interfaces, transmission lines or buses, etc., and the embodiment of the present application is not limited thereto. It should be appreciated that in various embodiments of the present application, coupled refers to being interconnected in a particular manner, including being directly connected or indirectly connected through other devices, such as through various interfaces, transmission lines, buses, and the like.
The processor 41 may be one or more Graphics Processing Units (GPUs); in the case that the processor 41 is one GPU, the GPU may be a single-core GPU or a multi-core GPU. Alternatively, the processor 41 may be a processor group composed of a plurality of GPUs coupled to each other through one or more buses. Alternatively, the processor may be another type of processor; the embodiments of the present application are not limited in this respect.
The memory 42 may be used to store computer program instructions, as well as various types of computer program code, including program code for executing aspects of the present application. Optionally, the memory includes, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM), or Compact Disc Read-Only Memory (CD-ROM), and is used for related instructions and data.
The input means 43 are for inputting data and/or signals and the output means 44 are for outputting data and/or signals. The input device 43 and the output device 44 may be separate devices or may be an integral device.
It is understood that, in the embodiment of the present application, the memory 42 may be used to store not only the relevant instructions, but also relevant data, for example, the memory 42 may be used to store the second image to be denoised acquired by the input device 43, or the memory 42 may also be used to store the tenth image obtained by the processor 41, and so on, and the embodiment of the present application is not limited to the data specifically stored in the memory.
It will be appreciated that fig. 50 shows only a simplified design of an image processing apparatus. In practical applications, the image processing apparatuses may further include other necessary components, including but not limited to any number of input/output devices, processors, memories, etc., and all image processing apparatuses that can implement the embodiments of the present application are within the scope of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It is also clear to those skilled in the art that the descriptions of the various embodiments of the present application have different emphasis, and for convenience and brevity of description, the same or similar parts may not be repeated in different embodiments, so that the parts that are not described or not described in detail in a certain embodiment may refer to the descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Versatile Disc (DVD)), or a semiconductor medium (e.g., a Solid State Drive (SSD)), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium includes: various media that can store program codes, such as a read-only memory (ROM) or a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (24)

1. An image processing method, characterized in that the method comprises:
acquiring a first image to be denoised, wherein the first image to be denoised comprises first-class pixel points;
performing downsampling processing on the first image to be denoised to obtain a first image, wherein the first image is a continuous image, the first image comprises the first type of pixel points, and the ratio of the resolution of the first image to the resolution of the first image to be denoised is greater than a first threshold;
and carrying out noise reduction processing on the first image to obtain a second image.
2. The method according to claim 1, wherein the down-sampling the first image to be noise-reduced to obtain a first image comprises:
rotating the first image to be denoised by a first angle to obtain a third image, wherein the first angle is an odd multiple of 45 degrees;
magnifying the coordinate axis scale of the first pixel coordinate system by n times to obtain a second pixel coordinate system, wherein the first pixel coordinate system is the pixel coordinate system of the third image;
and determining the pixel value of each pixel point in the second pixel coordinate system according to the pixel value of the pixel point in the third image to obtain the first image.
3. The method according to claim 1 or 2, wherein the first type of pixel belongs to a first channel, the first image to be noise-reduced further comprises a second channel different from the first channel, and the method further comprises:
extracting the second channel in the first image to be denoised to obtain a fourth image;
performing upsampling processing on the second image to obtain a fifth image, wherein the size of the fifth image is the same as that of the first image to be denoised;
and respectively taking the fourth image and the fifth image as a channel, and combining the fourth image and the fifth image to obtain a sixth image.
4. The method of claim 3 when claim 3 depends on claim 2, wherein the upsampling the second image to obtain a fifth image comprises:
rotating the second image by a second angle to obtain a seventh image, wherein the second angle has the same terminal side as a third angle, and the third angle is the opposite number of the first angle;
reducing the coordinate axis scale of a third pixel coordinate system by a factor of n to obtain a fourth pixel coordinate system, wherein the third pixel coordinate system is the pixel coordinate system of the seventh image;
and determining the pixel value of each pixel point in the fourth pixel coordinate system according to the pixel value of the pixel point in the seventh image to obtain the fifth image.
5. The method according to any one of claims 1 to 4, characterized in that the first image to be denoised comprises: a first pixel point, a second pixel point, a third pixel point and a fourth pixel point;
the coordinate of the first pixel point is (i, j), the coordinate of the second pixel point is (i +1, j), the coordinate of the third pixel point is (i, j +1), the coordinate of the fourth pixel point is (i +1, j +1), wherein i and j are positive integers;
under the condition that the first pixel point is the first-class pixel point, the second pixel point and the third pixel point are not the first-class pixel point, and the fourth pixel point is the first-class pixel point;
and under the condition that the first pixel point is not the first-class pixel point, the second pixel point and the third pixel point are both the first-class pixel point, and the fourth pixel point is not the first-class pixel point.
6. The method according to claim 5, wherein the arrangement of the pixel points in the first image to be denoised is a Bayer array.
7. An image processing method, characterized in that the method comprises:
acquiring a second image to be denoised, wherein the second image to be denoised comprises second-class pixel points;
extracting a third channel from the second image to be denoised to obtain an eighth image, wherein the third channel is the channel containing the largest number of pixel points in the second image to be denoised;
performing downsampling processing on the eighth image to obtain a ninth image, wherein the ninth image is a continuous image and comprises the second type of pixel points, and the ratio of the resolution of the ninth image to the resolution of the second image to be denoised is greater than a second threshold;
and carrying out noise reduction processing on the ninth image to obtain a tenth image.
8. The method according to claim 7, wherein the downsampling the eighth image to obtain a ninth image comprises:
rotating the eighth image by a fourth angle to obtain an eleventh image, wherein the fourth angle is an odd multiple of 45 degrees;
magnifying the coordinate axis scale of a fifth pixel coordinate system by m times to obtain a sixth pixel coordinate system, wherein the fifth pixel coordinate system is the pixel coordinate system of the eleventh image;
and determining the pixel value of each pixel point in the sixth pixel coordinate system according to the pixel value of the pixel point in the eleventh image to obtain the ninth image.
9. The method according to claim 7, wherein the down-sampling the eighth image to obtain a ninth image comprises:
constructing a twelfth image, wherein the twelfth image comprises the second-type pixel points in the second image to be denoised;
and reducing the pixel values in the twelfth image by a factor of s to obtain the ninth image.
10. The method of claim 9, wherein a diagonal line of the second image to be denoised comprises a first line segment;
the constructing the twelfth image comprises:
arranging at least one second-type pixel point whose center belongs to a same first diagonal line into one row of pixel points, in ascending order of abscissa, to construct a thirteenth image, wherein the first diagonal line comprises: a straight line passing through the first line segment, or a straight line parallel to the first line segment;
sorting the rows in the thirteenth image to obtain the twelfth image; or,
arranging at least one second-type pixel point whose center belongs to a same first diagonal line into one column of pixel points, in ascending order of abscissa, to construct a fourteenth image, wherein the first diagonal line comprises: a straight line passing through the first line segment, or a straight line parallel to the first line segment;
and sorting the columns in the fourteenth image to obtain the twelfth image.
11. The method of claim 10, wherein the sorting the rows in the thirteenth image to obtain the twelfth image comprises:
determining a first mean value of the ordinates of each row of pixel points in the thirteenth image, and obtaining a first index according to the first mean value, wherein the first mean value and the first index are positively or negatively correlated;
and arranging the rows in the thirteenth image in descending order of the first index to obtain the twelfth image.
12. The method of claim 10, wherein the diagonal lines of the second image to be denoised further comprise a second line segment different from the first line segment;
the sorting the rows in the thirteenth image to obtain the twelfth image comprises:
arranging the rows in the thirteenth image in a first order to obtain the twelfth image, wherein the first order is the descending order, or the ascending order, of the ordinates of first-index pixel points, and a first-index pixel point is a pixel point whose center belongs to a first straight line;
under the condition that the second line segment passes through the centers of second-type pixel points, the first straight line is the straight line passing through the second line segment;
and under the condition that the second line segment does not pass through the centers of second-type pixel points, the first straight line is, among the straight lines that are parallel to the second line segment and pass through the centers of second-type pixel points, the straight line closest to the second line segment.
13. The method of claim 10, wherein the sorting the columns in the fourteenth image to obtain the twelfth image comprises:
determining a second mean value of the ordinates of each column of pixel points in the fourteenth image, and obtaining a second index according to the second mean value, wherein the second mean value and the second index are positively or negatively correlated;
and arranging the columns in the fourteenth image in descending order of the second index to obtain the twelfth image.
14. The method of claim 10, wherein the diagonal lines of the second image to be denoised further comprise a second line segment different from the first line segment;
the sorting the columns in the fourteenth image to obtain the twelfth image comprises:
arranging the columns in the fourteenth image in a second order to obtain the twelfth image, wherein the second order is the descending order, or the ascending order, of the ordinates of second-index pixel points, and a second-index pixel point is a pixel point whose center belongs to a second straight line;
under the condition that the second line segment passes through the centers of second-type pixel points, the second straight line is the straight line passing through the second line segment;
and under the condition that the second line segment does not pass through the centers of second-type pixel points, the second straight line is, among the straight lines that are parallel to the second line segment and pass through the centers of second-type pixel points, the straight line closest to the second line segment.
15. The method according to any one of claims 7 to 14, wherein the second image to be noise-reduced further comprises a fourth channel different from the third channel, the method further comprising:
extracting the fourth channel in the second image to be denoised to obtain a fifteenth image;
performing upsampling processing on the tenth image to obtain a sixteenth image, wherein the size of the sixteenth image is the same as that of the second image to be denoised;
and respectively taking the fifteenth image and the sixteenth image as a channel, and combining the fifteenth image and the sixteenth image to obtain a seventeenth image.
16. The method according to claim 15 when claim 15 depends on claim 8, wherein the upsampling the tenth image to obtain a sixteenth image comprises:
rotating the tenth image by a fifth angle to obtain an eighteenth image, wherein the fifth angle has the same terminal side as a sixth angle, and the sixth angle is the opposite number of the fourth angle;
reducing the coordinate axis scale of a seventh pixel coordinate system by a factor of m to obtain an eighth pixel coordinate system, wherein the seventh pixel coordinate system is the pixel coordinate system of the eighteenth image;
and determining the pixel value of each pixel point in the seventh pixel coordinate system according to the pixel value of the pixel point in the eighteenth image to obtain the sixteenth image.
17. The method according to any one of claims 7 to 16, wherein the second image to be noise-reduced comprises: a fifth pixel point, a sixth pixel point, a seventh pixel point and an eighth pixel point;
the coordinates of the fifth pixel point are (p, q), the coordinates of the sixth pixel point are (p +1, q), the coordinates of the seventh pixel point are (p, q +1), the coordinates of the eighth pixel point are (p +1, q +1), and both p and q are positive integers;
under the condition that the fifth pixel point is the second-type pixel point, the sixth pixel point and the seventh pixel point are not the second-type pixel point, and the eighth pixel point is the second-type pixel point;
and under the condition that the fifth pixel point is not the second-type pixel point, the sixth pixel point and the seventh pixel point are both the second-type pixel point, and the eighth pixel point is not the second-type pixel point.
18. The method according to any one of claims 7 to 17, wherein the pixel points in the second image to be denoised are arranged in a Bayer array.
19. An image processing apparatus, characterized in that the apparatus comprises:
the device comprises a first obtaining unit, a second obtaining unit and a processing unit, wherein the first obtaining unit is used for obtaining a first image to be denoised, and the first image to be denoised comprises first type pixel points;
the first processing unit is used for performing downsampling processing on the first image to be denoised to obtain a first image, wherein the first image is a continuous image, the first image comprises the first type of pixel points, and the ratio of the resolution of the first image to the resolution of the first image to be denoised is greater than a first threshold;
and the second processing unit is used for carrying out noise reduction processing on the first image to obtain a second image.
20. An image processing apparatus, characterized in that the apparatus comprises:
the second acquiring unit is used for acquiring a second image to be denoised, wherein the second image to be denoised comprises second-type pixel points;
the second extraction unit is used for extracting a third channel from the second image to be denoised to obtain an eighth image, wherein the third channel is the channel containing the largest number of pixel points in the second image to be denoised;
a fourth processing unit, configured to perform downsampling on the eighth image to obtain a ninth image, where the ninth image is a continuous image, the ninth image includes the second type of pixel points, and a ratio of a resolution of the ninth image to a resolution of the second to-be-denoised image is greater than a second threshold;
and the fifth processing unit is used for carrying out noise reduction processing on the ninth image to obtain a tenth image.
21. An electronic device, comprising: a processor and a memory for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any one of claims 1 to 6.
22. A computer-readable storage medium, in which a computer program is stored, the computer program comprising program instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 6.
23. An electronic device, comprising: a processor and a memory for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any one of claims 7 to 18.
24. A computer-readable storage medium, in which a computer program is stored, the computer program comprising program instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 7 to 18.
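The rearrangement-based downsampling of claims 9 to 11, which collects the second-type pixel points whose centers lie on the same diagonal into one row in ascending order of abscissa and then orders the rows, can be sketched as follows. This is an illustrative reading, not the claimed implementation: the function name, the choice of anti-diagonals indexed by x + y, and emitting rows in ascending diagonal order (one possible "first index") are all assumptions.

```python
def build_thirteenth_image(img, is_second_type):
    """Group second-type pixels by the anti-diagonal their centers lie
    on (index x + y), each row ordered by ascending abscissa; rows are
    then emitted in ascending diagonal order."""
    rows = {}
    height, width = len(img), len(img[0])
    for x in range(width):        # x outer => ascending abscissa per diagonal
        for y in range(height):
            if is_second_type(x, y):
                rows.setdefault(x + y, []).append(img[y][x])
    return [rows[k] for k in sorted(rows)]
```

Because the second-type pixel points form a checkerboard, consecutive occupied diagonals are two index units apart, so every occupied diagonal contributes one dense row of the rearranged image.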
CN202010615414.0A 2020-06-30 2020-06-30 Image processing method and device, electronic device and storage medium Withdrawn CN111798393A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010615414.0A CN111798393A (en) 2020-06-30 2020-06-30 Image processing method and device, electronic device and storage medium


Publications (1)

Publication Number Publication Date
CN111798393A true CN111798393A (en) 2020-10-20

Family

ID=72809631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010615414.0A Withdrawn CN111798393A (en) 2020-06-30 2020-06-30 Image processing method and device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN111798393A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815547A (en) * 2020-06-30 2020-10-23 深圳市慧鲤科技有限公司 Image processing method and device, electronic device and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1670765A (en) * 1999-06-16 2005-09-21 西尔弗布鲁克研究股份有限公司 Method of sharpening image using luminance channel
US20070025643A1 (en) * 2005-07-28 2007-02-01 Olivier Le Meur Method and device for generating a sequence of images of reduced size
CN101478692A (en) * 2008-12-25 2009-07-08 昆山锐芯微电子有限公司 Test method and system for image sensor dynamic resolution
CN103716606A (en) * 2013-12-30 2014-04-09 上海富瀚微电子有限公司 Bayer domain image downsampling method and device and camera equipment
CN106713877A (en) * 2017-01-23 2017-05-24 上海兴芯微电子科技有限公司 Interpolating method and apparatus of Bayer-format images
CN107590500A (en) * 2017-07-20 2018-01-16 济南中维世纪科技有限公司 A kind of color recognizing for vehicle id method and device based on color projection classification
CN107945119A (en) * 2017-11-02 2018-04-20 天津大学 Correlated noise method of estimation in image based on bayer-pattern
CN108171657A (en) * 2018-01-26 2018-06-15 上海富瀚微电子股份有限公司 Image interpolation method and device
CN111798497A (en) * 2020-06-30 2020-10-20 深圳市慧鲤科技有限公司 Image processing method and device, electronic device and storage medium
CN111815547A (en) * 2020-06-30 2020-10-23 深圳市慧鲤科技有限公司 Image processing method and device, electronic device and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KUO-LIANG CHUNG et al.: "Novel and Optimal Luma Modification-Based Chroma Downsampling for Bayer Color Filter Array Images", IEEE Open Journal of Circuits and Systems, vol. 1, pages 48-59, XP011794018, DOI: 10.1109/OJCAS.2020.2996624
Zhu Tonghua: "Research on CFA Image Interpolation and Denoising", China Masters' Theses Full-text Database, Information Science and Technology, pages 138-1959
Wang Hao: "Research on Dynamic Range Improvement and Image Optimization Algorithms for a Rocket-borne Visible-light Imaging System", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II, pages 031-10



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201020